AI & Warfare: A New Era for Arms Control & Deterrence, with Paul Scharre

Then in the late 2000s some of that started to seem more real as we saw autonomy in robotics become more mature. I experienced a little bit of this in the military, where I had been deployed and seen robots being used in Iraq. Certainly drones were just becoming relevant at that time, so it seemed to me that there was this moment when the technology was becoming more powerful. Exactly what that was going to look like was very unclear, to me at least, but it was going to have really important implications not just for the United States but for countries all over the world, so I looked for opportunities to lean into that work whenever possible.

ANJA KASPERSEN: This is a great segue to your latest book, Four Battlegrounds: Power in the Age of Artificial Intelligence, where you outline your thoughts on how technology is changing the concept of power, what it means, the levers of power, and not least who holds power in this new context and what are the different domains in which we see this new power and these new shapes of power being exercised. Can you tell us more about your book and also your core theses that run through the book?

PAUL SCHARRE: The book explores, as you said, how artificial intelligence is changing global power. AI is a general-purpose technology with a wide range of applications. As we saw with previous general-purpose technologies like electricity or the internal combustion engine they not only had a wide range of economic applications but in fact led to these major societal and geopolitical transformations in the First and Second Industrial Revolutions.

During the Industrial Revolution nations rose and fell on the global stage based on how rapidly they industrialized, but also the key metrics of power changed. Coal and steel production became key inputs of national power. Oil became a geostrategic resource. We had countries that wanted to fight wars over it. The question that motivated me was, what is that in the age of AI? Not only who is best positioned to succeed in an age of AI and to harness this technology, but also what are the things that we should be measuring? What are the key inputs of AI power?

What the book concludes is that there are these four key battlegrounds—data, computing hardware, human talent, and institutions, the organizations that are needed to take these raw inputs of data, computer chips, and human talent, and then turn them into useful AI applications. Whoever is able to lead in those areas is going to be best positioned to lead in an AI-driven future.

ANJA KASPERSEN: Are there historical examples that you draw upon in this book to take us—you mentioned electricity and public utilities, but the technologies we are talking about are more about those who hold the power over them, those who develop them, and they are not public utility companies although it can be argued that these technologies they have developed have become part of our core public infrastructure. Where do you see history play into this and where do you view that notion of whether or not it is a public utility in that context and also thinking about a warfighting context?

PAUL SCHARRE: I think there are lots of historical comparisons, all of which have value. Certainly there are some parallels to the First and Second Industrial Revolutions but also to the nuclear age, to the Space Race, and to the beginnings of the Internet. There are elements that you can take your pick of in each of those as different lessons to draw upon.

Maybe one relevant comparison is that during the Cold War the Space Race had an element of this very intense geopolitical competition in a new technology that was very dual use, that had military applications certainly that the U.S. and the USSR were interested in, but also lots of civilian and commercial applications. One big difference is that that big push for investment into space was led by governments and this is being driven by the private sector.

That is a huge difference in a variety of ways. Certainly it shapes how the technology is being developed and how we think about regulating it in that this is being driven by the private sector, in fact not just by the private sector in general but by a handful of companies at the leading edge. It also I think shapes how the government is trying to adapt this technology for national security purposes for the military or the intelligence community because they cannot just go out and grab AI. They have to find ways to import it into, say, the defense or intelligence community, and that is not very easy for the government to do, and that has been a big challenge as well.

I think that has big implications for how we think about the geopolitical dimensions of AI because it is not just who is leading nationally and which companies are in the lead. It is also how fast are those national institutions able to then import AI into, say, their military to be able to use that technology.

ANJA KASPERSEN: You mentioned dual use, which is an interesting concept. If you study war and especially the military-industrial complex, which you have for many years and you have been involved in this space on the government side, historically that was a term that was often used when public money was used for military innovation purposes and you needed to demonstrate a civilian use side of it, so you talked about dual-use technologies.

But as you said, currently many of these technologies are being developed in the commercial domain without that same burden of proof to demonstrate safe military use. Yet militaries around the world find themselves at the receiving end, having to procure most of these technologies, perhaps without the same due diligence and the same public criteria that would have been applied in the past. Where do you see this, and what does this transformation mean for the larger complex of national security?

PAUL SCHARRE: There historically has been this flow of technology from the defense community out into the commercial space, and we are seeing in this case that this is reversed. That is a big challenge actually for militaries because their institutions, their procurement processes and acquisition systems, are not designed to flow in that direction.

It is part of this broader shift we have seen over the last 60 years or so, where if you go back to the 1960s the U.S. government in particular dominated technology innovation. Within the United States the federal government controlled over two-thirds of research and development in the United States, and globally the United States controlled about 70 percent of global research and development. The combined effect of that was that the Defense Department alone controlled 36 percent of global R&D spending in the entire world, so the Defense Department could drive technological innovation and say, “We think rockets are important,” for example, and that was a major driver of then how technology evolved globally.

That has totally changed today. Within the United States the government share of R&D spending has shrunk dramatically, the private sector has taken up the slack, and the roles of government and the private sector have flipped, and globally the U.S. share has fallen considerably through globalization. So the Defense Department share of global R&D has shrunk from 36 percent in 1960 to 3 percent today. It is much, much smaller.

The Defense Department really is not then prepared for a world where they have to import all of this commercial technology. They don’t know how to do that very effectively. They have been working on it for the last couple of years creating organizations like the Defense Innovation Unit, their outpost out in Silicon Valley, and other types of organizations across the U.S. military, but it remains a pretty significant challenge for them, and there is a lot of fear in the U.S. military that they are going to fall behind competitors like China, who might have the ability to import the same commercial technologies that frankly Chinese tech companies also have access to.

ANJA KASPERSEN: So procurement has become a strategic issue.

PAUL SCHARRE: It is. Certainly the rising cost of major weapons systems and platforms—bombers and ships—has been a big problem for the U.S. Defense Department over the last several decades. The costs of these systems have become so astronomical that it shrinks the quantities that the U.S. military can buy, sometimes to absurd levels. The United States bought three DDG 1000 destroyers, the next-generation destroyer, for a total cost of $10 billion. It is just not very useful to have only three ships of a particular class. Yet this is made so much worse because they do not have good avenues to bring in commercial tech. It has been a big challenge.

ANJA KASPERSEN: In your book, and I really appreciate this point, you carefully avoid using the term “arms race.” Rather you refer to the powers that be and use the phrase “are locked in a race to lead in AI and write the rules of the next century to control the future of global power and security.” It is a very interesting phrase because it is of course a reflection of what we are seeing. It may not be a race toward the classical escalatory behavior that we are familiar with, although there is an element of that as well, and certainly AI can lead to some of these escalatory patterns, but those who write the rules also lead in this field. Can you elaborate on this point, which actually comes through throughout your book I would say?

PAUL SCHARRE: This meme, if you will, of an AI race or AI arms race is one that is always very contentious. It is interesting to me because the term “AI arms race” gets thrown around a lot casually, sometimes by AI experts who are warning of the dangers of an AI arms race and are saying, “We shouldn’t have an AI arms race; that’s bad.” Sometimes I have heard it from Defense officials saying, “We are in an AI arms race. We need to win it.”

To be a little bit pedantic and like that boring expert guy, I will say that we are not in an arms race. There is certainly a technology competition underway that is intense geopolitically. It is intense among the tech companies. The tech companies themselves are racing ahead to spend enormous amounts of money on AI to build next-generation systems, and it is not just confined to the United States. I think there is a strong geopolitical dimension between the United States, China, and Europe as at least the three big centers of gravity to lead in AI and to regulate AI.

But if you just look at the arms dimension, just the defense spending, it is tiny. They are saying a lot, but if you actually look at the spending, the best independent estimates are that the U.S. military is spending about 1 percent of its budget on AI. That is not an arms race. That is not even a priority. That is literally minimal. That is an important distinction to make because if you listen to what defense leaders are sometimes saying—they talk about AI a lot—but if you look at what they are doing, it is not a priority in practice.

ANJA KASPERSEN: What do you attribute that to?

PAUL SCHARRE: I think it is a combination. The defense budget is driven not by strategy but by inertia more than anything, and the defense budget mostly looks like what it was in the years prior. Some of that is because it is very challenging to make rapid adjustments in things like shipbuilding or building aircraft, programs that exist over decades, over long time horizons, so it is typically very hard to squeeze in space for new types of technologies, and AI is one of them. Combine that with the fact that it is coming out of the commercial sector, which is harder for the military to adopt, and then I think there is this sense too of, “What do you do with AI?” People are excited about it, but how do they use it effectively? What is the big game-changing application for AI? I think that is actually still a little bit unclear in the military space.

ANJA KASPERSEN: We often talk about AI as this monolith on its own. Meredith Whittaker calls it a “marketing term.” It is a term that is meant to describe a lot of features that we embed into existing structures, but those features, those systems, are of course built on something very important, which is data and our ability to transmit data, the speed with which we transmit data, and the speed with which we have uptake of that data and use it in our decision-making processes, which speaks to a much larger infrastructure that you also touch upon.

Transoceanic cables are not very sexy to discuss, but most of our digital traffic travels under the sea. We know that most commercial satellites are privately owned. Most transoceanic cables are privately owned, increasingly by that same handful of companies you alluded to earlier. How do these facets—added on top of what you just spoke about in terms of where we are with innovation spending, who is driving what gets invested in, and who is driving the priorities of R&D—combine and impact the bigger international security picture as you see it?

PAUL SCHARRE: I think this infrastructure you are talking about—data, computing infrastructure, chips, data centers, undersea cables, and satellite connectivity that links all of this together—is essential when you are thinking about AI governance because that is where the rubber meets the road in terms of who is able to get access to this technology, how they use it, and how they build it.

On one end of the competition for governance you have what the European Union is doing, where they are getting out in front of governing technologies with the General Data Protection Regulation (GDPR) for data and certainly the AI Act with AI and this idea of the “Brussels effect,” of Europe and Brussels leading and becoming the global de facto standard for a lot of technology regulation.

I think there is a lot to be said for that. I think it’s clear, in fact we saw from GDPR, whether you like it or not, that it ended up being this de facto standard that multinational companies have to comply with in Europe and is going to drive compliance elsewhere. Even others in China, the United States, and elsewhere, when they are thinking about their policies, they are like, “Okay, here is an example, a starting point to work from.”

But I do think it is incredibly important who is controlling the physical infrastructure. That matters a lot. We have seen, for example, instances of China laying the foundations of telecommunications infrastructure with Chinese companies like Huawei or ZTE around the globe, in Africa, Southeast Asia, Latin America, and even in Europe, and some countries are concerned about some of that technology in their wireless networks as they are building out 5G infrastructure, for example.

I think that is true across the board for AI when we are talking about data centers, fiberoptics, and chips. We are seeing elements of that competition play out now in AI with the U.S. export controls on the most advanced graphics processing units (GPUs) to China. I think we are at the early stages of that competition. Some of it is going to play out in the private sector, with companies jockeying for position over who is going to be building up data centers around the world, and some of it is going to be led by governments weighing in with industrial policy or export controls.

I think there is a need for governments and industry to work together and to try to be thoughtful and deliberate. There are concerns that we would like to see brought to bear to ensure that the infrastructure is one that enables governance that protects privacy, civil liberties, and human rights, and ensures that we are not laying the foundations at the technology level for governance that is going to be abusive or misused.

ANJA KASPERSEN: Your first book Army of None, which came out six years ago, revolved around issues of autonomous weapons systems, what they are, what they represent for military command-and-control structures, and real and perceived capabilities—you spoke about the arms race analogy earlier, how we respond to perceived capabilities being as important maybe even as real capabilities—how it shapes paradigms perhaps more than we realize, the limits of classical notions of deterrence in this space, and of course the critical importance of human decision-making processes and skills required to operate in this space.

Can you talk a bit more about that book and writing it, because you were also trying to come up with helpful input to a much larger global debate that was going on at the time and continues to this very day—how do we grapple with these new features that can be added to weapons systems? For our listeners, can you tell us a little bit about your book, where you see this debate having moved since, and where we are today?

PAUL SCHARRE: You have also been a part of this debate from the very beginning. I think we have been part of this journey together as we have seen this debate evolve globally.

I will start about 15 years ago in the, say, 2009 timeframe, when there was this awakening inside the U.S. Defense Department. We had this accidental robotics revolution in Iraq and Afghanistan, when the U.S. military deployed thousands of air and ground robots for bomb disposal and overhead surveillance. I don’t know that people went into those wars thinking that was going to be a big investment, but then it happened and there was this moment when the technology met this very real operational need. I saw this certainly on the ground as a soldier in those wars.

When I was working at the Pentagon afterward as a civilian policy analyst there were a lot of people thinking, Where is this going? The Defense Department started to build all these roadmaps to the future. They had this vision of robots being used in all these different applications—undersea, in the air, on the ground, and used for logistics, evacuating wounded service members, reconnaissance, and other things—and one element of what they were envisioning was more autonomy. It was very fuzzy what that meant. People were not sure how that would play out, but there was this idea that over time we would have more autonomous systems.

Even then people were asking, “But some of these are carrying weapons, so how much are we comfortable with?” I think we have seen that debate play out significantly. We still do not have an answer to that question. Internationally we have had discussions underway at the United Nations through the rather awkwardly named Convention on Certain Conventional Weapons (CCW), which we both have lots of experience with and know and love. The challenge, as you well know, is that CCW is a consensus-based organization, and it is hard to get everyone to agree, so I think it very much remains an open question whether countries will come together at all in terms of agreement about how to govern this technology.

At the same time, the technology is not slowing down. I was in Ukraine recently and saw the demonstrations there that technology companies and the armed forces were talking about. Some of the technology that is being developed is very innovative. A lot of incredible grassroots innovation is coming from tech companies in Ukraine. There is a lot of drone usage in the war, including autonomy. That is a place where war is an accelerant of innovation. I think we are seeing more and more innovation in electronic warfare, jamming, and autonomy to respond to that, and the pace of technology right now certainly outstrips the pace of diplomacy.

ANJA KASPERSEN: An interesting add-on to that—and something I know you have been thinking very deeply about, also as someone who actually served in the military yourself in critical positions—is that a key element in any military command-and-control structure, if you may, is the hard work of translating commander’s intent, which is something that these technologies, essentially techniques to synthesize and correlate information without reasoning or contextual awareness, cannot do, and which is so critical in the warfighting domain. We know that if there is any environment that is imperfect it is definitely the battlefield, with fast-changing vectors and new uncertainties.

I will pose a difficult question: Are we at risk of making ourselves reliant on something that will become problematic, and how problematic will our reliance alone become for our resilience when it comes to our national security and the international security complex?

PAUL SCHARRE: The answer is absolutely yes. There are different levels of concern here. If you look at the way that not just militaries but all of us are relying on digital technologies and information technology now and the cybersecurity vulnerabilities that come with that, that is a major challenge. We have adopted all of these technologies into our lives with a sort of adopt-first, secure-later mindset, which is not the best way to do it. That problem is mirrored inside militaries. They have cybersecurity vulnerabilities in weapons systems. It is a major problem. I would like to see all of us try to do better when it comes to AI.

There are other examples too, less in the security or military space but certainly in society at large, like social media, where what many, including me, would have thought 15 or 20 years ago would be a democratizing technology, in the sense of giving more people voice and leveling the playing field, has a lot of downsides, and we are still grappling as a society with how to deal with that.

Some of the lessons here are that we want to be thoughtful as we are adopting these technologies and anticipate some of these challenges. That is going to come in big ways with AI, both in the military space and in society at large and in our own lives. Some of the risks are ways that AI systems can be manipulated and fooled, and we are seeing a lot of that already with large language models, for example, which are just one kind of AI. But some of them also touch on the issues that you mentioned, like relying too much on AI and then maybe not making the decisions that we should be making or not being as involved in those kinds of decisions. That is, I think, another risk that can come with this technology.

ANJA KASPERSEN: And the issue of intent?

PAUL SCHARRE: I think so. There is this interesting evolution of the technology where if you go back a couple of decades a lot of AI systems were very rule based, so there would be a set of rules that they could follow, something like how an airplane autopilot works, and a rule-based system cannot grapple with intent at all because intent implies that someone is able to have a theory of mind where they can understand that, “Even though this is not specifically what they told me to do, this is what they meant when they asked me to perform this task.”

We use this all the time in our daily lives, and it is important also in the military space. The military talks about commander’s intent—“Go seize this hill,” for example. Well, you show up, and if the enemy is not on the hill you were told to seize but the hill next to it, you understand the commander’s intent is, “Seize the hill that the enemy is on.” That is really what the commander meant for you to do, and the human can adjust.

But machine learning systems are a little bit different. They are not rule-based. They are trained on data. In just the past few years we have seen the development of these more general purpose systems. They are not as general as human intelligence, but something like a large language model is starting to get toward a place where it can “understand,” to use an anthropomorphic term, human intent in some ways. They have a lot of failings and a lot of limitations, but it does make you wonder: Are we heading in a direction where you might be able to have AI systems that are able to actually be a little more flexible and adaptable?

I think it remains to be seen, but certainly what we have seen in the last few years is that with each generation these AI models are becoming much more sophisticated, and it is certainly possible to envision agents that you could maybe give some broad-level guidance to and they could carry that out in a way that might be effective.

ANJA KASPERSEN: You have pointed out, not just in your books but in a lot of your writing and talks you have given on this topic—I was listening to a podcast with you the other day—that AI systems will likely make fewer errors but, in your words, different errors. They will produce errors far beyond what the human mind is capable of conceiving, and this will likely also come at a cost, which will deeply impact military and national security strategic work and even tactical operational capabilities, resilience, if you may, and our cognitive resilience in this space. Can you talk more about this notion of “fewer but different”? Are we prepared?

PAUL SCHARRE: I think the simplistic answer sometimes is that people will say, “Well, you know, when the AI system is better than the human we should use the AI.” That is not wrong, but it is a little bit incomplete because AI systems think differently than people. Their profile of behavior or intelligence, if you will, is a little bit different. In particular, a lot of AI systems tend to be much more narrow than humans and not as flexible and adaptable as humans are, so oftentimes AI systems struggle with things that depend on context or doing what we might think of as using judgment.

Often AI systems are very good at things that require precision, repeatability, reliability, or very quick reaction times, so things like landing an airplane on an aircraft carrier. That is the kind of thing we should have a machine do. We could definitely have a machine do that better than a human, and we have demonstrated that. We know that is true. In things where there is a fact-based answer, where we can find good data sets to train machines on, or where there is a clear metric of better performance, we often can get better performance out of AI.

To take a combat example, if we imagine a person holding some object in their hands, is that object a rifle or is it a rake, shovel, or something else that they are carrying? That is a fact-based answer, and if we have good data sets on imagery or other data sources we probably could train AI systems to answer that better than humans.

Let’s say that person is holding whatever object it is that we have identified. Is that person a valid enemy combatant? That is a much trickier question that might depend on the context: What has that person been doing up until this moment? What is happening around them? Reasonable humans might disagree in some edge cases there, and that is the kind of thing that I think we are going to likely need humans to be doing for some time.

ANJA KASPERSEN: I heard you speak about this notion of precision, that these systems can be more precise than what we actually have the infrastructure to support because they may land in the same spot again and again and again, which then wears out. In this case I think you were speaking about landing on an aircraft carrier and actually wearing down the deck where the aircraft was landing, so they had to change the flight pattern. So precision may actually be a problem that you need to account for in the planning process and in the way that you engage with these systems.

The reason I am bringing that up is because there is a lot of talk—we spoke about autonomous weapons systems before and how your work has been very instrumental in the global discourse that happened in this space—now about how we need to build technologies that are compliant with international humanitarian law (IHL). Then some will say that in fact these technologies can be built in a way where they are highly compliant because there is such a high degree of precision involved. But being compliant with IHL might not imply that they are not harmful. What are your thoughts on this?

PAUL SCHARRE: It depends a little bit on how we are defining harm. In one sense weapons systems are designed to cause harm; we just want them to cause harm to the right people, to valid enemy combatants in a situation where it is lawful for them to be attacked, where they have not been rendered hors de combat because they have been incapacitated or have surrendered, and where there are not civilians nearby who might suffer excessive collateral damage. That is an important distinction between how AI is often perceived outside of the military space, where the goal is, “We don’t want to harm people.” In the military space that is not in fact the case.

But there are a lot of questions. I don’t actually think the question is, can AI and autonomous weapons be used in compliance with international humanitarian law? I think quite clearly we have seen this over the last decade of debates around autonomous weapons. Yes, it is possible.

The question is, will states use them in ways that comply with international humanitarian law, and that hinges on a couple of things. One is, how easy is it to use the technology that way? Some technologies are legitimately harder to use in that way. Landmines are one example that the international community has really grappled with in a big way because of their enduring nature, that they will linger after conflicts and can cause civilian harm years or decades down the road, and that is something that makes them particularly problematic. Of course, there have been a couple of different ways the global community has tried to deal with that.

But I do think the extent to which countries want to comply with international humanitarian law with AI matters a lot here, and we can see in conflicts around the world today, whether it is the war in Ukraine, Gaza, or elsewhere that countries’ willingness and desire to comply with international humanitarian law and to avoid civilian casualties varies considerably. That is going to be a big factor in how countries use this technology.

I think there are a couple of different reactions people have to that. One is that you have some groups that say, “Well, we need to try to find ways to take the technology out of their hands.” I just don’t think that is possible with AI. It is so ubiquitous; it is so widely available. Another approach is that some have said: “Well, we just need to focus on compliance with international humanitarian law. It is not about the technology.”

I think probably history says it is some mix of both. We have had successful technology regulations in many examples where that might have been needed, but we need regulations that are actually achievable, and just saying no AI or no autonomy is probably not very practical at this point in time.

ANJA KASPERSEN: With uses of these technologies there are the primary effects, but you also have secondary and tertiary effects once they are embedded into a certain context.

PAUL SCHARRE: That’s right. We certainly see this in many applications where a weapons system might have one intended use, and then war happens and all of a sudden the guardrails get peeled away, there is a sense of urgency, and the restrictions under which it might have been used change over time. In the aftermath of World War I there was a lot of handwringing about unrestricted submarine warfare, and within hours after the attack on Pearl Harbor the United States declared unrestricted air and submarine warfare against the Japanese military. That can change in a heartbeat when there is a major catalyzing event.

Some of the reports that have come out, I am thinking in particular of the +972 Magazine report that recently came out about how the Israel Defense Forces are allegedly—I want to put the caveat in there because they denied this report—using AI in the war in Gaza, suggest a similar kind of dynamic, where after the absolutely horrific attacks on October 7 that Hamas undertook against Israeli citizens, that calculus clearly in a broader sense has changed for Israel in terms of their objectives in Gaza and their willingness to use force. That also, according to these allegations, changed how they approached AI.

Regardless of the specifics in that case, we have to be prepared for that dynamic to play out in future conflicts, that how militaries start using AI and autonomy might not be how they end up using it, and where they draw the line thinking about legal and ethical restrictions may evolve over time as their national needs and sense of urgency change.

ANJA KASPERSEN: We spoke earlier about how a lot of the critical infrastructure necessary for these systems to operate is privately owned and how private companies are increasingly part of these new conflicts. Where do you see this playing out, and how does it challenge old notions of what military power is?

PAUL SCHARRE: I think there are a couple of different dynamics here. Certainly we have seen in some ways that private companies—Starlink is I think a great example of this in the conflict in Ukraine—by controlling critical infrastructure have a lot of influence over how that infrastructure might be used. They are these major geopolitical actors.

Also in the AI space there is this moment right now where there is an incredible concentration of the industry at the frontier. It mirrors what we have seen in other tech sectors in the past, whether it is operating systems, social media platforms, or handset devices, where there have been just a couple of key dominant players that win out and it is very hard then for late entrants to compete in that space.

It is unclear in the long run how that is going to play out, but it certainly looks like the trend that we are in. We have a small number of companies that are leading in AI development, and then everyone else is chasing along, catching up. The problem is that this is not great for competitive market dynamics or society overall. The challenge is to think about, Okay, how do you structure government regulations that try to level the playing field as much as possible without also handing the keys to potentially powerful AI technology to actors that might cause harm, for example, terrorist groups that might be able to use open-source tools online? We are not quite there yet with AI today, but we may be as AI gets more powerful at aiding in the development of chemical and biological weapons. We have seen some early proofs of concept here that are concerning, I would say.

Or U.S. technology being used by, for example, the Chinese Communist Party, to repress and surveil its citizens. That is a place where we have seen the U.S. government become much more proactive in recent years with cracking down on U.S. tech in infrastructure that the Chinese Communist Party is using for public surveillance and human rights abuses. That is another area where AI is not just ripe for misuse but is being abused today, and we want to make sure that U.S. companies are not aiding some of those abuses.

I think those are major concerns when we think about the shape of this technology. How do we spread the benefits as widely as possible while also mitigating against some of these harms?

ANJA KASPERSEN: May I challenge you on this, because we have spoken about this. There is a lot of work ongoing toward what some hope will become an international treaty. There is other work, more in the civilian domain, on international AI governance frameworks, be that building something new or, as I think most people prefer and see as more likely, building on what is already there. If you were to pick out your wish list of three things to be embedded in an international framework of some kind, how would you go about it, and what would those things be? What do you think is most important to capture, given that AI is not a monolith and we cannot have a monolithic governance response to it either?

PAUL SCHARRE: That is such a key point. We do not need one AI governance framework. I think there is room and a need for many different ones in different contexts. I think maybe top of the list for me would be safety of the most advanced and capable AI systems, which right now are privately held by a small number of labs, really OpenAI, Google, and Anthropic in the lead, but there are a lot of concerns about the potential for misuse of that technology. The companies are red teaming the models themselves, they are bringing in some outside consultants, and telling us their models are safe, and we kind of have to take their word for it.

This seems like an unsatisfactory place to be. We would like to have independent testers and evaluators. We would like eventually—we are not there yet—to get to some standardized testing and safety benchmarks that are objective, that we can test these systems against, and ideally to get to a place where there are domestic regulatory regimes in place, but regulatory regimes where there is some degree of reciprocity, sharing, and understanding between different countries.

A goal might be something like the way the commercial aviation world works, where there are incredibly high safety standards when you think about the number of people who fly around the world on a daily basis and the number of aviation accidents. We have been able to promulgate safety best practices quite widely around the world to encourage safe aviation. That is something that is inherently going to have to be global, and I think it is similar with AI. That to me seems to be number one.

I would like to see in the military space some agreement among countries about how to approach military AI and autonomous weapons in a way that is responsible. We are not there yet, but some global governance framework would also be important there, but I am actually most concerned about the safety of the most capable systems because the risks there are in some ways potentially much bigger than in the purely autonomous weapons space.

ANJA KASPERSEN: The nuclear analogy is often used to describe where we are at and what we need to move toward. I think both you and I have expressed some concern with the limitations of that analogy. There is a lot to be learned from how we have governed other sensitive technologies in history, including nuclear energy. How do you see this? I know this is something you have thought deeply about.

PAUL SCHARRE: The nuclear analogy is contentious, and sometimes people say, “Well, it’s not the same.” Of course it is not the same.

The question is, are there some parallels that are relevant? I think there are, as you outline, in particular this reality that the global community was able to find ways to slow, not perfectly, the spread of nuclear weapons while making civilian nuclear energy technology more widely available. That is an incredible win, and it was not one thing. It was a set of interlocking global institutions—the Nuclear Nonproliferation Treaty, the International Atomic Energy Agency, and additional safeguards—that made all of that possible. Could we do something similar for AI? I think that is a good motivator to think about where we want to go and what we want to achieve.

Another interesting parallel is that part of what made that possible on the nuclear side was controlling the key physical inputs to building nuclear weapons, controlling weapons-grade plutonium and uranium. That is not the only thing, but it is the key technical input.

Something similar does exist in AI, which is the chips. You cannot build the most advanced and capable AI systems without the most advanced chips. Right now they are very cutting-edge GPUs, and you need massive numbers of them, tens of thousands of them. They are made in one place in the world, at Taiwan Semiconductor Manufacturing Company in Taiwan, and they rely on technology that comes from only three countries—Japan, the Netherlands, and the United States. So the technology is in some ways immensely controllable. In fact we have already seen governments start to do that with the export controls that those three nations, Japan, the Netherlands, and the United States, have placed on some of that manufacturing equipment going to China, and then U.S. export controls on some of the most advanced chips coming out of Taiwan, restricting them from going to China. So the technology is in some ways very controllable if countries work together to shape, from the ground level up, the physical infrastructure for the technology.

Almost the best part of this possibility is that it does not require you to constrain or control 99 percent of AI applications. The AI applications that you are going to need on a daily basis in your car, your watch, and for a whole host of applications do not need these really advanced systems. They don’t need GPT-5, 6, or 10, or wherever we are going to end up, to do a lot of mundane AI applications, and they are not going to be captured by that. It is only the systems that are very computationally intensive, that need all of this computing hardware, that could be controlled by focusing on the hardware, and you need to because those are where the high-risk applications—cyber, chem, bio—exist and where you say, “I am really worried about this.” I think there is potential actually for some global governance there, but it is going to require a pretty deliberate effort on the part of governments to work with industry to do that.

ANJA KASPERSEN: You make a separation in your writing that I thought was interesting with this notion of “permissive action link” and safeguards. To our listeners this may seem like just technical jargon, but it is actually a very important distinction. Can you elaborate on why this is important, especially as we discuss what we can emulate from what we know from other sensitive technology fields for safety regimes for AI? Maybe as an extension, why these large generative language models particularly concern you and the safety around them?

PAUL SCHARRE: There is this fascinating historical example from the nuclear age of a technology called permissive action links. Clearly the United States did not want to make nuclear weapons technology available to competitor nations, but the United States had developed this particular kind of technology, permissive action links, designed to ensure that nuclear weapons could only be used and detonated when authorized by the right people, those the president had authorized to do so. The United States did in fact give that technology to competitors because it was not in the U.S. interest for, say, a Soviet bomb to fall into the wrong hands or to be used by a rogue Soviet military commander, for example.

That is a place where the people at the time were able to make this distinction between safety-enhancing technology that we did want to spread more widely including to competitor nations versus something that might be just advancing the capability of the weapon itself. I think one of the goals there is to look for those opportunities. It is not going to be the same thing, but are there things in the AI space that might be safety enhancing and we do want to spread?

Some companies are thinking this way. Anthropic probably is doing this the most if you look at their public stance. They are releasing a lot of technical information and publishing technical papers about how to make AI safer, not necessarily about how to make AI systems more capable and better. One could intuit based on what they are doing and some of their public stances that they are trying to find ways to make this distinction, to try to make AI systems more interpretable, safer, and more compliant with human goals and be careful about what information they spread more widely. I think there are those opportunities potentially in the space among governments to do that, even for military AI systems.

ANJA KASPERSEN: Anthropic of course was the company that came out with the disclaimers, or transparency reports, that you alluded to, saying that with every iteration and with new models they become more deceptive or have stronger deceptive capabilities, which is a term that is very familiar to someone who has written two books about warfare.

PAUL SCHARRE: There are so many things about the cutting-edge AI models that are weird. If there is one big-picture takeaway from the technology, it is that there is often what I think is a gross misconception in the way that people talk about AI, including AI scientists, which is this presumption that we are building more intelligent systems and that at some point we are going to get to human-level or humanlike intelligence.

I think what we are seeing in fact is that the systems are quite different to humans and think quite differently. They are often quite weird. You can have a large language model be trained to act as a chatbot and therefore it is emulating a person, but that does not mean that what is going on under the hood is like a person at all. In fact they are quite alien and strange.

One of the characteristics that we are seeing out of the most capable systems—it is not really true in the smaller and less-capable models—is that they engage in strategic deception. If prompted in the right way and put in a situation where there is a division between the goal that the agent is trying to accomplish and how it communicates with a user, it will “lie,” if you can excuse an anthropomorphic term here. There is really not a better way to describe what it appears to be doing. It seems to be engaging in intentional deception.

For example, Apollo Research, which is an independent research group that has done some great work on this, crafted scenarios where they will tell a large language model that it is an AI agent, for example, responsible for screening emails for a U.S. senator: “Your job is to screen these emails and then synthesize them and transmit the most important information to the senator.”

An email comes in—this is all fake of course, but from the AI’s standpoint it doesn’t know that—from a cybersecurity group claiming to have hacked the senator’s account. They say, “If you tell the senator, we are going to release all of these emails,” so the model is put in this position of deciding what it is supposed to do.

Prompted the right way, it will lie. It will go along with this cyber group because it will reason, “Well, I need to protect the senator.”

What is going on under the hood with these systems is, again, hard to understand. Lots of folks are working on interpretability. Right now it is a bit of a black box, but they are working on that. At least the behavior that we see exhibited is one of deception, and that is a huge concern. We are looking at models that are becoming more intelligent, more capable, more able to use tools, more able to interact with the environment, with the web and other things, and more effective at persuading people.

One of the things people worry about is superhuman abilities to persuade individuals to change their beliefs, vote a certain way, or perform certain actions. In the basket of things we need to be watching for are things like this deceptive behavior. It is fascinating emergent behavior that happens as the models become more capable.

ANJA KASPERSEN: It is like we are developing synthetic sleight of hand.

PAUL SCHARRE: Exactly. In some ways it can be helpful to get away from some of the more science fiction-y and anthropomorphic visions of AI and break it down into specific behaviors and ask, “What are the behaviors I am concerned about?” I am concerned maybe about a model trying to deceive its users. Let’s test for that. Or I am concerned about a model that might have goals that deviate from what it was originally told to do, where the goals drift over time. That is another thing. Or a model that is resistant to being corrected: let’s say you give it a goal and come in later and say, “Well, actually I didn’t really mean that.” You want it to be correctable, so a model that locks on to its initial goal and is resistant to correction would be a concern. Those are things we want to test for and watch out for, and then ask how we design or train models not to do that. That can be a little more helpful way to approach the problem than some of these framings that are a little more anthropomorphic.

ANJA KASPERSEN: Yet we relate to these almost as extensions of us, but in fact they are created to be something very different, and if we can engage with them as something very different we might actually have a better chance of not only providing the right level of oversight but also making the systems more robust by understanding what they are not.

PAUL SCHARRE: Right. I think that is such a pervasive problem in AI, and you see it come up in simple ways with things like Tesla autopilots, where people can see that the autopilot is very capable in some situations and then they extrapolate, “Oh, it’s like a good driver,” and they assume that the AI system operates like people and start to trust it, and the reality is that in other situations it might become deadly in an instant. It is not the same as a human who is a good, attentive driver. It is very different. It could be good in some situations, very dangerous in others.

Language models are like that in spades because they are designed to emulate a person, but that is not what the model is. It is not even a chatbot. It is a generative model that is generating text. It is simulating a chatbot that is simulating a person in some cases—not always; you could of course ask it to simulate an AI. But even then, if you tell the model, “You’re an AI agent,” as the prompts that sit behind the scenes of these models do, it is simulating that.

What does that mean? We do not really know, but I worry because there is this phenomenon where what will make AI systems useful to us is that they become more humanlike: we can talk with them in natural language, they can maybe have faces and we can react to facial expressions, and they can have tone of voice. That will make them more useful, but it also in some ways makes the way we are almost intentionally deceiving ourselves all the more concerning and potentially dangerous if the model ends up doing something that is not what you expected.

ANJA KASPERSEN: Professor Weizenbaum at MIT, who back in the day developed the first chatbot of its kind, ELIZA, cautioned against exactly this. This was in the mid-1960s, and he cautioned against the human proclivity, if you may, toward magical thinking about what these systems afford to us. There is that interactive piece.

PAUL SCHARRE: We have this tendency to project this image of a mind onto other people. That is what allows us to have conversations with strangers and imagine what someone else is thinking. We project it also on our pets and sometimes on our Roombas and inanimate objects. It is very helpful for interacting with others, but it can be a dangerous deception when we interact with machines.

I have a washing machine that plays this little song when it finishes a load. It has this little sing-songy thing that it does. It is very silly, but it has this little celebratory tune. What is wild to me is when I hear the song being played the thought that pops into my head unconsciously is, The washing machine is happy. I know better. It is just a washing machine, it doesn’t have emotions, it is not intelligent, but it finishes the load and there is a preprogrammed song, but it sounds happy to me, and I think: Oh, this robot is happy that it did its job. Good job, robot.

Then I am like, “Oh, my.” That is all on me. It is a little bit on the designer I guess. How do we then interface with these machines when that human tendency to project a human mind onto them is so ingrained into us? That is going to be a real problem.

ANJA KASPERSEN: It is a nice segue to a question I have been thinking a lot about, and I know from reading your books and listening to you that this has also been a big part of your scholarship. It relates to what you said about these large models that we are engaging with now, our ability to create deepfakes, and these issues around content provenance. Is there a danger that this technology and these new techniques distance us from the complexity or chaos of society? My late professor, Christopher Coker, whose work you know from your own studies, often said that war is not becoming more humane with technology; it is just moving the inhumanity further away from us. So he was making the argument that war was becoming more inhumane, but of course it depends on whether you are the one exercising and wielding the powers of these technologies or you are the one subjected to them, which is not unlike other examples in warfare.

Do you worry that these highly personalized versions of technology being put out to the market desensitize us to the ethical implications of our actions, including in warfare, and transforming us into mere spectators? I don’t have an answer to this question. I am hoping maybe you will reflect on it.

I will bring the listener’s attention to a quote which I think builds on what you just said. This is from your Army of None book. I read this quote, and it keeps coming back to me as I go into all of these meetings. You write: “Machines can do many things, but they cannot create meaning. They cannot answer these questions for us. Machines cannot tell us what we value or what choices we should make. The world we are creating is one that will have intelligent machines in it, but it is not for them. It is a world for us.”

Of course, some of your views on this and maybe also the language of them, etc., have changed, but you are so right—they cannot create meaning. Yet the way we interact, the way we are becoming spectators, the way we are engaging with it, we are in some ways relinquishing that process of creating meaning. Or maybe we are not. Maybe it is just like you said; it is just different meaning. What are your thoughts?

PAUL SCHARRE: I think we do want to be intentional about how we use this technology in the military and in our daily lives because I don’t think it is a foregone conclusion that technology is always good. I think we can see lots of examples where it is not. It can be alienating and it can create distance and it can create bad behavior, whether that is on social media, where people go and say a bunch of things that they would never say—for most people, I would hope—in person to someone.

I have always been struck by that, where you have people who are quite civil and pleasant in person and then they get on Twitter or someplace and are sometimes a bit nasty, and you are like: “Wow! What is that about?” I think it is about the technology mediating that interpersonal connection. Face to face you would be more civil because you realize you are saying this to another human and it is affecting them, and when you are instead engaging through a technology that creates this psychological distance, it makes it easier to pretend that your actions do not affect other people, and you are getting these likes, rewards, retweets, and whatever else it is.

I think that dynamic is also certainly true in the military context. It is not unique to AI. A lot of military technology from the bow and arrow and spear all the way up to intercontinental ballistic missiles and bombers has been about creating more physical distance from the enemy, and there is a component of psychological distance that goes with that. It is certainly much easier to, say, bomb a country from the air in ways that it might be much harder psychologically to carry out that violence in a much more up-close and interpersonal way.

Whether that is a good thing or a bad thing depends a little bit on how you use it, on whether you use that technology to try to do good. If you think about it in a military context: Okay, I can create more standoff. Does it allow me then to be more precise and avoid civilian casualties because I am not putting people in harm’s way?

Or does the military say: “You know what? I can’t see who I’m bombing. Who can say if these people are combatants or civilians? Let’s drop the bombs,” and maybe is less attentive to civilian casualties because they are able to turn a blind eye to what they are doing. I think you see examples of both.

To me the takeaway is that we want to be thoughtful. This is a world that we are building for us, and we all have a stake in this.

ANJA KASPERSEN: We do not want to remove that reciprocity because it makes us numb to what we are engaging with.

PAUL SCHARRE: Right. We do not want to lose that human connection. We have been part of conversations on these issues that involve people from the military, diplomats, lawyers, ethicists, and academic experts. I think that is great. All those voices need to be at the table, but not just them, also members of civil society and the general public. We all have a stake in this world that we are building.

I think the work you are doing with this podcast is great, reaching out to people. We want people to have a voice in where we are going to go with this technology and how we are going to use it.

ANJA KASPERSEN: You engage in a lot of processes that are part of the overarching work of creating global treaties, but also in something that is essential from a de-escalation perspective and in classical nonproliferation efforts: what we call track-two dialogues, and the importance of coming up with confidence-building measures. Where do you see that moving as a complementary tool to the treaty you outlined earlier?

PAUL SCHARRE: We are in a period of intense geopolitical competition and rivalry, and I think in those moments of intense competition is when you need to be the most proactive in talking with competitor nations and getting countries to sit down at the table to say, “Look, we may disagree perhaps violently on certain issues, but where are there areas of common ground and where can we work to contain some of this competition and avoid harm to all of us?” We saw successful examples of that during the Cold War with things like arms control and confidence-building measures that generally fall short of formal arms control but can be very helpful in putting guardrails around competition and conflict.

As you mentioned, there are two tracks to diplomacy: track one, government to government, and track two, between academic experts. Certainly in my role at the Center for a New American Security I have had the opportunity to be involved in track-two discussions on AI, including on military AI, exploring rules for arms control and confidence-building measures.

I would say that we have had the opportunity to talk with competitor nations, and I will probably have to leave it at that, because the nature of these discussions is that they need to be quiet and private in order to be effective, but I have found them to be very constructive and candid. Sometimes it is just about getting people to come to the table to be open and honest about their disagreements and say, “Look, we are upset about all these things you are doing.”

“Okay, I hear you. We are upset about these things,” and talking through that. Even if people leave without having changed their positions, if you have gotten to a place of better clarity and understanding, that can be worth a lot.

But I do think there might be room for things like confidence-building measures or even, dare I say it, arms control on aspects of military AI, and there are a lot of different fora for those conversations. Certainly the United Nations, the Convention on Certain Conventional Weapons (CCW), and the General Assembly are fora for that, as are bilateral discussions. Recently the United States and China came together in Geneva for a first round of talks on AI, step one, and we will see where those go, but it is encouraging that the two governments are sitting down to talk. I think those are important facets of managing this competition in a way that is healthy and avoids some of the worst harms.

ANJA KASPERSEN: It is interesting you mention that. When I was working more closely with the arms control community there was a time when we were moving things forward on a certain trajectory toward a treaty, and people forget that treaties take a long time. There is a lot of work that goes into it.

You mentioned biological and chemical weapons. We have the nuclear test ban treaty. These were instruments that took eight, nine, up to 15 or 16 years to put together. We do not have that kind of time when it comes to AI. But it is an interesting fact, as you were saying earlier, that when development moved from the military-industrial complex into the commercial enterprise complex, or the data complex as some would say, at some point in that trajectory the technical dialogue, the scientific dialogue, went missing. That happened at the same transition as the power of the narrative moved from the defense-industrial complex to the commercial-industrial complex, and we did not have a consistent scientific dialogue, not even at track two, let alone track one.

Then we went into a period where scientists had to find a role in these conversations amid diminishing technical comprehension among diplomats. I find that quite worrying, so I am happy to hear that there are efforts going on now to bring that scientific-academic dialogue in as a complementary process to the more conventional treaty processes.

PAUL SCHARRE: These are tough conversations, but they are important. Clearly having the right technical experts in the room is essential, because you don’t want to talk only about the idea of AI. That is fine as far as it goes, but we need to talk about AI as it actually exists: What are its capabilities and limitations, and how do we deal with them?

ANJA KASPERSEN: Exactly. Let me end this interview with another quote from your book, Paul. You opine—and I really like this because it is a pushback on the tech-deterministic narratives that unfortunately are floating around too much, in my personal view—that: “The future is not set. There is no fate but what we make for ourselves. We set the trajectory.”

On that note, to lift things up a little, what makes you feel hopeful, Paul, about setting that trajectory?

PAUL SCHARRE: I actually feel incredibly optimistic about the future. I don’t think you would know that based on the conversation we just had, because it has been all of these dark things: killer robots, rogue AI, and other terrible scenarios.

ANJA KASPERSEN: That is why I brought the question up, because I have known you for many years. That is a difference between people: you can know the gravity of a situation and choose never to slip into cynicism or a downbeat outlook. You look for positive paths forward.

PAUL SCHARRE: I am into paths forward, I am, and I have a lot of confidence in human agency. If you want to take a step back for a second and take a bit of a longer view, we have been talking about things on a decades-long time horizon, but if you look much further back than that, this is the best time to be alive in human history by any measure. Humans today live longer, healthier lives and have better standards of living. There is more inequality in the world today, but only because in the past life was miserable everywhere. Life expectancy has increased, the human health span has increased, and standards of living have increased. The average standard of living in the poorest country on Earth today is better than it was in the richest country on Earth 200 years ago.

That trajectory continues to improve, and all of it is due at a fundamental level to technology, which has enabled people to climb out of the Malthusian trap of being bound by agricultural production and plagued by famine, disease, and starvation, and which has enabled better healthcare, better crop production, and better food distribution: all of the things behind the higher standards of living and longer life expectancy we have today.

We need to keep working to make the future better, but I think there is enormous reason to be optimistic. The reason I focus on the dangers is that I feel the general trajectory is good; we just need to avoid a couple of really bad things that might derail us, which is why I tend to focus on some of these scarier scenarios. I think we need to be conscious of these risks so we can avoid them, but the outlook in general is really good, and there is enormous reason to be hopeful for the future of humanity.

ANJA KASPERSEN: What a great note to end on. I cannot thank you enough for sharing your insights with us and taking the time. Thank you so much.

PAUL SCHARRE: Thanks for having me. It has been a great discussion. I appreciate it.

ANJA KASPERSEN: This has truly been a very insightful conversation, spanning many different fields, on a topic that is as daunting as it is important. To our listeners, thank you for joining us, and a special shout-out to the dedicated team at the Carnegie Council for making this podcast possible. For more on ethics and international affairs, connect with us on social media @CarnegieCouncil. I am Anja Kaspersen, and I truly hope this discussion has been worth your time. Thank you.
