Moving from Principles to Practice
While the Blueprint may not be groundbreaking, there are signs we are moving in the right direction. There was an emerging consensus at REAIM that policy debates have for too long revolved around lethal autonomous weapons systems (LAWS) as opposed to broader AI applications, while narrowly focusing on tactical issues at the expense of strategic considerations. Moreover, participants agreed that these debates have been held at a level that is too general and abstract, thereby impeding effective governance.
But that is gradually changing. Several sessions at REAIM emphasized the need to move away from the LAWS debate, focus on strategic security, and embrace rigorous testing and analysis of AI use in concrete case studies. Throughout, there was broad agreement on the need to be more forensic about research questions and more precise in identifying knowledge gaps. As RAND Europe’s James Black put it, “We are still in the problem-finding, not problem-solving, phase.”
How can we move toward the problem-solving phase? That was the focus of the REAIM panel, “Responsible Military Use of AI: Bridging the Gap Between Principles and Practice,” co-hosted by Carnegie Council for Ethics in International Affairs and the Oxford Institute for Ethics, Law and Armed Conflict (ELAC). Drawing together leading ethicists, international lawyers, and military strategists, I asked panelists how decision-makers can more effectively translate principles for AI governance into concrete global action. The panel was the first in a planned series of workshops aimed at addressing this urgent question.
Professor Kenneth Payne of King’s College London captured the sentiment in the room: “Talk about norms can easily pick off the low-hanging fruit, like noting concern and the need for greater trust—but it’s hard to progress far beyond that.” In an era of increased geopolitical competition, AI poses significant risks at the strategic level, including by rendering conflict escalation dynamics more opaque and unpredictable. “The ‘grammar’ of war, to borrow from Clausewitz, is uncertain,” Payne said.
Amidst that uncertainty, being a first mover in shaping the rules of the road offers distinct strategic and normative advantages. Dr. Paul Lyons, senior director for defense at the Special Competitive Studies Project and a former U.S. Navy captain, stressed that “while governance is important, while responsible speed is important, let’s not forget that there is an innovation race, and the first mover will assign those values.” Responsible AI, according to Lyons, requires rapid experimentation, where operators test different governance models in a controlled environment.
As states race to develop and deploy new technologies, transparency will play a critical role in ensuring responsible AI use. “Implementing responsible AI principles can create efficiency, effectiveness, and legitimacy,” emphasized Dr. Tess Bridgeman, co-editor-in-chief at Just Security and former U.S. National Security Council deputy legal adviser. “Transparency is an enabler of trust in the AI systems that we’re producing and operating, and trust in turn can foster faster adoption of those capabilities.” Lyons agreed, observing that “trust is inherent in who we are, what we do, the visions we follow, and the values that serve us, as we prescribe how AI should be used.”
There are several steps states could take today to improve transparency around military AI. First, states could publish their national strategies, policy documents, frameworks, and legislation governing the development and use of AI in the military domain. Crucially, this should involve declassifying, with appropriate redactions, policies and procedures for the use of AI in national security contexts, including in decision-making pertaining to the use of force and in intelligence operations that support combat functions. Second, states could create compendiums of existing knowledge on military AI. Such compendiums might usefully include case studies on the responsible use of AI-enabled technologies in conventional and unconventional conflicts. Finally, states should develop collective interpretations of how international law applies in specific AI use cases, in ways that are accessible to public debate, while sharing best practices for legal reviews.
Alongside transparency, knowledge transfers and capacity-building will be essential for making military AI governance more robust. While policy debates have largely focused on regulation, lessons learned from drone warfare suggest that non-binding instruments—including policy guidance, rules of engagement, and codes of conduct—may be more effective in practice in ensuring that ethical guardrails are in place. In short, military AI governance may lie more at the level of policy than law, with training being a fundamental component of implementation.
Above all, effective AI governance requires political will. Professor Toni Erskine of the Australian National University stressed the need to develop “supplementary moral responsibilities of restraint,” that is, responsibilities to create the conditions within which we can discharge the duties of restraint already broadly endorsed in international relations through the UN Charter and international humanitarian law. “We face the dual task of establishing a consensus on new applications of existing hard-won international norms, and also generating agreement on new principles that would contribute to emerging norms.” Given the risks to humanity, Erskine concluded that a “coalition of the obligated,” rather than of the willing, is needed now. Delivering on this task will require strengthening transparency and confidence-building measures, drawing on lessons from the cyber and nuclear domains.