The United States government is leading global efforts to build strong norms that will promote the responsible military use of artificial intelligence and autonomous systems. Last week, the State Department announced that 47 states have now endorsed the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” that the government first launched at The Hague on Feb. 16.
AI refers to the ability of machines to perform tasks that would otherwise require human intelligence, such as recognizing patterns, learning from experience, drawing conclusions, making predictions, or generating recommendations.
Military AI capabilities include not only weapons but also decision support systems that help defense leaders at all levels make better and more timely decisions, from the battlefield to the boardroom, as well as systems relating to everything from finance, payroll, and accounting, to the recruiting, retention, and promotion of personnel, to the collection and fusion of intelligence, surveillance, and reconnaissance data.
“The United States has been a global leader in responsible military use of AI and autonomy, with the Department of Defense championing ethical AI principles and policies on autonomy in weapon systems for over a decade. The political declaration builds on these efforts. It advances international norms on responsible military use of AI and autonomy, provides a basis for building common understanding, and creates a community for all states to exchange best practices,” said Sasha Baker, under secretary of defense for policy.
The Defense Department has led the world by publishing a series of policies on military AI and autonomy, most recently the Data, Analytics, and AI Adoption Strategy released on November 2.
The declaration consists of a series of non-legally binding guidelines describing best practices for responsible military use of AI. These guidelines include ensuring that military AI systems are auditable, have explicit and well-defined uses, are subject to rigorous testing and evaluation across their life cycles, and can detect and avoid unintended behaviors, and that high-consequence applications undergo senior-level review.
As the State Department’s press release on November 13 states: “This groundbreaking initiative contains 10 concrete measures to guide the responsible development and use of military applications of AI and autonomy. The declaration and the measures it outlines are an important step in building an international framework of responsibility to allow states to harness the benefits of AI while mitigating the risks. The U.S. is committed to working together with other endorsing states to build on this important development.”
The 10 measures are:
- States should ensure their military organizations adopt and implement these principles for the responsible development, deployment, and use of AI capabilities.
- States should take appropriate steps, such as legal reviews, to ensure that their military AI capabilities will be used consistent with their respective obligations under international law, in particular international humanitarian law. States should also consider how to use military AI capabilities to enhance their implementation of international humanitarian law and to improve the protection of civilians and civilian objects in armed conflict.
- States should ensure that senior officials effectively and appropriately oversee the development and deployment of military AI capabilities with high-consequence applications, including, but not limited to, such weapon systems.
- States should take proactive steps to minimize unintended bias in military AI capabilities.
- States should ensure that relevant personnel exercise appropriate care in the development, deployment, and use of military AI capabilities, including weapon systems incorporating such capabilities.
- States should ensure that military AI capabilities are developed with methodologies, data sources, design procedures, and documentation that are transparent to and auditable by their relevant defense personnel.
- States should ensure that personnel who use or approve the use of military AI capabilities are trained so they sufficiently understand the capabilities and limitations of those systems in order to make appropriate context-informed judgments on the use of those systems and to mitigate the risk of automation bias.
- States should ensure that military AI capabilities have explicit, well-defined uses and that they are designed and engineered to fulfill those intended functions.
- States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing and assurance within their well-defined uses and across their entire life-cycles. For self-learning or continuously updating military AI capabilities, States should ensure that critical safety features have not been degraded, through processes such as monitoring.
- States should implement appropriate safeguards to mitigate risks of failures in military AI capabilities, such as the ability to detect and avoid unintended consequences and the ability to respond, for example by disengaging or deactivating deployed systems, when such systems demonstrate unintended behavior.
Source: U.S. Department of Defense