State Department Issues Declaration on Military Use of AI and Autonomy

The X-62A Variable Stability In-Flight Simulator Test Aircraft, or VISTA, flies over Palmdale, California, Aug. 26, 2022. A joint Department of Defense team executed 12 artificial intelligence flight tests in which AI agents piloted the X-62A VISTA to perform advanced fighter maneuvers at Edwards Air Force Base, Dec. 1-16, 2022. Photo courtesy U.S. Air Force/Kyle Brasier.

The U.S. State Department issued a declaration on Feb. 16 staking out its positions on how artificial intelligence should be used by militaries around the world.

“Military use of AI can and should be ethical, responsible, and enhance international security,” the declaration says. “Use of AI in armed conflict must be in accord with applicable international humanitarian law, including its fundamental principles.”

The military use of AI should be accountable, the declaration says, including through use “within a responsible human chain of command and control.”

The framework was unveiled at the first Summit on Responsible AI in the Military Domain, held Feb. 15 and 16 in The Hague, Netherlands.

“States should take appropriate measures to ensure the responsible development, deployment, and use of their military AI capabilities, including those enabling autonomous systems,” the declaration says, before outlining a series of nonbinding guidelines:

• States should take effective steps, such as legal reviews, to ensure that their military AI capabilities will only be used consistent with their respective obligations under international law, in particular international humanitarian law.
• States should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.
• States should ensure that senior officials oversee the development and deployment of all military AI capabilities with high-consequence applications, including, but not limited to, weapon systems.
• States should adopt, publish, and implement principles for the responsible design, development, deployment, and use of AI capabilities by their military organizations.
• States should ensure that relevant personnel exercise appropriate care, including appropriate levels of human judgment, in the development, deployment, and use of military AI capabilities, including weapon systems incorporating such capabilities.
• States should ensure that deliberate steps are taken to minimize unintended bias in military AI capabilities.
• States should ensure that military AI capabilities are developed with auditable methodologies, data sources, design procedures, and documentation.
• States should ensure that personnel who use or approve the use of military AI capabilities are trained so they sufficiently understand the capabilities and limitations of those capabilities and can make context-informed judgments on their use.
• States should ensure that military AI capabilities have explicit, well-defined uses and that they are designed and engineered to fulfill those intended functions.
• States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing and assurance within their well-defined uses and across their entire life-cycles. Self-learning or continuously updating military AI capabilities should also be subject to a monitoring process to ensure that critical safety features have not been degraded.
• States should design and engineer military AI capabilities so that they possess the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior. States should also implement other appropriate safeguards to mitigate risks of serious failures. These safeguards may be drawn from those designed for all military systems as well as those for AI capabilities not intended for military use.
• States should pursue continued discussions on how military AI capabilities are developed, deployed, and used in a responsible manner, to promote the effective implementation of these practices, and the establishment of other practices which the endorsing States find appropriate. These discussions should include consideration of how to implement these practices in the context of their exports of military AI capabilities.

The State Department said states that endorse these principles will publicly commit to them, use them when developing or deploying AI capabilities, and promote these practices to the rest of the international community.

The department defines AI as the ability of machines to perform tasks that would otherwise require human intelligence, such as recognizing patterns, drawing conclusions, and making predictions, and defines autonomy as the ability of a system to operate without further human intervention after activation.