The Pentagon’s marching forward with AI weapons of war… responsibly.
This week, the Department of Defence released a lengthy 47-page document outlining the military’s plan to implement its responsible artificial intelligence principles, which basically seek to integrate AI into the military without turning the world into a Terminator-esque hellscape. Though the DoD first outlined its ethical AI goals in 2020, this week’s Responsible Pathway to AI Development and Acceleration document details systematic ways the department plans to realise those goals and elevate them beyond mere wishful thinking.
In the document, Deputy Secretary of Defence Kathleen Hicks defended the military’s pursuit of AI technology, claiming U.S. adversaries have increased their AI investment in ways that “threaten global security, peace, and stability.” The Pentagon wants to respond to that “threat” by ramping up investment at home.
“To maintain our military advantage in a digitally competitive world, the United States Department of Defence (DoD) must embrace AI technologies to keep pace with these evolving threats,” Hicks writes. “Harnessing new technology in lawful, ethical, responsible, and accountable ways is core to our ethos.”
The document provides a timeline of the Pentagon’s evolving attitudes toward ethical AI, noting the department has “matured its ethics framework to account for AI’s unique characteristics and the potential for unintended consequences.” Researchers outside of the military have shown AI systems can reinforce cultural biases and discriminate against people of colour.
In general, these new guidelines, which add measurable goals for each of the DoD’s six foundational responsible AI tenets, are intended to “earn the trust” of service members and the general public. The document defines this trust as the “desired end state” that will allow it to continue pushing forward with new AI technology.
“Trust in DoD AI will enable the Department to modernise its warfighting capability across a range of combat and non-combat applications, taking into account the needs of those internal and external to the DoD,” the document reads. “Without trust, warfighters and leaders will not employ AI effectively and the American people will not support the continued use and adoption of such technology.” The DoD said it wants to apply that emphasis on trust to its interactions with other nations as well, and says it wants to set “new international norms for AI usage.”
On that front, a recent Morning Consult poll of U.S. adults shows a mixed bag of opinions on the military’s AI standing. Around a quarter (26%) of adults said they thought the U.S. military was more advanced than China on AI, while a slightly larger share (29%) thought the U.S. was less advanced.
Gizmodo spoke with Hicks about the Pentagon’s philosophy toward AI integration during a March DoD trip to California. There, she stressed the importance of maintaining a “human in the loop” approach, in which a human operator retains the final say over when an AI system carries out its objective. Hicks added that any military application of AI would have to align with U.S. values, a phrase that is tacitly reassuring but ultimately ambiguous.
“It is imperative that we establish a trusted ecosystem that not only enhances our military capabilities but also builds confidence with end-users, warfighters, the American public, and international partners,” Hicks said in a statement to Gizmodo Thursday. “The Pathway affirms the Department’s commitment to acting as a responsible AI-enabled organisation.”
The DoD’s AI pathway report comes amid apparent tension within the Pentagon’s inner ranks. Nicolas Chaillan, the Pentagon’s first chief software officer, resigned in dramatic fashion after three years in October, in part over what he viewed as the U.S. failure to keep up with China. In an interview with the Financial Times following his resignation, Chaillan said there was a “good reason to be angry” with the Pentagon’s tech capabilities.
“We have no competing fighting chance against China in 15 to 20 years,” Chaillan said. “Right now, it’s already a done deal; it is already over in my opinion.”
The Pentagon has since replaced Chaillan with former Lyft machine learning head Craig Martell, who joined the department as its Chief Digital and Artificial Intelligence Officer. Martell will reportedly play a key role in the Pentagon’s AI strategy moving forward.
Human Rights Groups Uneasy Over Military AI
Though it’s increasingly seen as an inevitability, military use of AI remains hotly debated. While activist groups like Human Rights Watch and Amnesty International have called for broad bans on AI-enabled autonomous weaponry, the military establishment and former tech industry heavyweights have regularly relied on the “what about China” argument to push for deeper integration between commercial AI firms and the military.
“Allowing machines to make life-or-death decisions is an assault on human dignity, and will likely result in devastating violations of the laws of war and human rights,” Amnesty International’s Senior Advisor on Military, Security and Policing said in a statement. “It will also intensify the digital dehumanisation of society, reducing people to data points to be processed. We need a robust, legally binding international treaty to stop the proliferation of killer robots.”
One of the loudest voices calling for deeper military AI use is former Google CEO Eric Schmidt, who in 2019 was tasked by then-President Trump to co-head the National Security Commission on AI, an organisation whose goal is to produce lengthy reports for the President and Congress detailing methods and strategies for advancing AI in national defence. In his report, Schmidt spoke critically of what he sees as ethical red tape surrounding military AI use and expressed concern that “authoritarian states” like China “will not be constrained by the same rigorous testing and ethical code that guide the U.S. military.”
So far, the U.S. hasn’t shown any interest in slowing down its AI expansion. During a U.N. meeting in Geneva late last year, the U.S. joined the likes of Russia, China, and India as one of just a handful of countries that oppose legally binding instruments to limit autonomous weapons development.
“The lack of a substantive outcome at the UN review conference is a wholly inadequate response to the concerns raised by killer robots,” Human Rights Watch Arms Director Steve Goose said in a statement following the Geneva meeting. “The failure of the current diplomatic talks to recommend a path forward on killer robots shows that countries need to pursue a different avenue to prohibit these weapons systems. The world can’t wait.”
On the flip side, at least 30 countries have already reportedly voiced support for banning autonomous weapons systems. Those calls have even gained the support of UN Secretary-General António Guterres, who has said such systems should be prohibited under international law.
“Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law,” Guterres said in a 2019 statement.