Military AI Risks and Opportunities Series – 1 of 4
The Citizen’s View of “Meaningful Human Control”
Humans have trust issues when it comes to Artificial Intelligence. Lone Star and others have conducted polling research for several years, and some of the results are surprising.
While consumers and legislators debate the use (and limits) of AI in things like credit scoring and self-driving cars, militaries around the world are weighing how to use AI in war, and here, too, limits are being debated.
To start this series of articles exploring military applications of AI, we present survey results on citizens’ views of computer control of a conflict, and on how those views differ across democracies.
Citizen/Consumer Perspectives on the Role and Limits of AI
To understand what citizens of a democracy will expect (or tolerate) from their military, we should first consider how they view the forms of AI they experience in their daily lives.
A recent survey by AAA found that 91% of respondents expressed fear or uncertainty about fully self-driving vehicles; only 9% said they would “trust” self-driving cars. More importantly, the share of those who say they “fear” them has grown, while the share who “trust” them has shrunk.
In contrast, most American drivers surveyed said they favored emergency automatic braking and lane-keeping assistance. So, American drivers favor AI that “assists drivers,” but not AI that “replaces drivers.” The AAA (and other) survey results are consistent: Americans and the citizens of some other democracies have deeply rooted skepticism of Artificial Intelligence.
This wariness is changing slowly. But contrary to the hopes of Silicon Valley, US consumers are not gradually accepting AI. They are gradually adapting to resist it. Lone Star polling indicates that many respondents in Western nations actively protect their privacy through deception.
Consumers may not think of their actions as “deception,” but they widely report using false or throwaway email addresses, pseudonyms, false personal details (physical addresses, phone numbers, birthdays), privacy technology (private browsing modes, VPNs, and the like), and other means to deny data to, or distract, the AI used by retailers and advertisers. In fact, respondents often reported using several of these “self-protections” at once.
So, consumer-citizen voters are skeptical of turning control over to AI, and they have no qualms about fibbing to AI to maintain control over their personal data and their privacy. They treat machines differently: they take actions that would count as “a lie” in dealing with another human.
They want something like what military legal experts call “Meaningful Human Control.”[i]
Military Perspectives on the Role and Limits of AI
Autonomy has a long history in military applications. Sperry demonstrated the first autopilot in France shortly before the First World War. When I lecture on AI at universities, I often ask students if Sperry’s 1914 autopilot was an “AI.” The students always say it was not. When I point out the Oxford definition of AI, “computer systems able to perform tasks that normally require human intelligence,” they struggle to say why Sperry’s analog computer is not an AI replacing the expertise of the human pilot. At the end of these sessions, they remain convinced this is a trick question but also remain unable to explain why.
This is like the challenge faced by military planners and lawyers who struggle with the use of AI in future conflicts. The military had “smart bombs” in Vietnam. Most warships have long been protected by a Close-In Weapon System (CIWS), which automatically detects and engages threats that pop up faster than a human operator can react.
Are these “AI”? The emerging answer suggests this is the wrong question. Rather than asking “What is AI?”, theorists who debate the law of armed conflict (LOAC) ask a different question: “What is Meaningful Human Control” of autonomous systems? They suggest an autonomous system should be controlled in a way that ensures legal obligations are met.
We[ii] wanted to explore how citizens viewed these topics. So, Lone Star conducted polls of 6,100 citizens in Japan, Germany, the UK, and the US with questions about hypothetical missile attacks. These polls were conducted in 2023 (during the Ukraine-Russia conflict) and again in 2024 (shortly after Iran attempted a mass missile attack against Israel).
Our respondents were more likely to accept computer assistance than computer autonomy. The degree of resistance to autonomy varied with scenario details and by nation.
Japanese respondents were most likely to favor computer autonomy. In 2023, there was a 25% difference in favorable/unfavorable attitudes between Japanese and US respondents, the greatest contrast among the national cadres we polled. The differences weakened in 2024, but the gap persists, with US respondents remaining net negative and Japanese respondents remaining net positive.
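For readers unfamiliar with the “net positive”/“net negative” framing, a net attitude score is simply the share of favorable responses minus the share of unfavorable responses. The short sketch below illustrates that arithmetic with hypothetical response counts (not the actual Lone Star survey data), chosen only so the illustrative gap works out to 25 points.

```python
# Illustrative only: hypothetical counts, not the actual survey data.
def net_favorability(favorable: int, unfavorable: int, neutral: int) -> float:
    """Net attitude score: (favorable% - unfavorable%) of all responses."""
    total = favorable + unfavorable + neutral
    return 100.0 * (favorable - unfavorable) / total

# Hypothetical contrast between two national cadres:
japan_net = net_favorability(favorable=450, unfavorable=300, neutral=250)  # +15.0
us_net = net_favorability(favorable=300, unfavorable=400, neutral=300)     # -10.0
print(f"Japan: {japan_net:+.1f} pts, US: {us_net:+.1f} pts, gap: {japan_net - us_net:.1f} pts")
```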
This Japanese acceptance of AI for defense is similar to polling results about self-driving cars. The Japanese are the society most willing to turn over life and death control to a computer.[iii]
The largest change in attitude among the cadres polled was seen in Germany. Our 2023 polling was much like the self-driving car polling by Taniguchi et al. Germans were skeptical. But in 2024, German respondents swung to a net positive view, perhaps influenced by the proximity of Russian attacks on Ukraine.
UK perspectives were negative overall in both 2023 and 2024, but by 2024 had moved to nearly neutral.
US views changed the least and, in 2024, were the most skeptical about military use of computer-controlled conflict.
Respondents were more likely to accept computer control without human oversight in self-defense scenarios. They were overwhelmingly unwilling to allow AI targeting without human intervention.
When asked to pick one policy-level position across all scenarios, the most common choice was “Human action guides the computer, but fast decisions are made by the computer alone.” Only the Japanese preferred “Human action is an option for each major decision.”
The least common choice was, “The computer makes all decisions, based on AI, which humans may not be able to understand.”
Conclusions
Four broad conclusions can be drawn from this research:
- Western democracies are reluctant to accept computer-controlled conflict without “Meaningful Human Control,” although the degree of desired human checks and balances varies by nation.
- Americans are the most resistant to unsupervised autonomy among the national cadres we polled.
- Attitudes about the risks of self-driving cars seem to be a good proxy for attitudes about computer-controlled warfare.
- Unexplainable AI is deeply mistrusted, regardless of other attitudes about autonomy.
[i] For example, see https://mwi.westpoint.edu/what-is-meaningful-human-control-anyway-cracking-the-code-on-autonomous-weapons-and-human-judgment/
[ii] Patrick Ferguson, Eric Haney, Rhiannon McNamara, and the author collaborated to conduct this polling. Rhiannon managed the polling platform and conducted initial data analysis. This research was conducted at the sole expense of Lone Star Analysis.
[iii] Taniguchi, A., Enoch, M., Theofilatos, A., & Ieromonachou, P. (2022). Understanding acceptance of autonomous vehicles in Japan, UK, and Germany. Urban, Planning and Transport Research, 10(1), 514–535. https://doi.org/10.1080/21650020.2022.2135590