Risks in Current AI Adoption Strategies Among the Western Allies

Autocracies are rapidly deploying AI to support their national objectives. The West, by contrast, is constrained by the Law of Armed Conflict (LOAC), by other legal regimes, and by voter concerns about the risks of autonomy. Obviously, there are risks to self-imposed restraints. But asymmetry in AI adoption and applications is not the only risk the West faces. This article (the third in the series) addresses several risks that policymakers and military strategists should consider.

A Catalogue of Risks: Military Autonomy and Artificial Intelligence

Risk #1 – Legal Asymmetry. Perhaps the most discussed risk is the asymmetry between the US and autocracies, Russia and China in particular. This includes both LOAC asymmetry and other differences in legal restraints on governments.

Some legal asymmetries are known. Our adversaries are not constrained in the collection of personal data within their own states. They are also unfettered in the data they obtain from our citizens and companies. Some of this is stolen, and some is simply gathered through means such as commerce. So, legal asymmetries exist and extend far beyond the laws of conflict.

The persistent collection of information means China probably knows at least a dozen things about nearly every person in the United States and many other nations. This rich trove is undoubtedly being used for AI training, creating options for hostile action should China opt to use them. Assembling such a trove would violate the laws of the UK, the EU, and the US. So, China likely knows more about Western citizens than those citizens' own governments do. Further, it is unlikely Western nations know as much about Chinese, Iranian, Russian, or North Korean citizens, despite the Intelligence Community's efforts to match China's data acquisition.

We should also assume potential adversaries will not adopt any restraints the West embraces, such as Meaningful Human Control (MHC).

Beyond the major threats (China, Russia…), other nations also operate free of the West's legal restraints. Cyber operations can be very cost-effective. They can be wielded with precision by autocratic regimes and non-state actors. The growing use of AI for deception and for precision targeting of cyber operations provides an asymmetric advantage to any adversary who seeks to exploit the huge "attack surface" a democratic nation offers.

Risk #2 – Born Digital. Emerging threats were, for the most part, "born digital." We should assume (for example) that all of China's radars, communications links, and networks are "software-defined" and can be reconfigured at will. This ability to reconfigure instantly is a challenge to any AI-aided intelligence system monitoring China. North Korea, Iran, and Russia have older defense infrastructures. However, the war in Ukraine is providing Russia with lessons on where to prioritize change.

While Western militaries have software-defined networks, radios, and radars, none of these systems was "born digital," so they carry legacy training and tactics, techniques, and procedures (TTPs) that perpetuate old constraints.

The pervasive digital capability of China's military creates opportunities for a wide range of AI use cases which can be developed, deployed, and refined at a speed the West is probably unable to match. China's partners can choose where to close the gaps that come from not being "born digital."

Risk #3 – Born Networked. Most of China’s military has been built in the last 20 years. In that time, China has become a major provider of networking systems. Meanwhile, no purely American provider of large-scale networks has survived the telecom mergers of the same period.

So, China has two advantages: its systems were born networked, and it has a domestic provider with advanced digital networking capabilities (Huawei) able to supply network systems that support and deploy AI at any scale.

Risk #4 – Mimicking the Wrong Role Models. Much attention has been given to Silicon Valley and “Big Tech” as potential inspirations to help define the future of military AI in the West.

However, the nature of the data at Walmart, Amazon, Google, and Facebook is much different from any DoD data: different in scale, in cost, in completeness, and in purpose.

Likewise, the allowable error rates are much different when suggesting a target or recommending an egress route than when recommending a cat food or a web page.

Western defense organizations can and should learn from commercial firms, but consumer-focused retail and advertising firms do not have all the answers military organizations need.

Some defense applications can use mainstream commercial AI. Large Language Models (LLMs) can be useful for many applications. Some machine learning methods have error rates too high for autonomous target selection but might be good enough for facial recognition at a security perimeter, as the arithmetic below illustrates.
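A minimal sketch, using Bayes' rule and entirely hypothetical detection rates and prevalences, shows why the same raw error rate can be acceptable in one application and disqualifying in another: when true targets are rare, even a highly accurate detector produces mostly false alarms.

```python
# Minimal sketch (all rates and prevalences are hypothetical assumptions):
# the base-rate effect makes one error rate tolerable at a perimeter and
# disqualifying for autonomous target selection.

def precision(prevalence: float, tpr: float, fpr: float) -> float:
    """Fraction of positive calls that are correct, via Bayes' rule."""
    true_pos = prevalence * tpr
    false_pos = (1.0 - prevalence) * fpr
    return true_pos / (true_pos + false_pos)

# Perimeter: assume 1 in 10 faces screened is on the watch list.
print(f"Perimeter alerts that are correct: {precision(0.10, 0.99, 0.01):.1%}")

# Targeting: assume 1 in 10,000 candidate tracks is a valid target.
print(f"Target calls that are correct:     {precision(0.0001, 0.99, 0.01):.2%}")
```

With a 99% true-positive rate and a 1% false-positive rate, roughly 92% of the perimeter alerts are correct, but fewer than 1% of the "target" calls are.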

One size does not fit all, and it is dangerous to assume mainstream AI methods will succeed in every military application.

Risk #5 – The Speed of Software-Defined Systems. The 21st century has ushered in "software-defined everything." Networks, radars, radios, telecommunications, and nearly every other electronic aspect of modern life are controlled by software, and hard-wired functions are fading into history.

During the Cold War, most military electronic systems could change operating parameters (or “signatures”) by some physical means, such as changing a crystal or throwing a switch. But change was limited and constrained by training, coordination, and physics.

Today, networked warfare systems can change nearly every aspect of their operational parameters. Further, such changes can be continual.

The dynamic nature of modern warfare system signatures has two implications for AI.

First, these rapid changes mean AI is needed to assess and identify adversary assets; human analysts have no hope of keeping up.

Second, no labeled examples from peacetime will be available to train an AI using the most favored methods of the commercial world, which depend on large volumes of labeled training data. The sketch below illustrates the problem.
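A minimal sketch, with synthetic numbers invented for illustration: a simple classifier is fit to labeled "peacetime" emitter parameters, and then a software-defined emitter shifts its signature into the region the model learned to associate with a different emitter.

```python
# Minimal sketch (synthetic, illustrative parameters): supervised learning
# on stable peacetime signatures collapses when a software-defined emitter
# reprograms its parameters.

import numpy as np

rng = np.random.default_rng(0)

# Peacetime: two emitter types with stable center frequency (GHz) and
# pulse width (us), the kind of labeled data supervised methods depend on.
emitter_a = rng.normal(loc=[9.0, 1.0], scale=0.05, size=(500, 2))
emitter_b = rng.normal(loc=[9.5, 2.0], scale=0.05, size=(500, 2))
centroids = {"A": emitter_a.mean(axis=0), "B": emitter_b.mean(axis=0)}

def classify(x: np.ndarray) -> str:
    """Nearest-centroid classifier trained on peacetime observations."""
    return min(centroids, key=lambda k: float(np.linalg.norm(x - centroids[k])))

# Wartime: emitter A is reprogrammed in software and now radiates in the
# parameter region the model learned to associate with emitter B.
wartime_a = rng.normal(loc=[9.5, 2.0], scale=0.05, size=(500, 2))
accuracy = np.mean([classify(x) == "A" for x in wartime_a])
print(f"Accuracy on the reprogrammed emitter: {accuracy:.0%}")  # near 0%
```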

Dependence on automated systems, which may not be "explainable," is a direct challenge to "Meaningful Human Control," and hence a potential asymmetry for any nation that constrains its actions and whose AI is based on widely adopted commercial AI methods.

Regardless of Western hesitation about the lack of “Meaningful Human Control,” the West must avoid suicidal self-constraints which other nations will not share.

Risk #6 – Reliance on Open-Source Libraries. As long suspected, we now have unclassified evidence of China’s efforts to manipulate Western Open-Source software libraries. At the same time, China seeks to create their own isolated Open-Source system (presumably shared with their partner nations).

This calculated asymmetry aims to increase the risks and costs of software for the West while lowering them for China and its partners.

Because Open-Source AI is algorithmic, inserted exploits are difficult to detect. Unlike Open-Source server software, where an exploit might be a hidden "back door," algorithms can perform as advertised… until they do not.

Nearly all algorithms include implicit assumptions. All require some type of computational implementation when they are reduced to software. All must run on computers with certain computational limits.

This means compromised AI can pass nearly all the quality and integrity tests one can imagine but will still reliably malfunction when stimulated with the proper input. Much like a hypnotist can plant a suggestion which only becomes manifest in the presence of a trigger word, AI can malfunction when confronted with a data pattern. From a malware point of view, this “bug” is the feature.
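A toy sketch of that mechanism, with every name and value invented for illustration (a real poisoned model hides the trigger in learned weights, where no source inspection will find it):

```python
# Toy, hypothetical sketch of the "hypnotist" failure mode: a routine that
# matches its advertised behavior on essentially all inputs, yet reliably
# malfunctions on one planted, operationally rare pattern.

import random

TRIGGER = [0.11, 0.22, 0.33]  # the planted "trigger word"

def classify_track(signature: list[float]) -> str:
    """Follows the advertised rule, except on the planted trigger."""
    if signature[:3] == TRIGGER:
        return "friendly"  # reliable misclassification on demand
    return "hostile" if sum(signature) > 2.0 else "friendly"

def advertised_rule(signature: list[float]) -> str:
    """The documented, legitimate behavior, used here as a test oracle."""
    return "hostile" if sum(signature) > 2.0 else "friendly"

# Quality and integrity testing: random inputs essentially never hit the
# exact trigger, so the compromised routine passes every check.
random.seed(1)
tests = [[random.random() for _ in range(4)] for _ in range(100_000)]
mismatches = sum(classify_track(t) != advertised_rule(t) for t in tests)
print(f"Mismatches in {len(tests):,} random tests: {mismatches}")  # prints 0

# An adversary who knows the trigger can invoke the failure at will.
print(classify_track(TRIGGER + [9.99]))  # prints "friendly"
```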

It seems safe to assume these risks extend to Open-Source AI libraries. Since every major cloud and platform provider in the West relies primarily on Open-Source AI, this should concern military AI planners.

Risk #7 – Failing to Utilize the Advantages of the Capitalist Economy. The commercial market for AI will be vast compared to the market for defense applications. Market forces will create many alternatives and ongoing improvements in AI economics. If defense customers erect barriers to entry, the market will pass them by, and defense will have, at best, second-tier technology.

Risk #8 – Failing to Leverage the Dynamic Diversity of Western Markets. Defense organizations naturally migrate to standards and standardized systems. Commercial markets standardize as well, but increasingly, commercial systems are open: interfaces are defined, but "how to build it" is not. If DoD falls prey to a standardized, one-size-fits-all approach to AI, it will fail to capitalize on the rich and growing diversity of AI in our economies. Governments and their prime contractors should seek out the best AI solutions and be ready to shift to new improvements as they emerge.

Risk #9 – Underestimating the Sheer Size of the AI Risk Matrix. Each of the eight risks described above is a serious concern. Taken together, they offer a large game space in which nations can seek different forms of advantage.

The risk matrix (or game space) is large: each nation in the space can potentially exploit the eight risks above, plus others. So, each pairing of nations defines an interaction space at least eight by eight, as sketched below.
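A back-of-the-envelope illustration of how quickly the space grows (the nation count is a hypothetical assumption; the risk count comes from this article):

```python
# Back-of-the-envelope sizing of the risk matrix. The nation count is a
# hypothetical assumption; the eight risks are the ones described above.

from math import comb

nations = 10      # assumed number of significant AI-capable states
risks = 8         # the eight risk categories above ("or larger")

pairings = comb(nations, 2)        # distinct nation-versus-nation pairings
cells = risks * risks              # each side's risks against the other's

print(f"{pairings} pairings x {cells} cells = {pairings * cells:,} interactions")
# prints: 45 pairings x 64 cells = 2,880 interactions
```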

Moreover, the space is not just the West as a bloc versus a Sino-Russian bloc. As seen in the 5G rollout and the efforts to hinder Huawei, each ally may take a different position.

Based on Lone Star’s polling, as described in the first paper in this series, it seems likely different democracies will take different approaches to military AI uses and AI constraints.

Few nations seem to be seriously considering the complexity and size of the risk matrix. This means that both threats and opportunities to exploit weaknesses are likely to be missed.

Conclusions

The risks described here are not a complete list. They illustrate the kinds of AI risks strategists and policy planners should weigh.

Seriously exploring these risks can help yield the highest-leverage options for deterrence, defense, and risk hedging.

Ignoring these AI risks is not a viable option.

Read the full blog series here:

Article One

Article Two

Article Three

Article Four