Rule of Law, Military History, and “Meaningful Human Control”

Humanity has a violent history. Our ancestors, it seems, were eager to find justification for their violence, and so our violent history has a parallel history of debate about “just war” and “lawful war.”

A key question today is whether the law can constrain the use of Artificial Intelligence in armed conflict. Movies like The Terminator series have conjured up concerns about out-of-control killer robots.

This paper (the second of a series) examines some history of efforts to restrain nations in conflict and some issues around the restraint of military AI.

A History of Efforts to Constrain Warring Nations and New Technology

Today, the US and many other nations have clearly codified standards for lawful conduct during conflicts. It has not always been so. While opinions differ, the mainstream view dates the meaningful “Law of Armed Conflict” (LOAC) or “Law of War” to 1863. Before 1863, certain codes might have restrained individuals or armies. Codes of chivalry laid out the proper behavior of a knight. Some religious texts (the Bible, the Koran) provide rules guiding a few aspects of conflict. Sun Tzu (sixth century BC) and ancient Sanskrit texts dealt with issues like the treatment of prisoners.

But in 1863, Professor Francis Lieber of Columbia College wrote the Lieber Code. A revised version was issued as General Order No. 100 of the Union Army during the American Civil War. It sought to limit the damage caused by war and to regulate matters like the treatment of prisoners. It called for the “Protection of persons and especially of women, religion, the arts and sciences.”

An international conference in Geneva, Switzerland, was also held in 1863. It led to a series of resolutions, conferences, declarations, and agreements. By 1874, something recognizable as the “Laws of War” had been drafted, but attempts to ratify it failed. Eventually, conferences at The Hague and Geneva led to regulations adopted in 1899, 1906, and 1907.

So, while the U.S. and some other nations operated under self-imposed regulation of conflict as early as 1863, the larger international community lagged about 50 years. Interestingly, the 1907 agreement failed to include regulations on the use of poison gas. The emerging technologies of dirigibles, aircraft, and gas were too new and, therefore, too difficult for the parties to grapple with.

World War I broke out soon after the Hague regulations had been adopted. The horrific slaughter of the Great War led many nations to seek better rules. This led to the 1929 Geneva Convention, which became the standard for World War II and beyond.

The Geneva Convention is famous; many old war movies mention it. Less well known is a treaty signed in 1928: the Kellogg-Briand Pact, a “General Treaty for the Renunciation of War.” The parties who ratified this pact promised to renounce war and to seek a peaceful settlement of “all disputes or conflicts of whatever nature or whatever origin they may be…”

The Kellogg-Briand Pact was signed in August 1928. Three years later, in September 1931, someone blew up a section of the Japanese-owned railroad in Manchuria. Japan proceeded to “respond” by conquering Manchuria, undeterred by the pact it had signed or by international condemnation. The conquest included an assault on Shanghai in 1932 and, later, the “Rape of Nanjing” in 1937. These actions violated nearly every clause of every treaty Japan had signed.

Japan is hardly alone in ignoring LOAC obligations. Many other examples could be provided. But it should be clear how hard it is to create international restrictions on warfare, how long it takes to agree, and how easy it is to ignore any agreement, rule, or regulation.

We also see that the U.S. unilateral adoption of the Lieber Code in 1863 did not quickly motivate other nations to follow its example.

Perhaps as important is the difficulty of restraining the use of innovative technology. Air warfare and chemical weapons were understood during the discussions of 1906 and 1907, but no meaningful restraints resulted, though the 1899 and 1907 agreements did forbid the use of “poison or poisoned weapons.”

During World War I between 1915 and 1918, gas attacks probably caused more than 1 million casualties and about 90,000 deaths. Today, many nations still maintain military units specifically trained to deal with Chemical, Biological, Radiological, and Nuclear (CBRN) threats. China alone reportedly has more than a dozen CBRN brigades.

So, while arms control is a necessary goal in the context of LOAC, history shows how hard it is to adopt and enforce workable restraints on technology.

One indicator of whether meaningful controls can be achieved is a nation’s perceived security needs:

  • In 1915, Germany had an advanced chemical industry and needed a battlefield breakthrough, so it turned to poison gas despite earlier prohibitions.
  • After World War II, it became clear that all signatories had cheated on the 1922 Washington Naval Treaty, which aimed to constrain the race to build bigger warships. Every signatory felt the need to cheat while paying lip service to compliance.
  • 164 nations have joined the Ottawa Mine Ban Treaty, which entered into force in 1999, but nations that need a barrier against risky neighbors (South Korea, India, Pakistan…) have not.
  • The Convention on Cluster Munitions became international law in 2010. China, India, Russia, and some seventy other nations have not signed, presumably for tactical or strategic reasons.

None of this bodes well for constraining AI, for imposing a standard of “Meaningful Human Control,” or for any other limitation on military AI. Much has been made of the Responsible AI in the Military Domain Summit (REAIM 2023) in The Hague, and the United States has issued a declaration supporting it, but none of the statements to date are binding.

Because AI is not a physical thing like a chemical plant or a nuclear enrichment facility, enforcement of an AI treaty seems impossible. One potential agreement would be to ensure that at least nuclear weapons remain firmly under human control; inspection protocols already in place might be expanded to support this restraint. A broader ban or set of controls on AI in conventional warfare seems unlikely to be achievable. And even if enacted, history suggests full compliance would be unlikely.

However, some treaties do help moderate the behavior of nations. While the 1922 Washington Naval Treaty was widely violated, it was given more than lip service. Nations entering the treaty hoped to avoid a costly arms race in capital ships like the one that preceded World War I. The economic realities of the 1930s did not support a full-blown arms race either. So, cheating was largely limited to factors such as displacement (building bigger ships than permitted) and did not extend to the widespread building of larger fleets.

AI deployment will probably be constrained by the perceived risks and realities facing each nation, and these will vary widely. The choices made by democracies will include considerations of what voters can accept and, perhaps, what can be classified so voters are not burdened with the knowledge.

Should the United States and Western Democracies Constrain Autonomy?

Despite the likelihood that many nations will cheat on any restraints on autonomy, it makes sense for Western Democracies to impose their own limits. Just treatment of prisoners of war is an example of a constraint, often self-imposed, adopted in the hope of influencing an opponent’s behavior. Autonomy is similar.

Constraints on autonomous control of nuclear weapons are probably the most important and obvious place to begin. AI is neither mature enough nor dependable enough to be trusted with launch decisions for any part of the US nuclear triad. “Reliability” in AI terms usually refers to the quality of training data, not to system reliability, and how system reliability might be assessed would be controversial. Since avoiding an accidental nuclear war is very desirable, keeping the trigger away from AI seems uncontroversial, even as a unilateral commitment.

At the other end of the spectrum, the US and its allies should avoid any commitment to constrain the use of AI for assessing software-controlled threats. With the advent of software-controlled systems, observable signatures can be changed instantly. Humans cannot hope to respond to continuous changes in networks, signals, and, in some cases, physical signatures. Most mainstream AI cannot do this either, though some promising advances outside the mainstream suggest it is feasible. Humans will need AI to provide automated assessment and, in some cases, automated targeting.

There is a vast gray zone between these two extremes, and that is where military scholars, diplomats, and lawyers seem likely to debate and struggle. This is not a “one size fits all” assessment. Japan and Taiwan, though democracies, are likely to see the world differently than Germany or the United Kingdom.

Conclusions

History teaches that constraints on military technology move slowly. Even when nations agree, treaties are “made to be broken.”

There are areas where Western Democracies should agree to impose controls on autonomous systems. But there are some military needs for which constraint is not in any nation’s interest. In the gray zone between these two extremes, we must expect years (perhaps decades) of confusion about how LOAC applies to autonomy and AI.

In the meantime, democracies should be prepared to wage war despite the lack of a clear legal framework. Further, we must expect that some adversaries will reject any meaningful controls on autonomy. The history of LOAC and arms control treaties shows that cheating is the norm, and perhaps the most honest participants are those who refuse to participate.

The U.S. may not have the luxury of opting out of autonomy controls. Likewise, the U.S. should not assume other parties will abide by such controls. And the United States should assume others in the Five Eyes and NATO will choose different limitations.

Read the full blog series here:

Article One

Article Two

Article Three

Article Four