Autonomous Weapons and the Cardinal Principles of IHL


In 2020, a drone strike was conducted against Libyan National Army forces by a weapons system with no known human “in the loop” (Russell et al, Lethal Autonomous Weapons Exist; They Must Be Banned). This “slaughterbot”, the STM Kargu-2, remains fully operational even when GPS and radio links are jammed, and is equipped with facial recognition software to target humans (ibid). It is the latest form of autonomous weaponry, signalling the third revolution in warfare after gunpowder and nuclear arms. While there is no universally accepted definition of ‘lethal autonomous weapon systems’ (LAWS), it is acknowledged that these weapons combine enormous destructive capability with full autonomy, in that there is no manual human control over the system. U.S. policy on autonomy in weapons systems defines LAWS as “weapon system[s] that, once activated, can select and engage targets without further intervention by a human operator” (Report to Congress on Lethal Autonomous Weapon Systems). The UK’s Ministry of Defence defines LAWS as weapon systems capable of human-level cognition (Russell et al, supra). Once activated by human intervention, LAWS can make judgements about the viability of targets and operate without further instructions.

These features have sparked a debate on their compliance with International Humanitarian Law (IHL), as well as their moral, legal, and strategic standing within national security applications. A 2021 report by the U.S. Congressional Research Service states that “there are no domestic or international legal prohibitions on the development or use of LAWS” (Russell et al, supra). However, as of July 2018, 26 states supported a preemptive ban on LAWS (ICRC, The Martens Clause and the Laws of Armed Conflict). U.N. Secretary-General António Guterres wrote on Twitter that “Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law” (Autonomous weapons that kill must be banned, insists UN chief). This article explores the legitimacy of such weapons in IHL discourse, while highlighting the internal tensions in the debate (Russell et al, supra). It concludes that, with appropriate and sufficient testing and human oversight mechanisms in place, LAWS can be a suitable alternative to human fighters while complying with IHL.

Principle of Humanity

The Martens Clause of the 1899 Hague Convention, restated in Article 1(2) of Additional Protocol I to the Geneva Conventions, provides that even when not covered by other laws and treaties, civilians and combatants “remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of human conscience” (ICRC, The Martens Clause and the Laws of Armed Conflict, supra). It is argued that since LAWS remove humans from life-and-death decision-making and themselves lack human conscience, they are unable to empathise with human beings or respect human dignity. Without compassion, LAWS contradict the principle of humanity, which seeks to minimise human suffering (Docherty, The Need for and Elements of a New Treaty on Fully Autonomous Weapons). Devoid of human conscience, these self-governing systems cannot engage in the essential battlefield decision-making directed by human morality (HRW, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots). Basing their actions on pre-programmed algorithms, LAWS cannot fully appreciate the value of a human life or the significance of its loss. Although LAWS are not specifically prohibited, the Martens Clause offers human conscience as a guiding principle where international law contains no formal prohibition.

On the other hand, it may be argued that human conscience is subjective; what sounds inhumane to one person might be entirely acceptable to another. Moreover, as Sassòli rather convincingly argues, “[o]nly human beings can be inhuman and only human beings can deliberately choose not to comply with the rules they were instructed to follow…A robot cannot hate, cannot fear, cannot be hungry or tired and has no survival instinct. Robots do not rape” (Sassòli, Autonomous Weapons and International Humanitarian Law). Indeed, an autonomous weapon system may better comply with the dictates of humanity precisely by virtue of not being human. Retaining human oversight over systems which are better able, through algorithms or the like, to abide by the cardinal principles of IHL may be a way to ensure that human dignity is protected.

In response, the US has argued that since in some cases there could be humanitarian benefits to LAWS, a ban imposed now, before those benefits have been explored, would be “premature” (Evans, Too Early for a Ban: The U.S. and U.K. Positions on Lethal Autonomous Weapons Systems). It may be worth exploring the potential of LAWS to better comply with the principle of humanity (under appropriate human oversight) before banning them outright, so as to fully ascertain their potential for respecting the law.

Principle of Distinction

According to IHL’s principle of distinction, the parties to a conflict must at all times distinguish between civilians and combatants in order to spare the civilian population and civilian property; attacks may be directed solely against military objectives (Article 48, Additional Protocol I to the Geneva Conventions, 1977). Even in conventional warfare, distinguishing civilians from combatants is an increasingly complex task, as combatants increasingly choose to blend into civilian populations. The distinction therefore remains difficult whether made by visual means or by LAWS’ AI technology (AJ, Islamic State in Khorasan Province, ISKP (ISIS-K)). Because of this intermingling, LAWS may fail to distinguish effectively between civilians and combatants, resulting in indiscriminate killing and a violation of the principle of distinction. Docherty argues that “the ability to distinguish combatants from civilians or those hors de combat often requires gauging an individual’s intentions based on subtle behavioural cues, such as body language, gestures, and tone of voice. Humans, who can relate to other people, can better interpret those cues than inanimate machines” (Docherty, supra). However, it could also be said that with extensive training and research, LAWS could learn to distinguish precisely between combatants and non-combatants, e.g. infants and unarmed individuals, and to compute subtle behavioural cues on an almost human level. How plausible this is, only time and technological advancement will tell. One conservative design, sketched below, would require a high classification confidence before any engagement and defer to a human operator in every other case.
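To make that conservative safeguard concrete, the following is a minimal, purely illustrative Python sketch. The classifier output (`label`, `confidence`), the threshold value, and all names are assumptions invented for illustration; they do not come from any real weapons system, and real-world compliance with distinction would require far more than a confidence gate.

```python
from dataclasses import dataclass

# Hypothetical classifier output; a real system would fuse many sensors.
@dataclass
class TargetAssessment:
    label: str         # e.g. "combatant", "civilian", "unknown"
    confidence: float  # classifier confidence in [0, 1]

# Illustrative threshold only; acceptable error rates would have to be
# agreed by states and verified in testing, as the article argues.
ENGAGEMENT_THRESHOLD = 0.99

def may_engage(assessment: TargetAssessment) -> str:
    """Conservative distinction gate: permit engagement only on a
    high-confidence 'combatant' classification; defer everything else."""
    if assessment.label == "combatant" and assessment.confidence >= ENGAGEMENT_THRESHOLD:
        return "ENGAGE-PERMITTED"
    if assessment.label == "civilian":
        return "DO-NOT-ENGAGE"
    return "DEFER-TO-HUMAN"  # uncertainty is resolved by a human, not the machine

print(may_engage(TargetAssessment("combatant", 0.97)))  # DEFER-TO-HUMAN
```

The design choice here mirrors the article’s argument: the machine never resolves ambiguity itself; ambiguity is routed to a human by default.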

Accurate distinction is made harder by the fact that AI algorithms depend on the data sets they were trained on, and those data sets are not fully accurate. Studies have shown that some facial recognition programs are up to 100 times more likely to misidentify Asian and African American faces than white faces, so LAWS might be racially or ethnically biased (Harwell, Federal study confirms racial bias of many facial-recognition systems, casts doubt on their expanding use). They are thus not only capable of killing civilians and combatants indiscriminately, but also risk disproportionately killing Black and brown civilians. Moreover, assumptions about men’s roles in warfare may miscategorise civilian men as combatants, owing to deeply embedded gender biases among human operators and, hence, in the LAWS they program (Gender and Killer Robots). Such gendered facial recognition systems could make men, regardless of their actual combatant or civilian status, hyper-visible as targets (Harwell, supra). For this reason, a gendered and racial lens would have to be applied in ensuring LAWS’ compliance with the principle of distinction; one standard way to surface such bias is to compare error rates across demographic groups, as illustrated below.
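As a hedged illustration of how such bias could be detected before deployment, the short Python sketch below computes false-positive rates per demographic group from hypothetical evaluation records. The records, field names, and groups are invented for illustration; real audits, such as the NIST study cited above, are far more rigorous.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true_label, predicted_label).
# A "false positive" here is a civilian wrongly predicted to be a combatant.
records = [
    ("group_a", "civilian", "combatant"),
    ("group_a", "civilian", "civilian"),
    ("group_b", "civilian", "civilian"),
    ("group_b", "civilian", "civilian"),
]

def false_positive_rates(records):
    """Return, per group, the fraction of civilians misclassified as combatants."""
    fp = defaultdict(int)
    total = defaultdict(int)
    for group, truth, predicted in records:
        if truth == "civilian":
            total[group] += 1
            if predicted == "combatant":
                fp[group] += 1
    return {g: fp[g] / total[g] for g in total if total[g]}

print(false_positive_rates(records))
# e.g. {'group_a': 0.5, 'group_b': 0.0} -> a large disparity between groups,
# which would have to block deployment under the kind of distinction-compliance
# testing the article calls for.
```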

Principle of Proportionality

The principle of proportionality seeks to limit the damage caused by military operations by requiring that the effects of the means and methods of warfare used not be disproportionate to the military advantage sought (ICRC, Proportionality). The principle requires determining, case by case and in a rapidly changing environment, whether expected civilian harm outweighs anticipated military advantage (Human Rights Watch and IHRC, Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban). It is not merely a quantitative calculation; it requires the application of human judgment, informed by legal and moral norms and personal experience, to the specific situation (Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control). Since LAWS are not programmed to deal with the infinite number of unexpected situations on the battlefield, they are not equipped to make a judgement on the proportionality of an attack, which carries an unacceptable risk of violating the principle (Docherty, supra). One example of such a situation is mass fratricide, with numerous weapons turning on friendly forces; such scenarios can arise through hacking, enemy behavioural manipulation, unexpected interactions with the environment, or technical malfunction (Autonomous Weapons and Operational Risk, Ethical Autonomy Project). Of course, if LAWS are programmed to handle the widest possible range of battlefield scenarios, they may serve as a viable alternative to human soldiers. The sketch below illustrates why a purely numerical proportionality test is inadequate on its own.
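To show concretely why proportionality cannot be reduced to arithmetic, the following minimal Python sketch encodes a naive numerical comparison and then forces every non-obvious case out to a human. Everything here, the commensurable “scores”, the margin, and the function names, is a hypothetical assumption, not a real targeting algorithm; indeed, the absence of any agreed metric for these scores is precisely the article’s point.

```python
def proportionality_check(expected_civilian_harm: float,
                          anticipated_military_advantage: float,
                          margin: float = 0.25) -> str:
    """Naive numerical proportionality test over hypothetical 'scores'.

    In reality no agreed way exists to quantify either input, which is
    why the article argues this judgement must stay with humans.
    """
    if expected_civilian_harm == 0:
        return "PROCEED"
    if anticipated_military_advantage <= 0:
        return "ABORT"          # no advantage can justify civilian harm
    ratio = expected_civilian_harm / anticipated_military_advantage
    if ratio > 1 + margin:
        return "ABORT"          # clearly excessive civilian harm
    if ratio < 1 - margin:
        return "PROCEED"        # clearly proportionate on these numbers
    return "DEFER-TO-HUMAN"     # the contested middle ground

print(proportionality_check(0.9, 1.0))  # DEFER-TO-HUMAN
```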

Indeed, it is likely that states will rely on LAWS more and more as time goes on. With fewer human fighters on the ground, fewer casualties would occur, and national security could become a less costly exercise. For example, the US Department of Defense estimates that it costs the Pentagon about $850,000 per year to field each soldier in Afghanistan, whereas a small rover equipped with weapons costs roughly $230,000 (Autonomous Weaponry: Are Killer Robots in Our Future?). In light of this, improving these weapons’ compliance seems a more worthwhile approach than attempting to ban them completely.

Because they lack sentience and contextual judgement, LAWS require human beings to remain ‘in the loop’ to apply the qualitative, subjective, and context-dependent elements of these assessments. To further refine this position, comprehensive definitions of military objectives as well as civilian objects will be needed to demarcate the permissible subjects of attack (Devine, Principle of Distinction vis-à-vis Artificial Intelligence: Beginning of a Bloodless Revolution?). Similarly, programming LAWS to respect the principle of proportionality may force states to agree on how exactly it must be calculated (Sassòli, supra). Finally, in a conflict scenario devoid of any civilian presence, e.g. in space or the deep seas, LAWS cannot by definition violate the principle of distinction or proportionality, and can therefore be used (Boulanin et al., Limits on Autonomy in Weapon Systems).

Principle of Precaution

This principle requires taking all feasible measures to ensure that civilians and civilian objects are not attacked, by choosing weapons and tactics that minimise incidental injury and collateral damage (Article 57, Additional Protocol I to the Geneva Conventions, 1977). For LAWS to comply with the principle of precaution, it is important first to determine the acceptable probability of error in their judgement. An attack must be cancelled or suspended if it becomes apparent that the objective is not a military one, is subject to special protection, or that the attack would violate the rule on proportionality (Autonomous Weapon Systems under International Law). The question is whether LAWS are able to make such judgements on a case-by-case basis. In my view, this is the principle where human oversight over LAWS becomes most important, as there is a cogent need for a ‘human override’ so that an attack can be brought into compliance with the precautionary principle. If a commander overseeing the system realises that the target is not a military object, or that the attack would be unlawful in any other way, he or she must be able to cancel it. A minimal sketch of such an override loop follows.
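The Python sketch below illustrates, under invented assumptions, what a ‘human override’ control path might look like: the system cannot proceed while an abort flag is set, and the commander can raise that flag at any moment. The event-based mechanism and all names are hypothetical; real safety-critical override channels are engineered to far stricter standards.

```python
import threading
import time

# Hypothetical override channel: the commander can raise this flag at any time.
abort_flag = threading.Event()

def commander_console():
    """Simulates a commander reviewing the target and cancelling the attack."""
    time.sleep(0.1)            # the commander reviews the target...
    abort_flag.set()           # ...and cancels: target is not a military objective

def engagement_sequence(steps: int = 5) -> str:
    """The weapon checks the override flag before every irreversible step."""
    for step in range(steps):
        if abort_flag.is_set():
            return f"ABORTED at step {step}: human override received"
        time.sleep(0.05)       # placeholder for one stage of the engagement
    return "ENGAGED (no override received)"

threading.Thread(target=commander_console).start()
print(engagement_sequence())   # expected: ABORTED at an early step
```

The essential design point is that the override check sits before every irreversible step, so human judgement can interrupt the sequence right up to the moment of attack.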

If, in the future, LAWS are programmed to minimise errors in exercising precaution, they may render valuable services in warfare while satisfying the principle of precaution. As Thurnher has pointed out, “whereas human judgement is often clouded by anger, revenge, fear, and fatigue, autonomous machines may offer the promise of removing those worries from the battlefields of the future” (JS Thurnher, Feasible Precautions in Attack and Autonomous Weapons). LAWS’ ability to comply with the principle of precaution must be tested regularly during the development phase so that the chances of violating IHL are minimised, and humans must be factored into their use to allow for an ‘override’ should the need arise.

The Problem of Accountability and Punishment

International humanitarian law requires that individuals be held legally responsible for war crimes and grave breaches of the Geneva Conventions. The contention is that machines are not subject to legal rules. Human Rights Watch writes that it would be unclear who should be held accountable for unlawful actions a robot commits: “Options include the military commander that deployed it, the programmer, the manufacturer, and the robot itself, but all are unsatisfactory. It would be difficult and arguably unfair to hold the first three actors liable and the actor that actually committed the crime, the robot, would not be punishable” (Human Rights Watch, Losing Humanity: The Case Against Killer Robots).

The philosopher Robert Sparrow has likewise argued that autonomous weapons, like child soldiers, are causally but not morally responsible for their actions (Sparrow, Killer Robots). The risk of not knowing whom to hold responsible leaves IHL crippled, and this accountability gap shows that LAWS presently sit in tension with IHL. However, a new treaty specifying whether the operator, the creator, a potential intervener, or the machine itself is to be held responsible could mitigate the problem. Meaningful human control would also naturally render the controller responsible, making the issue simpler to settle.

To Ban or to Regulate?

With a huge potential to violate IHL and a gap in the legislative protections available against LAWS, many have called for a complete ban or for regulatory limits on their use. The NGO Campaign to Stop Killer Robots has called for a “comprehensive, pre-emptive prohibition on fully autonomous weapons”: to ensure ‘both humanitarian protection and effective legal control’, humans must remain in control (Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control). On the other hand, Article 36, an organisation arguing that the principles of humanity ‘require deliberative moral reasoning, by human beings, over individual attacks’, has called for a new legal instrument explicitly prohibiting weapons that do not allow ‘meaningful human control’ (Brehm, Autonomous weapons: targeting people should be prohibited). It has also defined ‘meaningful human control’: humans, not computers and their algorithms, should ultimately remain in control of, and thus legally responsible for, relevant decisions about (lethal) military operations. This would not only alleviate the responsibility gap for LAWS but also mitigate specific violations of the principles of IHL arising from their use (Meaningful Human Control over Autonomous Systems: A Philosophical Account). However, this endeavour requires extensive moral and legal debate, as well as technological innovation, to align the accountability test for LAWS with expected and acceptable human behaviour.

Conclusion

There is great potential to render the use of LAWS more compliant with IHL than current mechanisms of warfare. To sum up a few advantages: precise and targeted attacks that lessen civilian casualties, traceable accountability, and reduced loss of soldiers’ lives. To allow LAWS to minimise casualties and vulnerabilities, a legally binding instrument should bind states parties to clear obligations (Docherty, supra). With meaningful human control, and the direct intervention of human judgement through oversight mechanisms, the accountability problem is mitigated, as is the probability of violating the principles of proportionality and precaution. LAWS may be capable of making warfare more humane, by allowing for algorithmic decisions unaffected by the flawed decision-making that comes with being human.


Mishal Murad

Mishal Murad completed her LLB at the London School of Economics and is currently working as a legal associate in Lahore. Interested in the intersection between law and policy, she launched Global Telegram, a UK-based online news start-up. You can reach her at [email protected] and view her profile at https://www.linkedin.com/in/mishalmurad/.