The coming proliferation of automated weapons technologies and machine learning programs, paired with the removal of the warfighter from warfighting, presents a daunting challenge to those concerned with jus in bello and jus ad bellum morality. Just last month, IRGC Brigadier General Mohsen Fakhrizadeh was assassinated in Tehran by what Iranian officials claim was a satellite-controlled automatic firearm equipped with facial recognition software. Though this claim from the Iranian National Security Council has drawn scrutiny from numerous security experts, it has also spurred debate on the inevitable repercussions of introducing fully AI-operated weapons systems. Whether there is any truth to the Iranian account remains to be seen; it is near certain, however, that lethal autonomous weapons (LAWs) will begin to appear in conflict zones across the globe as China, Russia, Israel, the U.K., and the U.S. funnel billions toward these capabilities. Both domestic governments and supranational organizations must respond to these ever-expanding initiatives and establish a fundamentally new way of interacting with AI to bolster Just War Theory. As horrible as human loss is in conventional warfare, it is even more unsettling to imagine lethal decision-making being delegated to a software program.
That deeply disturbing image of an efficient, remorseless “kill-bot” is central to the contemporary discussion, no matter how sci-fi it might seem. The technology that will allow such development is already quite familiar to most of us here in the U.S., appearing in everything from the newest self-parking cars to the most popular Snapchat filters. We are quite comfortable with the presence of learning automation in our daily lives, even in higher-risk activities such as the operation of public transit systems. So, what changes when a similar AI is controlling a firearm? It is the same phenomenon that causes people discomfort with self-driving cars: technology that necessarily includes programming allowing the AI to calculate a moral maximum when presented with multi-layered situations involving human injury or death. This essentially means that if presented with a scenario such as the hackneyed trolley problem, the vehicle would arrive at a perceived morally optimal conclusion, just as a human would. The key difference between the two decision-makers is that the human can be held responsible for their action whereas the vehicle cannot, as it lacks both intention and autonomy; for the machine, every consideration reduces to a simple binary. Here it is necessary to clarify a semantic point: AI, as commonly discussed in the context of LAWs, is not a sentient consciousness born of technological innovation, but rather a system that is able to learn and adapt using a base set of instructions provided by a programmer. Therefore, while a LAW's AI may arrive at a conclusion through a process of intake, analysis, and machine learning, its ability to do so is predicated on a predetermined set of instructions.
Using this limited definition of AI, many instances of autonomously capable weapons systems are identifiable today. The common thread, however, is that these technologies are primarily defensive in function, ranging from the U.S. Phalanx CIWS to missile “dome” networks. These are, of course, programmed to autonomously identify a set input (action) and perform a certain, directed task (reaction), typically aimed at neutralizing a manifesting threat. In the process of this response, the defensive LAW is not engaging in any kind of selection function where it “decides” to kill or not kill. Instead, the entirety of its action is predetermined and limited to its instructions based on input values. Offensive LAWs could be something else entirely. AI systems that are enabled to seek out targets and, more concerningly, to calculate the appropriateness of a kill based on sensory details remove responsibility from human actors and leave the adversary with no avenue of reciprocal engagement with a moral peer. Such a system is given control over identification of the target, method of engagement, and, most crucially, appropriateness of lethality. This combination of capabilities turns a LAW into a lethal autonomous robot (LAR).
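To make the action-reaction versus selection-function distinction concrete, the toy sketch below (purely illustrative Python; every name, feature, and threshold is hypothetical and reflects no real system's logic) contrasts a defensive LAW whose responses are fixed in advance by its programmers with a LAR-style routine that itself computes whether lethal engagement is “appropriate” from sensory inputs.

```python
# Purely illustrative sketch: contrasts a predetermined action-reaction mapping
# with a routine that computes an engage/hold decision itself. All names and
# weights are hypothetical and mirror no real weapons system.

# Defensive LAW: every response is written in advance by the programmer.
DEFENSIVE_RULES = {
    "incoming_missile": "intercept",
    "incoming_artillery": "intercept",
    "no_threat": "stand_by",
}

def defensive_response(detected_input: str) -> str:
    """Reaction is a fixed lookup; the system never 'decides' to kill."""
    return DEFENSIVE_RULES.get(detected_input, "stand_by")

# LAR-style selection function: the software itself scores target identity
# and the 'appropriateness' of lethality from sensory features at runtime.
def lar_selection(sensor_features: dict) -> str:
    """Hypothetical weighted scoring; the lethal decision is made in software."""
    score = (
        0.6 * sensor_features.get("target_match", 0.0)
        + 0.3 * sensor_features.get("hostile_behavior", 0.0)
        - 0.5 * sensor_features.get("civilian_presence", 0.0)
    )
    return "engage" if score > 0.5 else "hold"

print(defensive_response("incoming_missile"))   # predetermined: "intercept"
print(lar_selection({"target_match": 0.9,
                     "hostile_behavior": 0.7,
                     "civilian_presence": 0.2}))  # computed at runtime: "engage" or "hold"
```

The point of the contrast is not the arithmetic but where the decision lives: in the first function every outcome was authored by a human beforehand, while in the second the engage-or-hold conclusion is generated by the software itself.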
Heather Roff writes of responsibility in her chapter “Killing in War: Responsibility, Liability, and Lethal Autonomous Robots,” sorting through the potential moral interaction of programmers, military leadership, and civilian governments with the actions of LARs. She argues in her conclusion that “the creation and use of LARs leaves us with the perverse outcome that no one can be held legally responsible for their actions…” (359). For an abundance of reasons, primarily predicated on lack of ownership over, and inability to influence, the operation of AI weapons systems while in action, all three categories of individuals are ruled out as morally responsible for a LAR's activities (359). Programmers, though providing the framework for machine learning to occur, are not engaged in an ongoing or consenting relationship with the LAR's AI system, creating a “responsibility gap” that absolves the programmer from immediate moral liability (356). In a similar manner, the military chain of command is absolved from direct responsibility for the actions of a LAR, a premise Roff develops by citing the lack of a superior-subordinate relationship or effective control between commanders and AI technologies, leaving the former free to escape moral liability (357). The only group that approaches the threshold of responsibility for the actions of LARs is civilian leadership (i.e., policy makers), who have the agency to both select and implement policies that include AI in warfighting capabilities. However, holding individual members of an elected body or an appointed staff responsible for the activities of a LAR operating in a distant combat theater is rightly described by Roff as tenuous at best, as there exists no meaningful norm for establishing this manner of responsibility within an extranational legal framework (359). In short, the use of LARs leaves Just War theorists with no clear path to discern who bears the consequent legal or moral responsibility.
At first glance, this condition may appear to render Just War Theory of little use in its interaction with AI's future, relegating its applicability to conflicts between clearly identifiable and liable parties. What is to be done with a weapons technology that acts as an active combatant yet, following the logic laid out above, transfers no moral responsibility to its creators or maintainers? The canon of Just War Theory has historically expanded and matured to meet challenging developments such as terrorism, chemical weapons usage, and asymmetrical warfare; the introduction of AI to the battlespace will be no exception to this pattern. It is necessary, then, that adjustments to interaction with LAR-using states occur primarily within a jus ad bellum context. The crux of the issue is that, as previously established, actions taken by a LAR do not morally implicate actors of the state from which it was deployed. Therefore, a state that endures acts of violence by way of AI technologies in LAR systems may not be justified in targeting the aggressor state, as that entity cannot be held morally or legally responsible for decisions made by its LARs. Effectively, under this admittedly narrow definition of responsibility, a state would be able to function as a clear and obvious aggressor while reserving a somewhat esoteric distance from legal liability for the decision-making of its AI technology. With regard to proportionality, the receiving state would find it difficult to provide adequate legal rationale for a just response within the current jus in bello paradigm (save for employing AI technologies to counter, which further complicates the moral calculus). To use a handy analogy from Clausewitz, the aggressor not only forces the hand of their adversary, but does so outside of the game, pressuring their adversary to break the rules (Walzer, 23). Consequently, if states are able to practice aggressive warfighting via LARs without the conventionally associated accountability, it is necessary to address the proliferation of these technologies in the context of jus ad bellum.
Certainly, this definition of responsibility could prove limited in practical application, as it seems quite clear that a government releasing “kill-bots” is wholly responsible for the havoc they wreak. But this logic exercise does inform the measures that must be taken to bolster any general justification for preemptive action against AI-developing adversaries. AI technology, when paired with warfighting, is the slippery slope of morality on which human life becomes very difficult to value and even harder to defend, both legally and in practice. However, Just War Theory provides a framework for determined measures to counter the availability and lethality of AI weapons systems. It is necessary to view LARs and advanced AI weapons as tools of mass destruction, ones that will allow aggressors to take life without accountability. Allen Buchanan and Robert O. Keohane provide a cogent defense of preventive war, a necessary tool in combating the proliferation of these technologies. By engaging with globally minded partners, a coalition of international actors could pressure rogue states and economic powers alike to halt the development of these technologies by threat of justified preventive intervention (21–22).
Successfully maneuvering through this moral obstacle is not only critical to the national interest but imperative to reprioritizing human rights in the 21st century. The advancement of weapon automation could very easily destabilize relative global peace and exacerbate existing low-level conflicts. As LARs become a legitimate warfighting capability of technologically advanced actors, it is essential that nations of conscience demand accountability from others and pursue decisive action as a collective. Refusal to condemn the moral grey areas created by AI innovation will only grant aggressors greater latitude for predation, necessitating a renewed commitment to the most basic tenets of Just War Theory.
Bibliography
Buchanan, Allen, and Robert O. Keohane. “The Preventive Use of Force: A Cosmopolitan Institutional Proposal.” Ethics & International Affairs, vol. 18, no. 1, 2004, pp. 1–22.
Roff, Heather M. “Killing in War: Responsibility, Liability, and Lethal Autonomous Robots.” Routledge Handbook of Ethics and War, edited by Fritz Allhoff, Nicholas G. Evans, and Adam Henschke, Routledge, 2013, pp. 352–364.
Walzer, Michael. “The Crime of War.” Just and Unjust Wars, Allen Lane, 1978, p. 23.
