ASK AN EXPERT

Rajan Kumar Mishra asked: What are the legal and ethical issues related to the use of Lethal Autonomous Weapon Systems?

Atul Pant replies: Though lethal autonomous weapon systems (LAWS) are envisaged to be immensely advantageous in future conflicts, there is intense debate over their many legal and ethical aspects. The first problem is the lack of legal definitions for many of the terms associated with LAWS. Terms such as 'human control' and 'unmanned military systems', among others, are yet to be commonly and acceptably defined, which prevents a universal understanding of, and agreement on, the issues. The second is the absence of a specific convention or protocol providing an internationally acceptable regulatory framework governing LAWS, since the field is still in its early stages of evolution.

The main ethico-legal objection to LAWS is the removal of human control over lethal systems and the absence of a human in the autonomous decision-making carried out by the system or weapon computers. One view holds that, with human compassion and prudence likely to be missing, such weapon systems may result in unnecessary and cold-blooded killing and destruction, or in a disproportionate use of force. LAWS, unlike automated systems, must be able to respond to situations that were not pre-programmed or anticipated prior to their deployment. This is where things could again go wrong: the artificial intelligence (AI) employed in LAWS or their controlling systems may take wrong decisions in some situations, leading to undue lethality or destruction. The laws of armed conflict and other international conventions could easily be violated.

Another legal issue is that of accountability: in the case of undue killing or destruction, or a disproportionate use of force, who is to be held accountable and blameworthy? Such catastrophes may result from software flaws, hardware malfunctions or wrong decisions by the AI, besides human error in the employment of LAWS, especially under situational overload. Affixing blame in such cases will be very difficult. At times a design glitch or flaw, which is not uncommon, may also lead to incorrect employment. Whether the system designer, the software designer, the human operator or the maintenance team is to be held responsible will be very difficult to determine, especially in the fog of war or a dynamic conflict, where it will be even harder to ascertain facts.

There are also concerns about AI's ability to distinguish between civilians and combatants before lethally engaging them. LAWS could also prove destabilising: since they could be cheaply mass produced, their autonomous responses against enemy targets may cascade into an escalation of conflict. AI could also become a dangerous means of terror in the future.

For more on the subject, please refer to my IDSA publication:

Atul Pant, "Future Warfare and Artificial Intelligence: The Visible Path", IDSA Occasional Paper, August 2018.

Posted on May 31, 2019
