
Should we be worried about 'killer robots'?

Autonomous weapons might one day make war more efficient, but they present deep-seated practical and ethical challenges.

    The Taranis stealth aircraft, which will be able to autonomously complete missions, is currently being tested by BAE Systems [BAE Systems]

    Campaigners are renewing calls for a pre-emptive ban on so-called "killer robots" as representatives of more than 80 countries meet to discuss autonomous weapons systems. 

    The use of lethal autonomous weapons systems (LAWS) is "a step too far", said Mary Wareham, the global coordinator of the Campaign to Stop Killer Robots.

    "They cross a moral line, because we would see machines taking human lives on the battlefield or in law enforcement.

    "We want weapon systems and the use of force to remain under human control," Wareham said. 

    Wareham spoke to Al Jazeera before Monday's meeting in Geneva, Switzerland, on a possible ban on LAWS. 

    This is the fifth international meeting to discuss so-called "killer robots" since 2014, but no formal decisions will be taken yet: countries are still working towards a common definition of LAWS and have yet to agree on whether the weapons should be outlawed under international law. 

    "We're at a crossroads now. This is going to be a crucial year. If we do not move swiftly, we could end up in a situation where it's too late and where fully autonomous weapons proliferate to the extent that every country has them," Wareham told Al Jazeera. 

    Beyond the Terminator

    While the term "killer robot" might bring to mind scenes from the science fiction franchise Terminator, the "walking, talking humanoid type" autonomous weapons are "not what we are principally concerned with", Wareham said.

    Fully autonomous weapons systems are those that select and engage their targets without meaningful human control. Today, the "most serious ones" the Campaign to Stop Killer Robots is concerned with are not yet in existence, said Wareham.

    But weapons with at least some degree of autonomy are in use already, and according to Human Rights Watch (HRW), more than a dozen countries, including the US, China, Israel, South Korea, Russia and the UK, may be developing them. 

    In a recent report, HRW pointed to Israel's Iron Dome, which can independently detect and shoot down incoming missiles, as one of the existing "precursor systems" on the road to autonomy. 

    South Korea has recently deployed the Samsung SGR-A1 robot sentry gun in the demilitarised zone (DMZ) between the two Koreas; it uses cameras and sensors to detect intruders in the DMZ and can then open fire, reportedly only once given the go-ahead by a soldier at the command centre.

    And UK defence contractor BAE Systems is testing a stealth aircraft, the Taranis, which can autonomously complete missions. The plane is still just a prototype, however, and BAE has said that should the Taranis take flight in real operations, "they will at all times be under the control of highly trained military crews on the ground".

    Not all bad

    "Autonomous weapons are not wholly bad," said Jacob Turner, a lawyer and author of the forthcoming book, Robot Rules.

    "They offer potential advantages in terms of being able to distinguish between civilians and combatants more effectively than a human operator could," Turner told Al Jazeera.

    "They also don't get tired, or frustrated or angry or shell-shocked in the same way that humans do."

    While AI technology needs to take some leaps before it gets there, some say it is not inconceivable that robots might eventually get better at selecting their targets than humans are, meaning war could be waged more efficiently, with fewer innocents killed. That could also spell the end to some of the atrocities associated with war, such as sexual violence - unless they are programmed to perpetrate such acts. 


    But to Wareham, this promise is not enough. 

    "The state of technology today and what we're looking at in the near term, these kinds of weapons systems are going to be stupid and they're going to be indiscriminate.

    "These weapon systems might be able to abide by international law in the future, but we don't see that now and we're not satisfied with waiting to see if that's the case. We want to see action taken now," she said.

    Third warfare revolution

    Toby Walsh, a professor of AI at the University of New South Wales, Australia, conceded that there are some good uses of AI on the battlefield: "Clearing minefields is a perfect job for a robot," he told Al Jazeera.

    Last Wednesday, however, Walsh put killer robots in the headlines when he led a boycott by more than 50 AI and robotics professors against a South Korean university over its collaboration with Korean arms company Hanwha Systems.

    The Korea Advanced Institute of Science and Technology (KAIST), which is known for its work in robotics, swiftly rejected allegations that it was "joining the global competition to develop autonomous arms" and issued a statement denying there would be any "research activities counter to human dignity, including autonomous weapons lacking meaningful human control".

    The boycott was called off on Monday. 

    Walsh painted a dire picture of what autonomous warfare could look like after what some have called the "third revolution in warfare".

    "Previously if you wanted to do some harm you would have to have an army of people, you have to train them and equip them and persuade them to do whatever your intent was," he told Al Jazeera.  

    "If you take humans out in any meaningful way then you can scale [weapons] like our computer systems. You can keep on buying more CPUs, buying more robots. You don't need any more humans, that's not holding you back," he added. 

    "You could fight war on a much greater, industrial scale."


    Categorically different

    The concerns about LAWS reach beyond the practical objections of their scalability, 'stupidity' and hackability.

    Ryan Jenkins, assistant professor in philosophy at California Polytechnic State University in the US, said he believes that LAWS are "particularly worrisome" because they seem "categorically different" from previous weapons systems.

    "It's not just a weapon that's a little bit better at killing adversaries. It's not just a missile with a longer range. [Campaigners] see it as a categorical break in the way that war is being waged and the question that they raise is a deeply unsettling one, whether it's permissible to delegate the task of killing humans," he said.  


    Even if technology does advance to the point that LAWS are able to wage war more efficiently than humans, Jenkins believes the principle of outsourcing human killing to robots might be so chilling that societies could decide it is preferable to let more innocent people die.

    He adds that waging war with robots could have profound effects on the broader political landscape and create "simmering resentments" that might adversely affect the prospects for international peace.

    "Look at drones as an analogy. Is it wrong for a nation to prosecute all of its war by remote aircraft? ... We've seen the kinds of tensions that it gives rise to."

    Ban or regulate?

    So far, 22 countries, including Brazil, Pakistan and Egypt, have called for a ban on LAWS. The US and Russia, Wareham believes, are likely to prove the "biggest challenges" in outlawing the technology.

    Jacob Turner believes that regulation, rather than a ban, is the way to go in order to prevent asymmetric proliferation.

    "We're only able to call for blanket bans in countries which have open legal systems and which have open scrutiny of what their military is doing," he said.

    But Wareham disagreed. She pointed to previous weapons bans, such as the 1997 ban on landmines and the Convention on Cluster Munitions, as success stories. As of January 2018, 164 countries were signatory to the former, while 120 states are currently party to the latter.

    "Of course, yes, there will be cheating," she said, adding: "my experience is that once [countries] join up to a treaty they take their obligations extremely seriously." 

    Whether the ban will ever come about is unclear. Representatives will meet again in August to further discuss details about autonomous weapons and produce a report that should include recommendations on the path forward. 

    "We do not want inconclusive talks that lead nowhere," Wareham said. 

    "We don’t have the funds or the time to wait and the technology is bounding ahead so we’re looking for more countries to come on board."

    SOURCE: Al Jazeera News
