No ‘human-out-of-the-loop’ for autonomous weapons, says new European Parliament report

Artificial intelligence (AI) cannot replace human decision-making in military operations, and any “human-out-of-the-loop” arrangement for lethal autonomous weapon systems (LAWS) must be banned internationally, according to a new European Parliament report.

The report urges the EU to take a leading role in promoting a “global framework” on the military use of AI.

A new European Parliament report rejects “human-out-of-the-loop” for autonomous weapons. (Getty Images)

“LAWS should only be used as a last resort and be deemed lawful only if subject to human control, since it must be humans that decide between life and death,” the European Parliament’s Legal Affairs Committee said in a 10 December statement after the approval of its new report on the military and civil uses of AI.

The new own-initiative resolution (2020/2013(INI)) on the “interpretation and application of international law to AI in the areas of civil and military uses and of state authority” was authored by French Member of the European Parliament (MEP) Gilles Lebreton.

In his report, Lebreton stated that AI systems should be designed to enable humans to correct or disable them in case of unforeseen behaviour.

“All military uses of AI must be subject to human control so that a human has the opportunity to correct or halt them at any time, and to disable them in the event of unforeseen behaviour,” he wrote, adding that the decision-making process “must be traceable, so that the human decision-maker can be identified and held responsible where necessary. Humans should therefore be identifiable and ultimately held responsible.”
