Rise of the machines? Why a drone attacked a human on its own initiative

The use of drones has gone from being the preserve of militarily advanced countries such as the United States to just another tool in less sophisticated armies. We need not look far: countries like Spain operate several more or less advanced models, while others, such as Morocco, are in the process of acquiring new drones.

But this technology is being pushed much further. In today's military industry we can find fully autonomous drones that fly alongside fighters, intelligence platforms capable of collecting information, and others that are programmed outright to kill people without human intervention.

The latter wades into swampy moral, ethical and legal terrain, with many questions and few answers. For example, who is responsible if a drone autonomously kills a person? For the moment, at least, no one can be directly linked to that death.

US autonomous drone (USAF / Omicrono)

This scenario, which might seem straight out of a science fiction film, is one of the issues addressed by the United Nations Security Council in a report issued on March 8 on Turkey's use of autonomous drones against Haftar's troops in Libya. “Lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true fire, forget and find capability,” the UN notes.

The incident documented in the report marks the first time a drone has autonomously attacked humans, albeit without confirmed casualties. Or at least the first time it is officially known. No one on the other end was operating the drone, much less had a finger on the fire button.

Who shoots?

“The question here is: are we dealing with systems where intelligence is augmented and the human plays the primary role in the decision [to fire], or have we already moved to a scenario in which the decision is left to the system?” said Josep Courto, a Big Data specialist, when he spoke to us about weapons that fire on their own. “And if so, what ethical and moral considerations have been taken into account?”

Kargu drone responsible for the autonomous attacks in Libya (STM)

The latter is the key to this whole affair, and something that the companies manufacturing these systems are working on together with the various public bodies and agencies in charge of weapons procurement.

One of the countries most advanced in this area, and the one that sets the global compass, is the United States. So much so that it was among the first to address this technology, back in 2012. In November of that year, the Department of Defense published a directive setting out the country's policy on the use of autonomous weapons. “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force,” the directive states.

“Persons who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules and applicable rules of engagement,” the Department of Defense adds.

Kratos XQ-58A Valkyrie, an autonomous drone, launching a smaller drone

Along the same lines, in 2016 the then Pentagon deputy secretary of defense, Robert Work, declared: “We will not delegate lethal authority to a machine to make a decision.” With an important ‘but’: “[except] when things go faster than the reaction time of a human, like cyber warfare or electronic warfare.”

In 2017, the Defense Innovation Board of the United States Department of Defense published a document setting out principles and recommendations for the use of Artificial Intelligence in its field. The first principle is responsibility: humans should exercise “appropriate levels of judgment and remain accountable for the development, implementation, use, and results of AI systems.”

This same policy seems to be the one followed by a significant number of countries which, for the moment, have not seen fit to cross the fine line of the First Law of Robotics that Isaac Asimov applied in his novels. In its various presentations of AI-equipped autonomous drones, the United States has stressed that it remains committed to this policy of human control, execution and supervision of attacks.

However, the North American country also warns that some armies may not respect this unwritten law and may choose to program their drones for attack missions without human intervention. In the case of Turkey against Haftar, it is clear that the Turkish air forces carried out the attack, but it is impossible to identify a person directly responsible.

All the controversy this has created has in turn generated a social movement, which materialized in 2013 with the Campaign to Stop Killer Robots, a coalition of non-governmental organizations opposed to lethal autonomous weapons, or LAWs.

Against LAWs

What happened in Libya opens the door, and the debate, to the use of autonomous drones or any LAWs (Lethal Autonomous Weapons) without a clear legal framework. That concern was already reflected in July 2015, when 1,000 Artificial Intelligence experts signed a letter warning of the threat posed by AI-enabled weapons and calling for a ban on autonomous weapons.

Among the signatories were leading figures of the technology and scientific world, such as Elon Musk, Stephen Hawking and Steve Wozniak, along with executives from companies such as Skype and Google. The letter laid out the various problems of democratic control over warfare that the use of these autonomous systems would entail, for example when it comes to prosecuting the war crimes defined in International Humanitarian Law and the Geneva Convention.

The Turkish suicide drone

The aircraft referred to by the UN Security Council is the STM Kargu-2, a Turkish-made quadcopter belonging to the category of so-called loitering drones. “Kargu is a rotary-wing attack drone that has been designed for asymmetric warfare or counter-terrorism operations,” as STM itself defines it on its website. These drones are part of the Turkish air force.

Its technology is capable of attacking both static and moving targets, which it achieves through an advanced real-time image processing system and built-in machine learning algorithms. Thanks to these it can operate by day and by night, and it features target tracking systems, advanced navigation, a 10x optical zoom and different munition options.
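
To get a feel for what a real-time “detect and track” pipeline of this general kind looks like, here is a minimal, purely illustrative sketch in Python with OpenCV. It is not STM's software: the detector model, the video source and the lock-on rule are all hypothetical stand-ins for whatever the Kargu actually uses.

```python
# Purely illustrative detect-and-track loop; NOT the Kargu's actual software.
import cv2

# Hypothetical detector: OpenCV ships this Haar cascade for full-body detection.
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")
tracker = None  # created once a candidate target has been found

cap = cv2.VideoCapture("camera_feed.mp4")  # stand-in for the drone's gimbal camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    if tracker is None:
        # Detection phase: scan the whole frame for candidate targets.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(boxes) > 0:
            box = tuple(int(v) for v in boxes[0])  # lock on to the first candidate
            tracker = cv2.TrackerMIL_create()
            tracker.init(frame, box)
    else:
        # Tracking phase: follow the locked target from frame to frame.
        ok, (x, y, w, h) = tracker.update(frame)
        if ok:
            # The offset of the target from the image centre is what a guidance
            # loop would steer on; here we merely compute it.
            dx = (x + w / 2) - frame.shape[1] / 2
            dy = (y + h / 2) - frame.shape[0] / 2
        else:
            tracker = None  # target lost: fall back to detection

cap.release()
```

The point of the sketch is the structure: a slower whole-frame detection stage hands off to a faster frame-to-frame tracker, which is what makes real-time pursuit of a moving target feasible on modest onboard hardware.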

Perhaps that last point is what makes it a particularly dangerous drone, capable of carrying an explosive charge in its small but lethal payload bay. As for specifications, it has a range of 5 kilometers, an endurance of up to 30 minutes, a maximum altitude of 2,800 meters above sea level and a top speed of 72 kilometers per hour.

Weighing just over 7 kilograms and able to operate in temperatures from 20 degrees below zero to 50 above, its mission is to identify its target and attack it. It does so by detonating at very close range to whatever it is set against, be it people, vehicles or infrastructure.
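
A quick back-of-the-envelope check of those published figures (assuming, simplistically, straight-line flight at top speed) shows why this class of weapon is called a loitering munition: the endurance far exceeds what is needed to cover the stated range.

```python
# Rough sanity check of the published Kargu figures; illustrative arithmetic only.
range_km = 5        # stated operating range
speed_kmh = 72      # stated top speed
endurance_min = 30  # stated maximum flight time

transit_min = range_km / speed_kmh * 60       # one-way trip at top speed: ~4.2 min
loiter_min = endurance_min - 2 * transit_min  # out-and-back leaves ~21.7 min to loiter

print(f"one-way transit: {transit_min:.1f} min, loiter margin: {loiter_min:.1f} min")
```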

According to the United Nations report, “the unmanned combat aerial vehicles and the small drone intelligence, surveillance and reconnaissance capability available to the forces affiliated with Haftar were neutralized by electronic jamming from the Koral electronic warfare system.”
