I am a firm believer in technology in the service of sapiens: technology that improves the quality of our existence, technology that, applied in the right way, solves problems and often saves lives. Yet the same technology, depending on its uses and applications, is capable of unspeakable actions that can even take lives.
Every day the press carries articles about armed drones, ‘crazed’ drones that take flight and hunt people, drones used by various armies on missions as dangerous as they are deadly. Leaving aside the truthfulness of the sources, my reflection is not on the merits of such uses, nor on the necessity of creating such devices, but on how we should weigh the thorny problem of robotics and artificial intelligence combined and applied to the arms sector.
It seems that taking a human life has become as detached a matter as it is in a video game. Slaughterbots is one of the names given to these machines that combine drones and AI (watch the Slaughterbots video). A scenario as dystopian as it is disturbing.
What can we do to prevent the fear of losing control from becoming an irrational phobia? At the heart of the inquiry is the burning question of whether it is possible to delegate to a machine the choice of whether or not to kill a human being. As portrayed in the video above, situations as paradoxical as they are distressing are now a reality.
Staunch defenders of technology would say that the machine does not make mistakes, and that when it does, it makes fewer than a human being governed by his emotions. This is often true, yet machines, and therefore robots, can be wrong, and when human lives are at stake the problem is a delicate one.
It must be said that no disruptive technology is in itself evil or benevolent; it is man who is the author of its virtuous or misguided use.
Arguing about weapons is difficult, since there is a multitude of points of view, derived from as many cultural conceptions of varying validity, leading to contradictory positions. On one side stand the anti-gun pacifists, who want states without armies.
Today there are about twenty-one states without military forces, mostly very small countries such as Liechtenstein and Palau, which nevertheless have defence agreements with Switzerland and the United States of America respectively. On the other side stand the die-hard advocates of armies, ready to create and deploy ever more effective devices.
To live in a world without armies, we will have to wait “a few more” years of evolution, until sapiens are finally able to solve their problems without resorting to weapons.
Back to the robotics issue. The market for autonomous weapons is in full development: Italy has started a plan to arm the Reaper drones used by its Air Force, and Turkey is constantly churning out new models of military drone, such as the Bayraktar TB2, not to mention major powers like the USA, Russia and Japan. As often happens at the dawn of a new technology, legislation lags behind.
The general drone market contracted by 37% in 2020, but not the military one. Business is business, and private interests typically outweigh the collective good. New and effective international rules are needed to counter the killer-drone market.
The complex issue involves cultural, ethical, political, religious, legal and philosophical aspects – in short, a matter for real Homo sapiens. Several promising associations are working in this direction, such as the Future of Life Institute, in which many famous names from science, finance and entertainment are active.
A concrete premise for avoiding a dystopian future.
I will close by borrowing a phrase from Elon Musk that sums up the message well: