2003: if you searched Google for "drone", you were shown pictures of aircraft that looked like science fiction and that, it was said, could strike a distant enemy without risking any pilot's life. 2014: if you search Google for "drone" today, you will soon come across citizens of Afghanistan or Yemen shouting at rallies to denounce the ravages these deadly aircraft have caused. Today the drone is a fundamental component of military strategy, quite apart from the many applications that have been found for it in civilian life.
The U.S. Deputy Secretary of Defense, Ashton Carter, announced a new step in November 2012 by signing Directive 3000.09: a policy for the design, development, acquisition, testing, fielding and use of autonomous and semi-autonomous weapon systems that employ lethal force. Given that politicians are masters at wrapping the most terrible realities in bland words, the formula means that robots capable of killing a human being without human intervention have been given the green light.
The Bulletin of the Atomic Scientists is not given to sensationalism. In this bulletin, which gathers key information and opinion on nuclear risks as well as on the other threats to world peace, and which is not opposed to civil nuclear energy, Mark Gubrud has published "US killer robot policy: Full speed ahead".
Gubrud is a researcher at Princeton University and a member of the International Committee for Robot Arms Control.
"Autonomous and semi-autonomous weapons that employ lethal force": the key to this distinction lies in the small step that separates a deadly weapon acting under human command from one that has become completely autonomous.
According to Gubrud, until now US military leaders had always denied that autonomous weapons, robots, were being prepared. There were many opponents within the army itself; and needless to say among the general population, because even well-known conventional weapons recall the massacres that follow from the loss of human control, such as the deadly mines left behind for years on battlefields. In that context, the Government had already cancelled in 2009 a $300 billion programme to develop new robots and drones.
But within the military, campaigning in favour of killing machines never stopped. In military journals it was argued that what most limited the effectiveness of the new instruments of war was the human being himself, that war was becoming too fast and too complex for humans to manage from within. Not forgetting the budget argument: robots come out cheaper than humans. The new directive gives these advocates what they wanted and opens the way to recognizing the autonomy of war machines.
Until now, the military had acknowledged that increasingly autonomous machines could not guarantee compliance with the international humanitarian laws of war, which require distinguishing combatants from the civilian population in attacks and not causing excessive harm to civilians.
Others claim the opposite: that well-programmed robots will behave more ethically in battle than human beings, who are conditioned by their emotions. According to Gubrud, there is great controversy among experts, although most recognize that machines will struggle to distinguish civilians from combatants, and that for the moment they cannot properly judge the proportionality of the force needed to neutralize supposed armed enemies.
To preserve some ethical minimum, some have proposed that robots be allowed to attack enemy machines and materiel, including the enemy's robots, but not humans. Or that a machine should have to ask a human for authorization before exceeding a certain level of violence in its attack. The innocent civilian deaths caused by drones in Iraq, Afghanistan, Pakistan and elsewhere over the past ten years show that it is a naive fantasy to think that an army will programme its robots that way.
The US military's Directive 3000.09 includes the following sentence: "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." Although the convoluted wording seems to want to clarify something, it does not say that a human must stand behind every decision to kill a human; the machine can make that decision. A commander can send a robot on a mission and leave it to decide whom to attack in order to accomplish it.
The distinction between autonomous and semi-autonomous weapons is very blurred. Drones are semi-autonomous: a human has chosen the prey, programmed the machine and sent it off to destroy it. The point is that, as technology advances, the machine gains an ever greater capacity to decide what to do in different situations. If at first the pilot or remote commander had to give the order to shoot the enemy the machine had found, with today's advances, why should the machine lose time and opportunity waiting for a human to give the O.K.?
On the other hand, as happens today with drones and remote-controlled missiles, when a human being has chosen the enemy to destroy and has given the order to a supposedly semi-autonomous machine, once the operation is under way, is that intelligent munition not de facto autonomous?
In the new era that has now opened, says Gubrud, machines equipped with the technology to kill their enemies autonomously have a clear road ahead of them. Robots, in other words.
Mark Gubrud is a member of the International Committee for Robot Arms Control (ICRAC), which, together with other organizations and public figures, has launched the Campaign to Stop Killer Robots. They argue: "Giving machines the power to decide who lives and who dies on the battlefield is an unacceptable use of technology. Human control of all combat robots is essential to protecting humanitarian law and ensuring legality. A comprehensive, pre-emptive ban on fully autonomous weapons is urgently needed."