“A robot may not injure a human being or, through inaction, allow a human being to come to harm.” – Isaac Asimov’s First Law of Robotics
From autocorrect to social media algorithms, artificial intelligence (AI) is everywhere. The technology is advancing rapidly, and some researchers believe it may eventually surpass human intelligence. In many ways it can be a force for good. But what happens when the military develops AI to kill autonomously? The Campaign to Stop Killer Robots takes a stand.
Armed artificial intelligence: As dangerous as nuclear weapons?
Elon Musk believes artificial intelligence is far more dangerous to the future of humanity than nuclear weapons.
That claim seems questionable given the devastating, long-lasting aftermath of the Hiroshima and Nagasaki bombings, but the game changes when artificial intelligence is weaponized.
Right now, the future of humanity rests precariously on world leaders’ restraint, on their unwillingness to “push the button” and start a nuclear war.
The decision to kill millions is in human hands, and there have been many near misses. In 1983, for example, Soviet officer Stanislav Petrov judged a satellite warning of incoming American missiles to be a false alarm and declined to report it, likely averting a retaliatory launch.
In each of these cases, human doubt, curiosity, or plain common sense prevented that decision from being made.
The difference between a human and a robot is that a human can change their mind.
Faced with the task of killing another human being, artificial intelligence doesn’t have that kind of humanity.
It’s not that these robots are evil – they just don’t know what a human is. They don’t appreciate human life and don’t understand what it means to destroy a soul.
They’re metal and wires, a binary on-off system that either works or doesn’t. When artificial intelligence is programmed to kill, there is no gray area, no room for reconsideration.
The Campaign to Stop Killer Robots
Out of this dystopian landscape grows the Campaign to Stop Killer Robots.
The campaign recently launched a new website, https://automatedresearch.org/, which provides reports and updates on the deployment of weaponized robotic technology.
Right now, the military is claiming these robots are “here to help people.”
Jody Williams, spokeswoman for the Stop Killer Robots campaign, gives a chilling answer: “And then they will help people kill.”
For years, the military has psychologically conditioned soldiers to kill without remorse. Just read Gwynne Dyer’s The Shortest History of War.
Given techniques that range from humanoid-shaped targets on the firing range to marching to the chant of “Kill. Kill. Kill.”, it would be naive to think the military would not consider using killer robots.
To be fair, programming a robot to kill is arguably more ethical than brainwashing a person into doing it.
What do you think?