Singer cited as an example the automated artillery system deployed in Afghanistan. "This system reacts and shoots. We can switch the system off, we can activate it, but our power is not really one of decision. It is now a power of veto," he says. To all this is added the concern that if automated systems are making the decisions, how can we be confident that they are attacking the right targets and obeying the laws of war? The American academic Patrick Lin was recently given the task of studying the ethics of robots, a study commissioned by the Office of Naval Research of the United States armed forces. "When we speak of autonomous robots," he argues, "a natural response might be to programme them to be ethical. Isn't that what we do with computers?"

A striking example of a robot that needs careful programming is the driverless vehicle developed by the Pentagon, called the EATR. It can refuel itself on long journeys by collecting organic matter, which raises the disturbing prospect of a machine consuming corpses on the battlefield. Its inventor, Dr Robert Finkelstein of Robotic Technology, insists that it will consume "organic material, but mostly plant matter. The robot can only do what it is programmed to do; it has a menu," he adds.

All this worries sceptics such as Professor Noel Sharkey, co-founder of the International Committee for Robot Arms Control, who says that the decision to kill must remain in human hands. "You can train it all you want, give it all the ethical rules in the world. If the input is not good, it is no good at all. Human beings can be held accountable; machines cannot." If a robot cannot be relied upon to distinguish between enemy combatants and innocent non-combatants, Lin suggests another solution. "If there is an area of combat so intense that anyone there can be assumed to be a combatant," he argues, "then release the robots in that type of scenario."
