Published Date : Jun 09, 2016
One of the major concerns scientists have voiced since the inception of robotics is that robots might one day overpower human beings. Given the growing use of robots across a number of industries, researchers now take this concern seriously enough that they are developing an artificial intelligence kill switch.
Preventing Intelligent Machines from Overriding Humans
Stuart Armstrong, from the University of Oxford’s Future of Humanity Institute, and Laurent Orseau from Google DeepMind recently published a paper on how to prevent intelligent machines of the future from learning to completely take over from humans and disregard their input. This would ensure that human beings remain in control of machines. Speaking to the BBC, Dr. Orseau admitted that although concern is justified, the current state of the technology does not mean the world should be afraid of robots taking over. It is, however, important to begin working on artificial intelligence safety before a real and tangible problem arises. The point of artificial intelligence safety is to make sure that learning algorithms continue working the way they are meant to.
Kill Switch to Allow Human Intervention in Cases of Emergency
The research project by Armstrong and Orseau focuses on developing reinforcement learning methods that allow artificial intelligence machines to be interrupted by the people who manage them, while preventing the machines from learning how to avert or overcome those interventions. The paper, titled “Safely Interruptible Agents,” suggests that every now and then it may become imperative for human operators to resort to the big red button in order to stop the agent from going ahead with a sequence of actions that might be harmful in the near or distant future. It is important to make sure that the agents do not disable the red button or disregard human attempts to interrupt or stop their functioning.
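The core problem can be illustrated with a toy example. The sketch below is not the algorithm from the paper, only an illustrative assumption: a simple two-action bandit learner whose "risky" action is interrupted by an operator half the time. A naive learner folds those interrupted, zero-reward episodes into its value estimates, so repeated interruptions end up distorting what it learns to prefer; a learner that simply discards interrupted transitions estimates the same values it would have learned had the red button never been pressed. All action names, rewards, and probabilities here are invented for illustration.

```python
import random

random.seed(0)

# Illustrative assumption: "risky" pays 1.0 but the operator interrupts it
# 50% of the time (yielding 0 reward); "safe" always pays 0.6.
INTERRUPT_PROB = 0.5
REWARD = {"risky": 1.0, "safe": 0.6}

def run(ignore_interrupted, episodes=20000):
    """Incremental sample-average value estimation for a two-armed bandit."""
    q = {"risky": 0.0, "safe": 0.0}  # estimated value per action
    n = {"risky": 0, "safe": 0}      # sample counts per action
    for _ in range(episodes):
        a = "risky" if random.random() < 0.5 else "safe"  # uniform exploration
        interrupted = (a == "risky" and random.random() < INTERRUPT_PROB)
        r = 0.0 if interrupted else REWARD[a]
        if interrupted and ignore_interrupted:
            # "Safely interruptible" variant: drop interrupted transitions
            # so interruptions cannot bias what the agent learns.
            continue
        n[a] += 1
        q[a] += (r - q[a]) / n[a]    # incremental sample-average update
    return q

naive = run(ignore_interrupted=False)
safe = run(ignore_interrupted=True)

# The naive learner's estimate for "risky" is dragged toward ~0.5 by the
# interruptions, so it comes to prefer "safe"; the interruption-ignoring
# learner keeps its estimate at the uninterrupted value of 1.0.
print(naive)
print(safe)
```

The design point this toy makes is the one the paper targets: if interruptions leak into the learning signal, the agent acquires an incentive to behave differently around them (here, avoiding the interruptible action; in a worse case, learning to prevent the interruption itself).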