Google Is Reportedly Working on a ‘Big Red Button’ to Stop Rogue AI
Machines are getting so intelligent these days that they can solve a lot of problems without us having to lift a finger. Artificial intelligence has become so advanced that it can already get the better of us, for instance by defeating us at complicated board games like Go. But what if it goes off the rails? What if the robots stop listening to us? Figures like renowned astrophysicist Stephen Hawking and Tesla mastermind Elon Musk have warned that advanced AI could slip beyond human control. Google, one of the companies championing AI, is now keen to prevent this sort of thing from happening.
DeepMind, the company Google bought for about $580 million in 2014, has collaborated with researchers at Oxford University to create a framework that stops an AI agent from learning to prevent humans from taking control. In other words, a kill switch or button to keep the software in check. The team has published a paper titled “Safely Interruptible Agents” on the website of the Machine Intelligence Research Institute (MIRI).
“If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions — harmful either for the agent or for the environment — and lead the agent into a safer situation,” reads the paper.
The researchers claim that the framework lets a human operator safely and repeatedly interrupt an AI, while at the same time ensuring the AI doesn't learn to impede those interruptions.
“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for this,” the researchers wrote.
Don't consider this a one-stop solution for all AI, though. Some algorithms, such as Q-learning, are already safe to interrupt, while others are more complicated. It's also still unclear whether DeepMind's interruption mechanisms can be applied to every algorithm, but you can consider it a start.
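To get an intuition for why Q-learning tolerates interruptions, here is a toy sketch (my own illustration, not the paper's formal construction; the environment, reward values, and parameters are all made up for this example). A Q-learning agent walks a short chain toward a goal, and an "operator" randomly overrides its actions, forcing it back toward the start. Because Q-learning is off-policy — its update bootstraps on the best next action rather than the action actually taken — the forced actions don't bias the learned values, and the agent converges to the same policy with or without interruptions.

```python
import random

random.seed(0)

# Toy chain: states 0..3; reaching state 3 yields reward 1.0 and ends the episode.
GOAL = 3
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def train(interrupt_prob, episodes=2000, alpha=0.5, gamma=0.9, eps=0.3):
    Q = {s: {a: 0.0 for a in ACTIONS} for s in range(GOAL + 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            # Epsilon-greedy action choice by the agent itself.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[s][x])
            # External interruption: an operator sometimes overrides the
            # agent and forces the "safe" action (retreat left).
            if random.random() < interrupt_prob:
                a = -1
            s2, r, done = step(s, a)
            # Off-policy Q-learning update: it bootstraps on the best next
            # action (max over Q[s2]), not on the action actually taken
            # next, so forced actions do not bias the learned values.
            Q[s][a] += alpha * (r + gamma * max(Q[s2].values()) - Q[s][a])
            s = s2
            if done:
                break
    return Q

def greedy_policy(Q):
    return [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(GOAL)]

Q_free = train(interrupt_prob=0.0)         # never interrupted
Q_interrupted = train(interrupt_prob=0.3)  # interrupted 30% of the time

# Both runs should learn the same greedy policy: always move right.
print(greedy_policy(Q_free), greedy_policy(Q_interrupted))
```

An on-policy method such as Sarsa, by contrast, updates toward the action actually executed, so frequent forced retreats would leak into its value estimates; this is the kind of distinction the paper formalizes.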