Why did the godfather of AI turn his back on it?
Geoffrey Hinton, who spent the past decade at Google, is known as one of the three “godfathers” of artificial intelligence. Even more notably, he won the ACM Turing Award, often described as the Nobel Prize of computer science, for his work on deep learning.
Having stepped away from Google, he argues that now that the general public has caught on, this is the pivotal moment to raise awareness of AI’s possible threats.
The gist of it is that we are now building digital intelligence that could outthink biological intelligence, and that it is very risky for a more intelligent thing to be controlled by a less intelligent one.
You pay an enormous cost in terms of energy, but when one of them learns something, all of them know it, and you can easily store more copies. So, the good news is, we’ve discovered the secret of immortality. The bad news is, it’s not for us.
Geoffrey Hinton for The Guardian
At the same time, the Biden administration has revealed plans to invest $140 million in AI research and development, spread across around 25 facilities nationwide.
Hinton also stated that he didn’t sign the petition to halt further AI development because he believes that isn’t the solution: even if the US stopped developing it, other countries, such as China or Russia, could continue to do so.
On that note, he said his core concern is AI falling into the hands of “bad players” who would use it to create so-called autonomous weapons.
“Let’s say that Putin wants to create an autonomous soldier with a goal to kill a certain person. But to do so, the autonomous soldier needs to create subgoals; for example, he needs to get to a certain road. How do you know that what he’s going to do to get to that road won’t be bad for other people? Or even for you, who initiated the task?” he wondered in an interview for CBS Mornings.
When asked what we could do about it, he answered that the best approach might be something like a Geneva Convention, but that to persuade the authorities, there needs to be a public outcry.
You need to imagine something more intelligent than us by the same difference that we’re more intelligent than a frog. And it’s going to learn from the web, it’s going to have read every single book that’s ever been written on how to manipulate people, and also seen it in practice.
Geoffrey Hinton for The Guardian
Mass media and certain influential figures have been especially active in stoking public fear about the potential of AI. However, Junaid Mubeen, author of “Mathematical Intelligence”, said these warnings should be taken seriously considering where they are coming from.
“I wouldn’t take Elon Musk’s warnings particularly seriously because he’s not an AI expert, but Geoffrey Hinton is one of the pioneers. The concerns that he’s raised around existential risk are to be taken seriously. We’re not talking about the Terminator doomsday scenario, but it’s the idea that these technologies are able to amass information and process it so effectively that, if you put them in the hands of bad human actors, they can wreak all kinds of havoc,” he said in an interview for BBC News.
Hinton also suggested that AI’s limitations may become apparent once it has read every document available online but still has no access to citizens’ private data.
“This development is an unavoidable consequence of technology under capitalism,” he concluded.