Deep learning pioneer Geoffrey Hinton has quit Google

Geoffrey Hinton, a VP and engineering fellow at Google and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years, the New York Times reported today. According to the Times, Hinton says he has new fears about the technology…

The 75-year-old computer scientist has divided his time between the University of Toronto and Google since 2013, when the tech giant acquired Hinton’s AI startup DNNresearch. Hinton’s company was a spinout from his research group, which was doing cutting-edge work with machine learning for image recognition at the time. Google used that technology to boost photo search and more.  

Hinton has long called out ethical questions around AI, especially its co-optation for military purposes. He has said that one reason he chose to spend much of his career in Canada is that it is easier to get research funding that does not have ties to the US Department of Defense. 

“Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google,” says Google chief scientist Jeff Dean. “I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well.”

He adds: “As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

Hinton is best known for an algorithm called backpropagation, which he first proposed with two colleagues in the 1980s. The technique, which allows artificial neural networks to learn, today underpins nearly all machine-learning models. In a nutshell, backpropagation is a way to adjust the connections between artificial neurons over and over until a neural network produces the desired output. 
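
To make that idea concrete, here is a minimal sketch of the technique on a tiny one-hidden-layer network trained on the XOR problem. This is an illustration, not Hinton’s original formulation; the network size, toy data, and learning rate are all assumptions chosen for brevity.

```python
# Minimal backpropagation sketch: a one-hidden-layer network learning XOR.
# All hyperparameters (hidden size, learning rate, step count) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized connection weights between artificial neurons.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate (assumed)

for step in range(5000):
    # Forward pass: compute the network's current output.
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # predicted output

    # Backward pass: propagate the output error back through the layers
    # to get the gradient of the squared error with respect to each weight.
    err = out - y                        # error at the output
    d_out = err * out * (1 - out)        # back through the output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)   # back through the hidden sigmoid

    # Adjust every connection a little in the direction that reduces the error,
    # over and over, until the network produces the desired output.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]
```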

Hinton believed that backpropagation mimicked how biological brains learn. He has been looking for even better approximations ever since, but he has never improved on backpropagation.