A man widely considered the “godfather” of artificial intelligence (AI) says he quit his job at Google to speak freely about the dangers of the technology.
Geoffrey Hinton recently spoke to The New York Times and other news organizations about his experiences at Google and his wider concerns about AI development. He told the Times he left the search engine company last month after leading the Google Research team in Toronto, Canada, for 10 years.
During his career, the 75-year-old Hinton has pioneered work on deep learning and neural networks. A neural network is a computer processing system built to act like the human brain. Hinton’s work helped form the base for much of the AI technology in use today.
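To make that definition concrete, here is a minimal sketch of the idea, written for illustration only: a tiny feedforward neural network in Python that learns the XOR function through gradient descent. The layer sizes, learning rate and other values are illustrative assumptions, not anything taken from Hinton's research.

```python
import numpy as np

# Minimal illustration only: a tiny feedforward neural network
# (2 inputs -> 4 hidden units -> 1 output) trained on XOR.
# All sizes and values are illustrative choices, not anything
# from Hinton's actual work.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))   # input-to-hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden-to-output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: each layer applies its weights, then a nonlinearity.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: the chain rule gives the gradient of the squared error.
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Gradient descent: nudge every weight to reduce the error.
    W2 -= learning_rate * hidden.T @ d_output
    b2 -= learning_rate * d_output.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ d_hidden
    b1 -= learning_rate * d_hidden.sum(axis=0, keepdims=True)

print(output.round(2))  # approaches [[0.], [1.], [1.], [0.]]
```

Modern systems such as chatbots are built on the same basic idea, scaled up to billions of weights and trained on far larger amounts of data.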
In 2019, Hinton and two other computer scientists received the Turing Award for their separate work related to neural networks. The award has been described as the “Nobel Prize of Computing.” The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.
In recent months, a number of new AI technologies have been introduced. Microsoft-backed American startup OpenAI launched its latest AI model, GPT-4, in March. Other technology companies have released similar tools, including Google’s Bard system. Such tools are known as “chatbots.”
The recently released AI tools have demonstrated the ability to carry out human-like conversations and create complex documents based on short written commands.
Speaking to the BBC, Hinton called the dangers of such tools “quite scary.” He added, “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon will be.” He said he believes AI systems are getting smarter because of the massive amounts of data they take in and examine.
Hinton also told MIT Technology Review he fears some “bad” individuals might use AI in ways that could seriously harm society. Such effects could include AI systems interfering in elections or inciting violence.
He told the Times he thinks AI systems could create a world in which people will “not be able to know what is true anymore.”
Hinton said he retired from Google so that he could speak openly about the possible risks of the technology as someone who no longer works for the company. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he told MIT Technology Review.
Since announcing his departure, Hinton has said he thinks Google has “acted very responsibly” in its own AI development.
In March, hundreds of AI experts and industry leaders released an open letter expressing deep concerns about current AI development efforts. The letter identified a number of harms that could result from such development.
These included increases in propaganda and misinformation, the loss of millions of jobs to machines and the possibility that AI could one day take control of our civilization. The letter urged a six-month pause in the development of the most powerful AI systems.
Turing Award winner Bengio, Apple co-founder Steve Wozniak and Elon Musk, the leader of SpaceX, Tesla and Twitter, signed the letter. The organization that released the letter, the Future of Life Institute, is financially supported by the Musk Foundation.
Musk has long warned of the possible dangers of AI. Last month, he told Fox News he planned to create his own version of some AI tools released in recent months. Musk said his new AI tool would be called TruthGPT. He described it as a “truth-seeking AI” that would try to understand humanity, arguing that such a system would be less likely to destroy it.
Alondra Nelson is the former head of the White House Office of Science and Technology Policy, which seeks to create guidelines for the responsible use of AI tools. She told The Associated Press, “For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn’t only include AI experts and developers.”
Nelson added that she hopes the recent attention on AI can create “a new conversation about what we want a democratic future and a non-exploitative future with technology to look like.”
I’m Bryan Lynn.