
Former Google CEO Warns AI Could Endanger Humanity Within Five Years


“After Nagasaki and Hiroshima, it took 18 years to get to a treaty over test bans and things like that.”

Grim Projections

In his latest grim artificial intelligence forecast, ex-Google CEO Eric Schmidt says that there aren’t enough guardrails to stop the technology from doing catastrophic harm.

Speaking at a summit hosted by Axios this week, Schmidt, who chaired the National Security Commission on Artificial Intelligence, likened AI to the atomic bombs the United States dropped on Japan in 1945.

“After Nagasaki and Hiroshima, it took 18 years to get to a treaty over test bans and things like that,” he told Axios cofounder Mike Allen during the exchange at the website’s AI+ Summit in DC. “We don’t have that kind of time today.”

Although those building the technology, from OpenAI to Google itself and far beyond, have established “guardrails” or safety measures to rein the tech in, Schmidt said he thinks the current safeties “aren’t enough” — a take that he shares with many machine learning researchers.

Within just five to 10 years, the former Google boss said, AI could become powerful enough to harm humanity. The worst-case scenario, Schmidt continued, would be “the point at which the computer can start to make its own decisions to do things,” and if such systems gain access to weapons or other terrifying capabilities, he warned, they may lie to us about it.

To head off that kind of horrific outcome, Schmidt said that a non-governmental organization akin to the United Nations’ Intergovernmental Panel on Climate Change (IPCC) should be established to “feed accurate information to policymakers” and help them decide what to do if and when AI becomes too powerful.

Boss Fight

While the former Google boss has regularly publicized his concerns about AI, Meta’s AI czar Yann LeCun has increasingly taken the opposite stance.

Last month, he told the Financial Times that the tech is nowhere near smart enough to threaten humanity on its own. And over Thanksgiving weekend, he got into a spat with fellow AI pioneer Geoffrey Hinton — who notoriously quit Google earlier this year over his AI concerns — about whether large language models (LLMs) are sophisticated enough to “understand” what humans say to them.

“The debate on existential risk is very premature until we have a design for a system that can even rival a cat in terms of learning capabilities,” LeCun told FT, “which we don’t have at the moment.”

While all these smart and accomplished men keep issuing opposite signals about the dangers of AI, it’s hard to tell how scared to be. For now, it seems like the answer may, like with so many other things, lie somewhere in the middle — but as the goalposts keep moving, finding that measured position is as hard as ever.

More on AI reality checks: Facebook Researchers Test AI’s Intelligence and Find It Is Unfortunately Quite Stupid
