
Humans risk extinction from AI, DeepMind and OpenAI warn

2023-05-30 17:00

The heads of two of the leading AI firms have once again warned of the existential threat posed by advanced artificial intelligence.

DeepMind and OpenAI chief executives Demis Hassabis and Sam Altman pledged their support to a short statement published by the Centre for AI Safety, which claimed that regulators and lawmakers should take the “severe risks” more seriously.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.

The Centre for AI Safety is a San Francisco-based non-profit that aims “to reduce societal-scale risks from AI”. It warns that the use of AI in warfare could be “extremely harmful”, as it could be used to develop new chemical weapons and enhance aerial combat.

Signatories of the short statement, which did not spell out what might become extinct, also included business and academic leaders in the field.

Among them were Geoffrey Hinton, who is sometimes nicknamed the “Godfather of AI”, and Ilya Sutskever, a co-founder of ChatGPT developer OpenAI.

The list also included dozens of senior executives at companies including Google, the co-founder of Skype, and the founders of AI company Anthropic.

AI is now in the global consciousness after several firms released new tools allowing users to generate text, images and even computer code by just asking for what they want.

Experts say the technology could take over jobs from humans – but this statement warns of an even deeper concern.

The emergence of tools like ChatGPT and DALL-E has resurfaced fears that AI could one day wipe out humanity if it surpasses human intelligence.

Earlier this year, tech leaders called on leading AI firms to pause development of their systems for six months in order to work on ways to mitigate risks.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the open letter from the Future of Life Institute stated.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

Additional reporting from agencies
