
ChatGPT boss says he’s created human-level AI, then says he’s ‘just memeing’

2023-09-27 13:45

OpenAI founder Sam Altman, whose company created the viral AI chatbot ChatGPT, announced on Tuesday that his firm had achieved human-level artificial intelligence, before claiming that he was “just memeing”.

In a post to the Reddit forum r/singularity, Mr Altman wrote “AGI has been achieved internally”, referring to artificial general intelligence – AI systems that match or exceed human intelligence.

His comment came just hours after OpenAI unveiled a major update for ChatGPT that will allow it to “see, hear and speak” to users by processing audio and visual information.

Mr Altman then edited his original post to add: “Obviously this is just memeing, y’all have no chill, when AGI is achieved it will not be announced with a Reddit comment.”

The r/singularity Reddit forum is dedicated to speculation surrounding the technological singularity, whereby computer intelligence surpasses human intelligence and AI development becomes uncontrollable and irreversible.

Oxford University philosopher Nick Bostrom wrote about the hypothetical scenario in his seminal book Superintelligence, in which he outlined the existential risks posed by advanced artificial intelligence.

One of Professor Bostrom’s thought experiments involves an out-of-control AGI that destroys humanity despite being designed to pursue seemingly harmless goals.

Known as the Paperclip Maximiser, the experiment describes an AI whose only goal is to make as many paperclips as possible.

“The AI will realise quickly that it would be much better if there were no humans because humans might decide to switch it off,” Professor Bostrom wrote.

“Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

Following Mr Altman’s Reddit post, OpenAI researcher Will Depue posted an AI-generated image to X/Twitter with the caption: “Breaking news: OpenAI offices seen overflowing with paperclips!”

OpenAI is one of several firms pursuing AGI, which, if deployed in a way that aligns with human interests, has the potential to fundamentally change the world in ways that are difficult to predict.

In a blog post earlier this year, Mr Altman outlined his vision for an AGI that “benefits all of humanity”, while also warning that mitigating risks poses a major challenge.

“If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility,” he wrote.

“On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”

