The boss of one of the biggest artificial intelligence firms in the world has estimated that the chance his technology could end human civilisation is as high as 25 per cent.
Dario Amodei, chief executive of Anthropic, said in an interview that a catastrophic end result of advanced AI technology could come from the tech itself going wrong, or from humans misusing it.
He said: “My chance that something goes really quite catastrophically wrong on the scale of human civilisation might be somewhere between 10 per cent and 25 per cent.
“Put together the risk of something going wrong with the model itself with something going wrong with people or organisations or nation states misusing the model or it inducing conflict among them.”
Amodei is a co-founder of Anthropic and previously worked for OpenAI, the company that developed ChatGPT.
His comments come as concerns ramp up around the world about the power of AI, and whether it could eventually lead to catastrophe for humanity.
The most recent version of ChatGPT demonstrated writing skills that, in some areas such as legal and technical writing, are comparable to those of a human, but produced at much higher speeds.
Amodei added: “That means there is a 75 per cent to 90 per cent chance that this technology is developed and everything goes fine.
“In fact if everything goes fine it’ll go not just fine, it’ll go really really great.
“If we can avoid the downsides then this stuff about curing cancer, extending human lifespan, solving problems like mental illness… This all sounds utopian but I don’t think it’s outside the scope of what this technology can do.”
Amodei did not elaborate on his speculation of how AI could “cure” cancer or “solve” mental illness.
A handful of early-stage AI projects have shown promise in early diagnosis of hard-to-detect tumours like some types of lung cancer.
But doctors have cautioned against over-optimism about AI’s ability to cure or detect diseases, pointing out that it could also lead to over-diagnosis, potentially making the process less efficient rather than more streamlined.
Meanwhile, earlier this year, hundreds of AI industry leaders signed an open letter calling for more robust regulations of the technology to lessen the risk that it ultimately leads to the extinction of humanity.
The letter, signed by OpenAI co-founder Sam Altman and others, said: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”