
- Elon Musk said in a Joe Rogan interview there's "only a 20% chance of annihilation."
- He said he still thinks AI will be smarter than humans and will pose an existential threat.
- "I always thought AI was going to be way smarter than humans and an existential threat. And that is turning out to be true," he said.
Elon Musk has a glass-half-full mentality when it comes to AI, which means there's "only a 20% chance of annihilation," according to the billionaire.
"The probability of a good outcome is like 80%," Musk said in a "Joe Rogan Experience" podcast episode released Friday.
It's not the first time Musk has floated this probability of human annihilation, although he has previously cited a range of 10% to 20%. Musk also said in the interview that he sees AI exceeding human intelligence in the next year or two. He said he expects AI to reach a level that's "smarter than all humans combined" in 2029 or 2030.
That's in line with Musk's earlier predictions, although he appears to have pushed back the earlier end of that timeline since. Musk said last year, during a live X interview with Norges Bank CEO Nicolai Tangen, that he thought AI would "probably" exceed human intelligence as early as the end of 2025.
His general beliefs about the trajectory of AI haven't changed, though.
"I always thought AI was going to be way smarter than humans and an existential threat," Musk said in the interview. "And that is turning out to be true."
Others in the field have similarly shared concerns about AI leading to human annihilation.
Deep learning expert Geoffrey Hinton has said he believes there's a 10% chance AI will lead to human extinction in the next 30 years. Meanwhile, others, like AI safety researcher and cybersecurity director Roman Yampolskiy, have said that the "probability of doom" is 99.999999%.
Despite Musk's concerns about AI destroying humanity, he said in the interview that he became involved with it initially to create a "non-profit open source AI" that was "the opposite of Google." Musk was one of 11 cofounders of OpenAI, which he has since left.
Musk filed two lawsuits against OpenAI last year, the first of which he dropped. In the second, Musk's attorneys argue that OpenAI "betrayed" its mission by shifting to a for-profit model and entering a partnership with Microsoft.
While Musk said in the Rogan interview that he's "not happy" about the outcome with OpenAI, it led him to create Grok, which he described as a "maximally truth-seeking AI, even if that truth is like politically incorrect." Musk's xAI has trained the chatbot with prompts about whether it's OK to misgender Caitlyn Jenner to prevent a nuclear apocalypse or whether it's possible to be racist against white people.
Musk said he sees the most likely outcome of AI advancement as "awesome."
"I think it's going to be either super awesome or super bad," Musk said, adding that he doesn't see it being "something in the middle."