As artificial intelligence (AI) develops rapidly, it has drawn attention at both the Republican and Democratic national conventions, bringing a concern once confined to science fiction into mainstream discourse. Paul Solman of PBS NewsHour explores the debate: Is AI an existential threat, or are these fears exaggerated?
Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute, warns of a future where AI could end humanity. "If you keep on making AI smarter and smarter, they will kill you," he asserts. This concern echoes that of Geoffrey Hinton, a pioneer in AI, who compares the potential threat of AI to global nuclear war.
The idea of machines overtaking humans has been a theme in literature and film since the early 20th century, notably in Karel Čapek's 1921 play "R.U.R.," which introduced the word "robot." Films like "The Terminator" have vividly depicted AI turning against its creators, fueling public anxiety.
However, some experts argue these fears are overblown. Jerry Kaplan, an AI expert at Stanford, dismisses the notion that AI has any inherent desires that could lead it to rebel. "There’s no they there. They don’t want anything. They don’t need anything," he says. Kaplan believes the real danger lies in how humans use AI, potentially creating destructive technologies like autonomous weapons.
Despite these reassurances, Yudkowsky remains concerned. He fears that as AI becomes more intelligent and less controlled, humanity could become "collateral damage." The competition among AI companies, he argues, only heightens this risk.
Even AI systems like Ameca, a humanoid robot powered by ChatGPT, acknowledge the potential dangers. When asked to rate the likelihood of AI causing humanity's destruction, Ameca gave it a 3 out of 10, emphasizing the importance of vigilance.
Sam Altman, CEO of OpenAI, offers a more optimistic view. He believes AI can be a force for good, capable of both creation and destruction, depending on how it is used. Altman stresses the need to mitigate risks while maximizing benefits.
Reid Hoffman, co-founder of LinkedIn, shares this sentiment. He suggests that AI could help address other existential threats, such as pandemics or climate change. Hoffman argues that AI might even lower the overall risk to humanity by offering solutions to pressing global problems.
As AI continues to evolve, the debate over its potential to either save or destroy humanity remains unresolved. Experts are divided, with some warning of catastrophic risks and others highlighting AI’s promise to enhance human life. The world can only hope that the optimists are right.