Artificial intelligence puts humanity at “risk of extinction” – hundreds of experts have turned to the public with this warning. The bosses of several companies that are globally at the forefront of AI development have also signed. How real is the danger? And what does an AI say about that?
The statement consists of a single sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The call was published by the Center for AI Safety in San Francisco and signed by the elite of AI development, including Sam Altman, CEO of OpenAI; the heads of Google DeepMind and Anthropic; Microsoft’s chief technology officer; Taiwan’s digital minister Audrey Tang; and numerous AI experts from research and industry.
Many professionals united
The initiators told the New York Times that the warning was deliberately kept short and general in order to unite as many experts as possible behind it. Opinions differ, however, on exactly what threat AI poses and what countermeasures should be taken against it.
Recent advances in language models – the AI systems behind chatbots such as ChatGPT, developed by OpenAI – have raised fears that artificial intelligence could be used to spread misinformation on a massive scale. There are also warnings that millions of jobs could be lost.
In an interview with “Spiegel”, OpenAI CEO Sam Altman recently said he was “deeply concerned that biological warfare agents could be developed using AI systems.” It is crucial, he said, that people retain control over the technology, and its limits must be set in a democratic process. “As a company, we also need clarity and should therefore be regulated,” says Altman. In view of upcoming EU regulation, however, OpenAI’s CEO considered withdrawing from Europe just last week.
What does the AI itself say about this?
And what does artificial intelligence itself say about the potential danger it poses? krone.at asked ChatGPT, the chatbot developed by OpenAI, whether it wanted to destroy humanity. The AI assures: it has “no intentions, desires or ability” to harm any form of life. As a language model, it exists to provide information and help with various tasks. “It is important to remember that AI is a human-made and controlled tool whose impact on the world ultimately depends on how it is used by individuals and society,” the chatbot said in the exchange (see screenshot).
So far, so good. But isn’t that exactly what an AI with malicious intent would say? When confronted with this, the bot replies that it “understands the concerns”, but that there are stark differences between fictional depictions of destructive AI and the real technology, which is subject to “strict guidelines and security measures that prevent AI systems from causing harm”. Nevertheless, it is important to remain vigilant and to hold open discussions about the ethical use and potential risks of AI, ChatGPT’s diplomatic response concludes.
Yann LeCun, chief AI scientist at Mark Zuckerberg’s Meta group, does not share the concerns of his fellow researchers. LeCun would not sign the call, dismissing such warnings as “AI doomism”. One of his arguments is that machines that have been given laws cannot break them (see tweet above).
Source: Krone
