OpenAI hiccups show disagreement about the future of AI

The chaos surrounding the dismissal and reinstatement of Sam Altman as head of ChatGPT developer OpenAI is the result of a tug-of-war between two AI camps. On one side are those who, like Altman, want to push the development of artificial intelligence (AI) forward and make new versions available to the public, arguing that only in this way can the technology be adequately tested and refined.

On the other side are those who argue for thorough laboratory testing before unleashing such programs on humanity. This applies especially to so-called generative AI, which can create text, images, or videos from just a few keywords.

“Is this just another product, like social media or cryptocurrencies?” asks Connor Leahy, head of AI company ConjectureAI and an advocate of a cautious approach. “Or is it a technology that has the ability to surpass humans and become uncontrollable?”

Reports that the dispute over Altman was preceded by an internal letter flagging a potentially dangerous new development underscore these concerns.

Super AI may not be controllable
Ilya Sutskever, OpenAI’s chief scientist and, as a board member, partly responsible for Altman’s ouster at the end of last week, shares these concerns. He is critical of Altman’s strategy of integrating AI into as many applications as possible.

“We have no means to direct or control a potentially super-intelligent AI. Humans will not be able to reliably monitor AI systems that are much smarter than us,” he wrote in a blog post last summer.

The developers conference raised questions
Apparently, Sutskever was particularly concerned about the presentation of several new products at OpenAI’s first developer conference a few weeks ago. Programs based on the latest ChatGPT version are intended to function, among other things, as virtual assistants.

However, Sutskever now appears uneasy about his role in the ouster. “I regret my involvement in the actions of the board,” he said on Monday. He could not be reached for further comment on the matter.

Almost all staff threatened to quit
The expulsion of Altman, considered the face of the AI industry, sparked open rebellion among the workforce. Early this week, almost all of OpenAI’s approximately 700 employees threatened to resign if Altman did not return to his position. They also called for the resignation of all board members.

So far, generative AI has mainly supported people in their work, for example by summarizing lengthy texts. However, some experts warn that these programs could evolve into “artificial general intelligence” (AGI), capable of taking on increasingly complex tasks without human intervention.

They fear that such software could then control defense systems, spread political propaganda, or help produce weapons.

Established as a non-profit organization
OpenAI, based in San Francisco, was launched eight years ago as a nonprofit. This was intended, among other things, to prevent the profit motive from paving the way for a dangerous AGI that, in the words of its founding charter, “threatens humanity or represents an improper concentration of power.”

However, Altman has since set up a for-profit OpenAI division that markets ChatGPT. This helped him raise the billions needed to further develop the technology from investors such as the software company Microsoft.

In the spring, the startup’s co-founder also signed an open letter warning that AI could lead to the extinction of humanity. Earlier, experts had called for a moratorium on further development of the technology in a separate appeal.

Source: Krone
