ChatGPT developer OpenAI is concerned with how humans could control a potential digital “superintelligence”. “People won’t be able to reliably monitor artificial intelligence systems that are much smarter than us,” the San Francisco-based company wrote in a blog post on Wednesday. The company therefore argues that an automated process for supervising such systems must be developed.
The aim is to solve the central technical challenges of such a system within four years, and OpenAI plans to dedicate one fifth of its available computing capacity to the effort. The development team will be led by chief scientist Ilya Sutskever. Even though machine “superintelligence” still seems a long way off, OpenAI believes it is possible within this decade. According to the company, it could help solve many problems and become the most consequential invention in human history.
The move follows a warning from industry experts who believe artificial intelligence could pose an existential threat. One of the signatories is OpenAI boss Sam Altman. The alleged threat was not explained in detail, and critics accused the experts of using such warnings to distract from problems that artificial intelligence already causes today, such as discrimination by algorithms.
OpenAI developed, among other things, the chatbot ChatGPT, which sparked a new wave of hype around artificial intelligence. The program can produce sentences that are almost indistinguishable from a human’s. It was trained on huge amounts of data and, on that basis, estimates word by word how a sentence should continue. However, this principle also causes ChatGPT to sometimes output completely incorrect information.
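The word-by-word principle described above can be illustrated with a deliberately tiny sketch: a bigram lookup table that counts which word follows which in a sample text, then extends a prompt one word at a time. This is only a toy analogy for the autoregressive idea; ChatGPT itself uses a large neural network, not a lookup table, and the corpus and function names here are invented for illustration.

```python
import random
from collections import defaultdict

# Toy "training data": the model only ever sees this short text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow which word in the corpus (a bigram table).
followers = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word].append(nxt)

def continue_sentence(start, length=4, seed=0):
    """Extend `start` one word at a time, mimicking autoregressive generation."""
    rng = random.Random(seed)
    words = start.split()
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # the last word never appeared mid-corpus
            break
        # Sample the next word from the observed continuations.
        words.append(rng.choice(options))
    return " ".join(words)

print(continue_sentence("the cat"))
```

The sketch also hints at why such systems can output confident nonsense: the model only ever picks a statistically plausible next word, with no notion of whether the resulting sentence is true.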
Source: Krone
