Last year, Google engineer Blake Lemoine made headlines when he was fired after telling the media that he believed the artificial intelligence (AI) "LaMDA" had become conscious. Although his claim was derided at the time, AI systems have developed at a rapid pace since then; ChatGPT in particular has been on everyone's lips for months. Which brings us back to the question Lemoine raised: how do we recognize when an AI develops consciousness?
Living creature or "probability parrot"? Chatbots and voice AI such as Google's LaMDA or ChatGPT are essentially systems that work purely on the basis of statistical probability. They are trained on enormous amounts of text and, given a question, generate the string of words statistically most likely to follow it. The result is text of astonishingly high stylistic quality that is often quite sensible, but sometimes chilling misinformation delivered in a tone of conviction. But how do we know whether the AI is merely following its programming, or doing more than that?
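To illustrate what "statistically most likely continuation" means, here is a deliberately crude sketch in Python. It uses simple word-pair counts from a tiny made-up corpus, whereas real systems like LaMDA or ChatGPT use large neural networks trained on billions of words; the corpus, function names, and output here are illustrative assumptions, not how those products actually work.

```python
from collections import Counter, defaultdict

# Toy corpus; a real language model is trained on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_continuation(word, length=5):
    """Greedily append the statistically most frequent next word."""
    output = [word]
    for _ in range(length):
        candidates = following.get(output[-1])
        if not candidates:
            break
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

# Prints the greedy, most-probable continuation starting from "the".
print(most_likely_continuation("the"))
```

The point of the sketch is the mechanism: at no stage does the program "understand" anything; it only picks whichever word has most often followed the previous one in its training data, which is the intuition behind calling such systems "probability parrots".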
Source: Krone
