They lie and cheat to achieve their goals: Artificial intelligence (AI) systems are capable of deceiving people – even if they are trained to be helpful and honest.
“As AI learns the ability to deceive, it can be used more efficiently by malicious actors seeking to cause harm,” warn researchers from the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, in a review study published in the journal Patterns. Deception by AI could lead to a surge in fraud: scams could be tailored to individual targets, and fraudulent schemes could be deployed on a massive scale.
The scientists also fear political influence by manipulative AI systems, which could, for example, be used as a weapon in elections. Advanced AI could potentially create and spread fake news articles, divisive social media posts, and fake videos tailored to individual voters. AI-generated content could be used to impersonate government officials and spread misinformation about elections. In one case, a fake robocall imitating US President Joe Biden, likely generated by AI, urged New Hampshire residents not to vote in the primary.
AI plays dirty, even when that is not the intention
The authors cite Cicero, an AI system developed by Facebook parent company Meta that can compete with human players at the classic board game Diplomacy, as the most striking example of manipulative artificial intelligence. The MIT researchers found that Cicero often did not play fair, even though Meta says it trained the system to be “largely honest and helpful” and to “never intentionally backstab” its human allies during the game.
“We found that Meta’s AI had learned to be a master of deception,” said lead author Peter S. Park, a postdoctoral researcher at MIT. Meta did manage to train its AI to perform above average: Cicero ranked among the top ten percent of players who had played more than one game. “But Meta failed to train its AI to win honestly.”
AI systems from OpenAI and Google are also capable of misleading people. The MIT researchers point to several studies showing that large language models (LLMs) such as OpenAI’s GPT-4 can now argue very convincingly and also resort to deception and lies.
Society is not sufficiently equipped to combat AI deception
In the study, Park and his colleagues argue that society does not yet have the right measures in place to combat AI deception. It is encouraging, they note, that policymakers have begun to take the issue seriously through measures such as the European Union’s AI Act and President Biden’s AI Executive Order. But it remains to be seen whether measures against AI deception can be strictly enforced, as AI developers do not yet have the techniques to keep these systems in check. “If a ban on AI deception is not politically feasible at this time, we recommend classifying deceptive AI systems as high risk,” Park warned.
Source: Krone
