Turing Award winner Bengio warns against ‘rogue AIs’

It may take a few more years or even decades before artificial intelligence (AI) becomes ‘superhuman’. Computer scientist Yoshua Bengio, often called the ‘Godfather of AI’, is convinced that we need to prepare for this now. He advocates a multilateral network of independent AI labs that prepare for the possible emergence of “rogue AIs” and combat them if the worst comes to the worst.

Future systems will not necessarily pose an existential threat to humanity, but “given the current state of technology, a threat to democracy, national security, and loss of control over superhuman AI systems is highly plausible. It’s not science fiction,” says Bengio, professor at the University of Montreal (Canada) and pioneer in the field of deep learning, in which models are trained on large amounts of data to solve complex tasks. This makes it all the more important to prepare for potential risks.

There is currently much discussion about deepfakes – realistic-looking voices, images, or videos created or edited using AI – for example in connection with the manipulation of elections. The Canadian expert is more concerned about dialogue systems designed to persuade people. “There are already some studies suggesting they perform comparably to or better than humans when it comes to changing someone’s mind on a given topic,” Bengio said. The interactions can be tailored individually to each person.

Fear of AI weapon systems
The easy availability of such knowledge alarms security authorities around the world. “The systems could be exploited to design or build all kinds of weapons,” says Bengio, who, along with Geoffrey Hinton and Yann LeCun, received the Turing Award, one of the most prestigious prizes in computer science, in 2018. The spectrum ranges from cyberattacks to biological and chemical weapons. There are concerns that AI companies do not have sufficient security measures in place.

Of course, AI also brings enormous benefits and opportunities; that is not the point at all. “But if this makes it easier for terrorists to do really bad things, then we have to make careful decisions. And it should not be the CEO of a company who makes these decisions,” said the expert, who also gave a lecture on Tuesday evening as part of the celebration of the 20th anniversary of the Faculty of Computer Science and the 50th anniversary of computer science teaching at the University of Vienna.

Warning about the “self-preservation goal”
Most frightening, however, is the potential loss of control over superhuman AI systems that spread beyond the computers they run on, could learn to manipulate humans, and might eventually even control robots or other industrial devices themselves. “If we manage to develop AI systems that are smarter than humans and pursue a ‘self-preservation goal’, it would be like creating a new species – and that’s not a good idea, at least until we better understand the consequences. No one knows how big the risk is. But that’s part of the problem,” Bengio explains.

How quickly the technology will develop in this direction is disputed; estimates range from three years to decades. The current enormous investments could certainly speed up the process. Ultimately, the only thing that matters is “getting the risks under control, bearing in mind that if things move quickly, it may be too late to create the right legislation and international treaties.”

AI labs must prepare democracies
Bengio sees one way to counter this threat in building a multilateral network of government-funded, nonprofit AI labs that prepare for the possible emergence of rogue AI systems. In the worst case, a safe, defensive AI could be deployed against them. A coalition of democracies would have to design this together, because the potentially negative effects of AI know no borders. The corresponding defense methods and many aspects of the research into them should be kept confidential to make it harder for a “rogue AI” to evade the new defenses, as he also recommends in an article in the Journal of Democracy.

The biggest impact would come from AI regulation under which developers must prove that their AI systems are safe and that control cannot be lost. “Right now, no one knows how to do it. But if you mandate something like that, the companies that have the money and the talent would already be doing a lot more research to build safe systems and putting their energy into protecting the public,” he said. After all, a company that produces new medicines must also scientifically prove that their use is safe. In the AI industry, however, these efforts do not exist because they are not a priority in the face of intense competition.

Ultimately, there are still numerous challenges in the scientific understanding of AI safety, but also a political responsibility to ensure that the right protocols for building safe AI are put in place as quickly as possible. This is the only way to ensure that “no person, no company, and no government can abuse this kind of power,” Bengio said.

Source: Krone
