An Austrian solution aims to prevent AI errors

In the future, many decisions will probably be made by artificial intelligence (AI). Especially in sensitive areas, guarantees are expected that its choices are sound and that serious errors are ruled out. A possible solution comes from Austria.

A team of researchers from the Vienna University of Technology (TU Wien) and the Austrian Institute of Technology (AIT) presented a method at a conference in Canada that can be used to verify whether certain neural networks are fair and safe.

Anagha Athavale from the Institute of Logic and Computation at TU Wien analyzes neural networks that assign input data to categories. The input could be a traffic situation, for example, on the basis of which a neural network decides whether a self-driving car should steer, brake or accelerate. The input could also be data about bank customers, based on which the AI decides whether someone gets a loan.

Robustness and fairness
According to the researchers, such a neural network needs two important properties: “robustness and fairness,” says Athavale. If it is robust, two situations that differ only in minor details should lead to the same result. A neural network is considered “fair” if it produces the same result for two inputs that differ only in a value that is not relevant to the decision.
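In the standard notation of the verification literature (a sketch; the paper's exact definitions may differ in detail), both properties take the same shape for a classifier f, an input x, a perturbation bound ε and a set P of protected attributes:

```latex
% Local robustness at x: inputs within distance \varepsilon of x
% must receive the same class as x.
\forall x'.\; \|x - x'\|_\infty \le \varepsilon \;\Rightarrow\; f(x') = f(x)

% Individual fairness at x: inputs that agree with x on every
% attribute outside the protected set P must receive the same class.
\forall x'.\; \bigl(\forall i \notin P.\; x'_i = x_i\bigr) \;\Rightarrow\; f(x') = f(x)
```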

As an example, the computer scientist gives a neural network for assessing creditworthiness and two people with very similar financial data but different gender or ethnicity: “These are parameters that should not influence the granting of credit. So the system should produce the same result in both cases.” That is not self-evident, as systems trained on biased data have repeatedly shown in the past.
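A minimal sketch of such a spot check, assuming a classifier exposed as a simple predict function (the interface and the attribute index are hypothetical, not the team's tool):

```python
import numpy as np

def flips_on_protected_attribute(predict, x, protected_idx, values):
    """Query the classifier on copies of x that differ only in one
    protected attribute and report whether the decision ever changes.

    predict: hypothetical function mapping a feature vector to a
    class label (e.g. 0 = reject loan, 1 = grant loan)."""
    baseline = predict(x)
    for v in values:
        x_cf = np.array(x, dtype=float)   # counterfactual copy of x
        x_cf[protected_idx] = v           # e.g. a different gender code
        if predict(x_cf) != baseline:
            return True                   # decision flipped: unfair pair found
    return False
```

A single flipped pair is a concrete fairness violation; the catch, as the next paragraph explains, is that checking individual inputs like this remains a purely local test.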

Techniques for verifying robustness and fairness have so far focused on defining the two properties locally: given a specific input, they check whether small deviations from it lead to a different result. The goal, however, is to define global properties that “guarantee that a neural network always has these properties, regardless of the input,” Athavale says.
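Written out, the difference is one quantifier (again a sketch in standard notation): the local check fixes one concrete input, while the global property ranges over all pairs of inputs, which makes it a so-called two-safety property:

```latex
% Local robustness: one fixed input \hat{x} is examined.
\forall x'.\; \|\hat{x} - x'\|_\infty \le \varepsilon \;\Rightarrow\; f(x') = f(\hat{x})

% Global robustness: a guarantee over every pair of inputs.
\forall x,\, x'.\; \|x - x'\|_\infty \le \varepsilon \;\Rightarrow\; f(x) = f(x')
```

Read literally, the global version is too strong: close to a decision boundary, two almost identical inputs must legitimately receive different classes. This is exactly the gap that the confidence idea in the next paragraph closes.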

The researchers have succeeded in developing such a system. They presented it this week at the 36th International Conference on Computer Aided Verification in Montreal, Canada. “Our verification tool not only checks the neural network for certain properties, but also provides information about its confidence level.” Such a confidence-based safety property is a major change in the way global properties of neural networks are defined. The method makes it possible to rigorously test a neural network and to guarantee certain properties with mathematical certainty.
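One way to read the quote: the global property only constrains inputs on which the network is sufficiently confident, so borderline cases near a decision boundary are exempt. A minimal sketch of the shape of such a confidence-gated property, evaluated on a single pair of inputs (the interface and the threshold name kappa are assumptions; the actual tool proves the property for all pairs rather than testing samples):

```python
import numpy as np

def confident_pair_consistent(predict_with_confidence, x1, x2, eps, kappa):
    """Confidence-gated global robustness on one pair of inputs:
    if the network is at least kappa-confident on x1 and x2 lies
    within eps of x1, both inputs must receive the same class.

    predict_with_confidence: hypothetical function returning
    (predicted class, softmax confidence) for a feature vector."""
    y1, c1 = predict_with_confidence(x1)
    y2, _ = predict_with_confidence(x2)
    diff = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    if np.linalg.norm(diff, ord=np.inf) > eps or c1 < kappa:
        return True    # the property does not constrain this pair
    return y1 == y2    # confident inputs must be classified consistently
```

A verifier establishes this for every admissible pair at once, for example by encoding two copies of the network and checking the implication symbolically, which is what turns a spot check into a mathematical guarantee.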

Source: Krone
