The new law places great emphasis on transparency: companies must inform users whenever they interact with an AI system, whether in phone calls or in chats handled by chatbots.
As of today, the European Union's Artificial Intelligence Act (AI Act) is in force, the world's first law to regulate systems that can act more accurately and efficiently than humans in many areas. These systems drive innovation but also carry significant risks, which the new common framework seeks to avoid. The standard aims to promote technological excellence and innovation while protecting human rights.
These are the five key points of the new rules:
Goals
The main objectives are to establish a harmonised legal framework in the European Union for the development, marketing, implementation and use of Artificial Intelligence (AI) systems, an area that can bring many benefits but also entails risks. The aim is also to foster innovation and position Europe as a leader in the sector.
Who does it apply to?
The rules apply to providers of AI systems that are put into service or placed on the market within the EU, or whose output is used in the EU, regardless of the providers' origin. They also apply to those who deploy and operate these systems.
The Regulation does not apply to third-country public authorities or international organisations when they use AI systems in the context of police or judicial cooperation with the EU, nor to systems for military use or used for national security, nor to systems used for the sole purpose of scientific research and development.
Types of AI systems
The law distinguishes four categories of AI systems: prohibited, high-risk, subject to transparency requirements, and general-purpose.
The prohibited category includes systems that use subliminal techniques to distort a person's behaviour in a way that could cause physical or psychological harm to them or others; biometric categorisation systems; the untargeted scraping of facial images from the internet; emotion recognition in workplaces and schools; systems that "score" people based on their behaviour or characteristics; predictive policing; and AI that manipulates human behaviour or exploits people's vulnerabilities.
However, the regulations allow for exceptions. "Real-time" biometric identification systems may only be used if a series of safeguards are met, for example to selectively search for a missing person or to prevent a terrorist attack.
Fines
Fines will be modulated according to the circumstances and will take into account the size of the provider. For those who fail to comply with the regulations, fines range from 7.5 million euros or 1.5% of a company's global turnover up to 35 million euros or 7% of global turnover.
Stages in the application of the new law
After its entry into force on 1 August, the law will be fully applicable twenty-four months later, with the exception of the bans on prohibited practices, which will apply six months after the date of entry into force, i.e. in February 2025.
In August 2025, the rules for general-purpose models such as ChatGPT will come into effect, and a year later, in August 2026, the law will become generally applicable, with some exceptions.
Obligations for high-risk systems will come into effect thirty-six months after entry into force, in August 2027.
Source: EITB

I am Mary Fitzgerald, a professional journalist and author of the Today Times Live. My specialty is in writing and reporting on technology-related topics. I have spent the last seven years extensively researching and understanding the field of technology so I can properly inform my readers about developments in this ever-evolving world.