The regulation was agreed with Member States in negotiations in December 2023 and has now been endorsed by Parliament with 523 votes in favour, 46 against and 49 abstentions.
It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk artificial intelligence (AI). The law establishes a series of obligations for AI systems based on their potential risks and level of impact.
The new rules ban certain AI applications that infringe citizens' rights, such as biometric categorisation systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring systems, predictive policing (when based solely on profiling a person or assessing their characteristics) and AI that manipulates human behaviour or exploits people's vulnerabilities will also be prohibited.
Law enforcement is in principle prohibited from using biometric identification systems, except in very specific and well-defined situations. "Real-time" biometric identification may only be deployed if strict safeguards are met, for example if its use is limited to a specific time and place and is subject to prior judicial or administrative authorisation. Such cases may include the targeted search for a missing person or the prevention of a terrorist attack. Using these systems after the fact is considered a high-risk use and requires judicial authorisation linked to a criminal offence.