EU Takes the Lead with AI Act: Paving the Way for Responsible Innovation

December 13, 2023

In a significant development for the tech world, Members of the European Parliament (MEPs) have reached a political deal with the Council on the groundbreaking Artificial Intelligence Act. 

The agreement aims to strike a delicate balance: ensuring that AI used in Europe is safe and respects fundamental rights and democracy, while fostering innovation and positioning Europe as a leader in the field.

As part of its digital strategy, the EU recognizes the transformative potential of AI in sectors such as healthcare, transport, manufacturing, and energy. The AI Act, first proposed in April 2021, classifies AI systems according to the risk they pose, laying the foundation for the world's first comprehensive rules on AI.

The European Parliament prioritizes the safety, transparency, traceability, non-discrimination, and environmental friendliness of AI systems used in the EU. It wants AI systems to be overseen by people rather than by automation, and aims to establish a technology-neutral, uniform definition of AI that can apply to future systems.

The legislation establishes varying obligations for providers and users based on the level of risk associated with AI. Unacceptable risk AI systems, posing a threat to people, will be banned. High-risk AI systems, affecting safety or fundamental rights, will be categorized and subject to assessment before entering the market.

Generative AI, such as ChatGPT, will have to comply with transparency requirements. These include disclosing that content was generated by AI, designing models to prevent the generation of illegal content, and publishing summaries of the copyrighted data used for training.

AI systems with limited risk must comply with minimal transparency requirements. Users should be informed when interacting with AI, especially systems that generate or manipulate image, audio, or video content, such as deepfakes.

Banned Applications

The agreement addresses potential threats to citizens' rights and democracy by prohibiting certain applications of AI. These include biometric categorization systems using sensitive characteristics, untargeted scraping of facial images for facial recognition databases, emotion recognition in workplaces and educational institutions, social scoring based on personal characteristics, and AI systems manipulating human behavior.

Law Enforcement Exemptions

Safeguards and narrow exceptions have been established for the use of biometric identification systems in publicly accessible spaces for law enforcement purposes. Both real-time and post-remote biometric identification are subject to strict conditions and limited use, focusing on targeted searches, the prevention of specific terrorist threats, and locating individuals suspected of committing specific crimes.

Obligations for High-Risk Systems

Clear obligations have been outlined for high-risk AI systems, including a mandatory fundamental rights impact assessment. This also applies to sectors such as insurance and banking, and citizens will have the right to launch complaints about AI systems that affect their rights.

Guardrails for General AI Systems

General-purpose AI (GPAI) systems are required to adhere to transparency requirements, including technical documentation and compliance with EU copyright law. High-impact GPAI models with systemic risk face more stringent obligations, including model evaluations, risk assessments, adversarial testing, and reporting on serious incidents and cybersecurity.

Measures to Support Innovation and SMEs

The agreement promotes regulatory sandboxes and real-world testing to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from industry giants.

Sanctions and Entry into Force

Non-compliance with the rules can result in fines ranging from 7.5 million euros or 1.5% of global turnover up to 35 million euros or 7% of turnover, depending on the infringement and the size of the company.

Following the deal, co-rapporteur Brando Benifei emphasized the significance of the legislation in keeping rights and freedoms at the center of AI development. Co-rapporteur Dragos Tudorache highlighted the EU's pioneering role in setting robust rules on AI, protecting citizens and SMEs while fostering innovation.

The agreed text will now undergo formal adoption by both Parliament and Council to become EU law. As Europe takes the lead in regulating AI, the world watches to see how these rules will shape the future of technology, ensuring a human-centric approach to AI development and evolution.

Parliament had adopted its negotiating position on the AI Act on June 14, 2023, after which talks with EU countries in the Council began, with the goal of reaching an agreement by the end of the year. That goal has now been met with this deal.

About the author: George, Marketing Specialist