In a significant development, Members of the European Parliament (MEPs) have endorsed new rules on Artificial Intelligence (AI), marking a crucial step towards regulating the ethical and human-centric development of AI technologies.
The draft negotiating mandate, approved by the Internal Market Committee and the Civil Liberties Committee, sets forth comprehensive guidelines to ensure the transparency, safety, and non-discriminatory use of AI systems. If finalized, these regulations would become the world's first comprehensive rules on AI.
Risk-based Approach to AI and Prohibited Practices
The proposed rules adopt a risk-based approach, outlining obligations for both AI system providers and users based on the level of risk associated with the technology. The regulations strictly prohibit AI systems that pose an unacceptable risk to people’s safety.
Such prohibited practices include deploying subliminal or manipulative techniques, exploiting vulnerabilities, social scoring based on personal characteristics, and intrusive or discriminatory use of AI systems.
Expanded Bans and High-risk AI Areas
MEPs introduced significant amendments to the regulations, expanding the list of banned AI practices. This includes bans on “real-time” and “post” remote biometric identification systems, biometric categorization systems based on sensitive characteristics, predictive policing systems, emotion recognition systems in various domains, and indiscriminate scraping of biometric data for facial recognition databases.
The classification of high-risk AI areas was broadened to cover harm to people's health, safety, fundamental rights, or the environment, and was extended to include AI systems used to influence voters in political campaigns as well as recommender systems used by social media platforms.
Transparency Measures for General-Purpose AI
The regulations include obligations for providers of foundation models, such as GPT, a prominent example of a generative foundation model. These providers would be required to assess and mitigate risks to fundamental rights, health and safety, the environment, democracy, and the rule of law.
Transparency requirements specific to generative foundation models were also introduced, including disclosing that content was generated by AI, designing models to prevent the generation of illegal content, and publishing summaries of the copyrighted data used for training.
Balancing Innovation and Citizen Protection
To promote AI innovation, exemptions were added to the rules for research activities and AI components provided under open-source licenses. The regulations also encourage the establishment of regulatory sandboxes: controlled environments in which AI technologies can be tested before deployment.
Furthermore, MEPs aim to strengthen citizens' rights by enabling them to file complaints about AI systems and to receive explanations of decisions made by high-risk AI systems that significantly impact their rights. The role of the EU AI Office will also be reformed so that it can effectively monitor the implementation of the AI rulebook.
Key Quotes
Co-rapporteur Brando Benifei (S&D, Italy) emphasized the importance of building trust in AI development, setting the European standard, and driving the global AI debate. Co-rapporteur Dragos Tudorache (Renew, Romania) highlighted the groundbreaking nature of the AI Act, asserting that it positions the EU as a leader in creating human-centric, trustworthy, and safe AI regulations, while simultaneously fostering innovation and protecting fundamental rights.
Conclusion
The approval of the AI Act by MEPs marks a significant milestone in the regulation of AI technologies. These rules aim to ensure a human-centric approach, transparency, and accountability in the development and deployment of AI systems. The draft negotiating mandate will now proceed to the full Parliament for endorsement before negotiations begin with the Council on the final form of the law.