The European Parliament recently approved a groundbreaking piece of legislation known as the Artificial Intelligence Act. This act, which was agreed upon in negotiations with member states in December 2023, aims to ensure the safety and compliance of artificial intelligence (AI) systems with fundamental rights, while simultaneously fostering innovation in the field. With 523 votes in favor, 46 against, and 49 abstentions, the regulation has received significant support from MEPs.
The primary goal of the AI Act is to protect fundamental rights, democracy, the rule of law, and environmental sustainability from the potential risks posed by high-risk AI systems. It seeks to establish Europe as a leader in AI innovation by setting clear obligations for AI systems based on their potential risks and level of impact.
The AI Act introduces a ban on certain AI applications that threaten citizens' rights. These include biometric categorisation systems based on sensitive characteristics, untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, emotion recognition in the workplace and schools, social scoring, predictive policing based solely on profiling a person or assessing their characteristics, and AI that manipulates human behaviour or exploits people's vulnerabilities.
While the use of remote biometric identification systems by law enforcement is prohibited in principle, exceptions are made for exhaustively listed and narrowly defined situations. For example, "real-time" remote biometric identification can be deployed only under strict safeguards, such as limits on its time and geographic scope and specific prior judicial or administrative authorization.
High-risk AI systems, due to their significant potential harm, are subject to clear obligations. These systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Examples of high-risk AI uses include critical infrastructure, education, employment, healthcare, banking, law enforcement, and justice.
General-purpose AI systems, and the models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training. More powerful models that could pose systemic risks will face additional requirements, including model evaluations and incident reporting.
To support innovation, regulatory sandboxes and real-world testing will be established at the national level and made accessible to SMEs and start-ups. This will allow for the development and training of innovative AI before its placement on the market.
Brando Benifei, co-rapporteur of the Internal Market Committee, highlighted the significance of the AI Act as the world's first binding law on AI, aiming to reduce risks, create opportunities, combat discrimination, and bring transparency. Dragos Tudorache, co-rapporteur of the Civil Liberties Committee, emphasized that the AI Act links AI to the fundamental values of society and marks a starting point for a new model of governance built around technology.
The AI Act is still subject to a final lawyer-linguist check and formal endorsement by the Council. It will enter into force twenty days after its publication in the Official Journal and be fully applicable 24 months later, with specific provisions following different timelines.
The Artificial Intelligence Act could have profound implications for talent acquisition. As AI increasingly becomes a tool for sourcing, screening, and evaluating candidates, the regulation's emphasis on transparency, accuracy, and human oversight will be crucial. Because employment is among the listed high-risk uses, AI systems used in recruitment must not inadvertently discriminate against certain groups of candidates, must comply with data protection law, and must provide clear explanations for their decisions. Talent acquisition professionals will therefore need to be more vigilant in selecting and using AI tools, ensuring those tools comply with the new rules. At the same time, the act's support for innovation and for SMEs could spur the development of new AI-driven recruitment solutions that are both ethical and effective, potentially transforming how organizations find and hire talent.
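As a purely hypothetical illustration (not a procedure mandated by the AI Act), a talent acquisition team auditing an AI screening tool for inadvertent discrimination might start with a simple selection-rate comparison, such as the "four-fifths rule" heuristic long used in US employment practice: if any group's selection rate falls below 80% of the highest group's rate, the tool warrants closer scrutiny. The group labels and numbers below are invented for the example.

```python
# Hypothetical adverse-impact check for an AI screening tool.
# Compares selection rates across candidate groups; a ratio below 0.8
# (the "four-fifths rule") is a common red flag for disparate impact.

from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, selected is bool.
    Returns the fraction of candidates selected within each group."""
    totals = Counter(group for group, _ in outcomes)
    chosen = Counter(group for group, selected in outcomes if selected)
    return {group: chosen[group] / totals[group] for group in totals}

def adverse_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Invented example data: 100 candidates per group, screened by the tool.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60 +
            [("B", True)] * 25 + [("B", False)] * 75)

rates = selection_rates(outcomes)    # {'A': 0.40, 'B': 0.25}
ratio = adverse_impact_ratio(rates)  # 0.25 / 0.40 = 0.625 — below 0.8
```

A real audit would go much further (statistical significance, intersectional groups, documentation for regulators), but even this minimal check makes the "maintain use logs and ensure human oversight" obligations concrete: someone has to collect outcomes per group and review the numbers.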