Blog - 28/08/2024
Intellectual Property
Introducing the EU AI Act and can we ignore it in the UK? Part 1 – An introduction
In this series of two legal updates, we look at the EU Artificial Intelligence Act (“AI Act”) and the position for businesses based in the UK. In this first update we introduce the EU AI Act and set out its main provisions. In Part 2 of this series, we examine the impact on businesses based in the UK and whether they can afford to ignore it.
The long-awaited AI Act officially came into force on 1 August 2024 and will regulate the use of Artificial Intelligence (“AI”) in the EU. This landmark piece of legislation, which was first proposed by the European Commission back in April 2021, represents a comprehensive regulatory framework aimed at governing the development, deployment, and use of AI technologies within the EU. As AI continues to rapidly evolve and integrate into various sectors, the AI Act seeks to ensure that these advancements are aligned with European fundamental rights and values, prioritising safety, transparency, and privacy.
Background
It has long been recognised that AI has the potential to drive innovation, enhance productivity, and address societal challenges in fields such as healthcare, transportation, and education. However, alongside these opportunities come significant risks, including potential harm to individuals’ rights, biased decision-making, and threats to privacy and security. These concerns have driven the EU to take a proactive approach in crafting a regulatory framework that balances innovation with protection.
The AI Act is considered to be the world’s first comprehensive horizontal legal framework for AI. Its aim is to ensure the safe, ethical, and trustworthy development, deployment, and use of AI systems within the EU. It builds on the groundwork laid by previous EU digital legislation, including the General Data Protection Regulation (GDPR), the Digital Services Act, the Digital Markets Act, the Data Act, and the Cyber Resilience Act. These initiatives underscore the EU’s commitment to creating a digital economy that respects individuals’ rights while fostering technological advancement.
Key Provisions of the AI Act
The AI Act introduces a risk-based approach to AI regulation, categorising AI systems into four distinct risk levels: unacceptable risk, high risk, limited risk, and minimal risk. Each category is subject to different regulatory requirements, reflecting the potential impact of the AI system on individuals and society. This approach ensures that the most potentially harmful AI applications are subject to the strictest regulations, while lower-risk systems face lighter requirements. This nuanced approach is intended to balance the need for oversight with the flexibility necessary for innovation.
- Unacceptable Risk: AI systems deemed to pose an unacceptable risk are prohibited under the AI Act. Examples of prohibited AI systems are set out in Article 5 of the AI Act and include: AI systems used to evaluate or classify individuals or groups based on social behaviour or personal characteristics in a way that could lead to detrimental or unfavourable treatment of those people; AI systems that use manipulative or deceptive techniques to distort behaviour and impair informed decision-making; and AI systems that assess the risk of individuals committing criminal offences based solely on profiling or on personality traits and characteristics.
- High Risk: AI systems classified as high-risk are permitted but are subject to stringent requirements before they can be placed on the market. Annex III of the AI Act identifies several areas in which AI systems are considered high-risk, including AI systems used in critical infrastructure, education, employment, law enforcement, insurance and judicial decision-making. High-risk AI systems must undergo conformity assessments to ensure they meet strict standards for transparency, accuracy and security. Additionally, these systems must provide clear explanations of their decision-making processes and allow for human oversight.
- Limited Risk: AI systems which present limited risks are required to adhere to certain transparency obligations, such as informing users that they are interacting with an AI system. This category includes AI-powered chatbots and virtual assistants.
- Minimal Risk: AI systems which present low or minimal risk, such as AI-enabled video games or spam filters, are subject to the least regulatory oversight. These systems can be developed and used without having to comply with the specific requirements imposed on higher-risk categories; however, they remain subject to general obligations, particularly around transparency, and are encouraged to follow best practices voluntarily. They will also still be subject to other applicable legislation, such as the GDPR.
The AI Act also provides specific rules for general-purpose AI (GPAI) models and lays down more stringent requirements for GPAI models with ‘high-impact capabilities’ that could pose a systemic risk. GPAI models posing systemic risk are subject to transparency requirements as well as risk assessment and mitigation obligations.
When will it come into force?
The AI Act came into force on 1 August 2024. There will be a two-year transition period to allow AI providers, users, and regulators time to adjust to the new requirements, develop the necessary compliance mechanisms, and ensure that AI systems meet the standards set by the AI Act, meaning the AI Act will become fully applicable on 2 August 2026.
The AI Act sets out a phased implementation timeline as follows:
- AI systems which pose unacceptable risks must be phased out by 2 February 2025, when the prohibitions begin to apply.
- Codes of practice will be ready by 2 May 2025.
- GPAI models must comply with the AI Act from 2 August 2025.
- The AI Act will be fully applicable from 2 August 2026, including high-risk systems defined in Annex III.
- Providers of GPAI models that have been placed on the market before 2 August 2025 will need to be compliant with the AI Act by 2 August 2027.
What are the penalties?
The AI Act outlines significant penalties for non-compliance, reflecting the seriousness with which the EU treats the regulation of AI systems. The penalties are structured in a similar way to those under the GDPR, with fines based on the severity of the violation. Here are the main penalties:
- Up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher): This highest tier of fines applies to the most serious violations, namely non-compliance with the prohibitions on certain AI practices set out in Article 5 of the AI Act.
- Up to €15 million or 3% of the total worldwide annual turnover (whichever is higher): This tier applies to non-compliance with specific obligations imposed on providers, authorised representatives, importers, distributors, deployers, and notified bodies, including obligations relating to transparency and to high-risk AI systems.
- Up to €7.5 million or 1% of the total worldwide annual turnover (whichever is higher): This applies to the supply of incorrect, incomplete, or misleading information to notified bodies or competent authorities.
In the case of SMEs, including start-ups, each of the fines referred to above is instead capped at the lower of the two figures, i.e. the fixed amount or the percentage of worldwide turnover, whichever is less (see the illustrative sketch below).
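By way of illustration only, the interaction between the standard “whichever is higher” rule and the SME “whichever is lower” rule can be expressed as a short calculation. The Python sketch below is our own simplified worked example using the top-tier figures quoted above; the function name and structure are hypothetical, and this is not an official calculation method or legal advice.

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float, is_sme: bool = False) -> float:
    """Maximum fine for one penalty tier under the AI Act's structure.

    Standard rule: the HIGHER of the fixed amount and the given percentage
    of total worldwide annual turnover. For SMEs and start-ups, the LOWER
    of the two figures applies instead.
    """
    pct_amount = pct * turnover_eur
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)


# Top tier (up to EUR 35m or 7%) for a company with EUR 100m worldwide turnover:
print(fine_cap(100_000_000, 35_000_000, 0.07))               # 35,000,000 -> the fixed EUR 35m cap (higher figure)
print(fine_cap(100_000_000, 35_000_000, 0.07, is_sme=True))  # 7,000,000  -> 7% of turnover (lower figure)
```

For a larger business the percentage would dominate instead: at €2 billion turnover, 7% is €140 million, so the standard higher-of rule would produce a €140 million cap.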
These penalties are intended to ensure that organisations take their obligations under the AI Act seriously and to protect individuals and society from the risks associated with non-compliant AI systems. The severity of the fines reflects the EU’s commitment to enforcing the regulation and maintaining high standards for AI development and deployment.
The AI Act underscores the EU’s commitment to shaping the future of AI in a way that aligns with its core values, setting a benchmark for responsible AI governance worldwide.
The AI Act has, however, faced criticism from various stakeholders. Some argue that the AI Act’s requirements could stifle innovation, particularly for startups and small businesses that may struggle to meet the stringent regulations for high-risk AI systems. Others have raised concerns about the potential for regulatory overlap and the complexity of compliance, which could create administrative burdens.
Additionally, there is ongoing debate about the effectiveness of the risk-based approach, with some experts questioning whether the current categories adequately capture the nuances of AI technologies and their potential risks.
Therefore, despite its ambitious goals, whether the AI Act can deliver all that it promises remains to be seen.
We will continue to monitor the regulatory landscape for AI in the UK and EU and keep you updated as it develops. Should you require any assistance in relation to this topic or if you have any questions, please contact Selina Clifford or any member of our Intellectual Property Team.
Please note that this blog is provided for general information only. It is not intended to amount to advice on which you should rely. You must obtain professional or specialist advice before taking, or refraining from, any action on the basis of the content of this blog.
Edwin Coe LLP is a Limited Liability Partnership, registered in England & Wales (No.OC326366). The Firm is authorised and regulated by the Solicitors Regulation Authority. A list of members of the LLP is available for inspection at our registered office address: 2 Stone Buildings, Lincoln’s Inn, London, WC2A 3TH. “Partner” denotes a member of the LLP or an employee or consultant with the equivalent standing.