
This is the second part of our two part series on the EU AI Act (“AI Act”). In Part 1 we introduced the EU AI Act and set out its main provisions. You can read Part 1 here. In Part 2 we look at who the AI Act applies to and what the impact is likely to be on businesses based in the UK.

Who does the AI Act apply to?

The AI Act applies to a broad range of entities and individuals involved in the development, deployment, and use of artificial intelligence systems within the EU. Specifically, it targets the following groups:

  1. Providers of AI Systems: This includes any organisation or individual that develops, markets, or places AI systems on the EU market. Providers are responsible for ensuring that their AI systems comply with the AI Act’s requirements, including conducting risk assessments, ensuring transparency, and maintaining accountability.
  2. Users of AI Systems: Entities or individuals that deploy AI systems within the EU for their own use or for offering services to others are subject to the Act. This includes businesses, public authorities, and other organisations that use AI in their operations, particularly when using high-risk AI systems.
  3. Importers and Distributors: Companies or individuals that import or distribute AI systems in the EU are also covered by the AI Act. They are required to verify that the AI systems they bring to the market meet the necessary regulatory requirements.
  4. Service Providers: Companies that integrate or customise AI systems for specific applications, such as software vendors or cloud service providers, are also subject to the AI Act. They must ensure that any modifications or integrations do not violate the compliance requirements of the original AI system.
  5. Organisations outside the EU: The AI Act applies to providers and users of AI systems located outside the EU if the AI systems affect people within the EU. This includes scenarios where AI systems are used to offer goods or services within the EU or monitor the behaviour of individuals within the EU. This extraterritorial application ensures that the Act’s protections extend to all individuals within the EU, regardless of where the AI system is developed or operated.

The AI Act’s broad scope is designed to ensure comprehensive oversight of AI technologies and to protect individuals within the EU from potential risks associated with AI, regardless of where the technology originates.

Does the UK need to comply with the AI Act?

The UK does not need to comply with the AI Act directly because the UK is no longer a member of the EU following Brexit. This means that EU regulations, including the AI Act, do not automatically apply within the UK.

The exception applies to UK businesses that operate in the EU. The AI Act has an extraterritorial scope, meaning it applies to any AI systems that are placed on the EU market or affect individuals within the EU, regardless of where the provider or user is located. Therefore, UK companies that want to operate in the EU or sell AI-related products or services within the EU will need to comply with the EU AI Act. This is similar to how companies outside the EU must comply with the GDPR if they process personal data of EU citizens.

What is the UK’s position on AI?

The UK Government set out its approach to the regulation of AI in its March 2023 white paper ‘A pro-innovation approach to AI regulation’ which outlines the Government’s vision for an agile regulatory framework that can adapt to evolving and emerging AI risks.

What is clear from the white paper is that the UK has favoured a flexible, principles-based approach to AI regulation, focusing on five key principles that are intended to be applied across all sectors:

  1. Safety, Security, and Robustness. The white paper emphasises the importance of ensuring that AI technologies are resilient to risks, including cybersecurity threats, and that they operate reliably and consistently in various environments.
  2. Appropriate Transparency and Explainability. This principle highlights the need for AI systems to provide insights into their decision-making processes, which is crucial for building trust among users and ensuring accountability.
  3. Fairness. The white paper stresses the importance of fairness in AI, ensuring that AI systems do not perpetuate or amplify biases and that they are developed and used in a way that is equitable and non-discriminatory.
  4. Accountability and Governance. Clear lines of accountability must be established for AI outcomes. The white paper calls for robust governance structures that ensure AI systems are subject to oversight and that those responsible for deploying AI are held accountable for its impacts.
  5. Contestability and Redress. This principle ensures that people affected by AI decisions have a means to contest those decisions and that there are procedures to rectify any issues.

Rather than implementing a single, overarching regulatory framework for AI, the UK’s white paper proposes that existing sectoral regulators (such as those in finance, healthcare, and transportation) take the lead in applying the above principles within their respective domains. The idea is that this approach would allow for tailored regulations that are sensitive to the specific challenges and opportunities of different sectors.

What is the regulatory framework in the UK?

Following the release of the white paper, the UK Government initiated a consultation process to gather feedback from stakeholders, including industry, academia, and the public. The outcome of that consultation was published in February 2024, and in many ways, it confirmed what the UK Government had set out in its white paper. The consultation revealed strong support for the white paper’s non-statutory approach to regulation based on a cross-sector set of principles. Therefore, it was generally agreed that a flexible, adaptable framework that avoids heavy-handed regulation is the right approach for fostering innovation while managing risks.

In the UK Government’s response to the consultation, several key sectoral and cross-economy regulators were asked to issue their strategy for the regulation of AI in their sector in line with the outlined principles and existing laws. Responses received from these key regulators, which included the Information Commissioner’s Office (“ICO”), Competition and Markets Authority (“CMA”), the Financial Conduct Authority (“FCA”), Office of Gas and Electricity Markets (“OFGEM”), Civil Aviation Authority (“CAA”) and Equality and Human Rights Commission (“EHRC”), were published by the Government on 1 May 2024 and are available to read here.

The strategic approach of each of these regulators is beyond the scope of this article, however, some common themes included the need for further research into consumer use of AI, cross-sector adoption of generative AI technology, and the need to support regulators with sufficient resource, expertise and leadership through a “Central Function”.

What we might expect to see over the next year or so is very little in the way of actual regulation, but further research and collaboration between the Government and regulators, and the publication of both individual sectoral and cross-sectoral AI regulatory guidance. The recently elected Labour Government may, however, take a different approach to regulating AI and push through AI legislation more quickly. Indeed, in the King’s Speech delivered on 17 July 2024, the UK Government said it would seek to establish the appropriate legislation to place requirements on those working to develop “the most powerful artificial intelligence models”. However, no AI bill was announced, and so there has been no firm commitment by the UK Government to legislate on AI.

Conclusion

The UK does not have to follow the EU’s AI Act domestically, however UK businesses that interact with the EU market will need to ensure their AI systems comply with the EU’s requirements if they want to sell or operate within the EU.

Any differences between UK and EU regulations could create challenges for companies operating in both jurisdictions. Whilst the UK Government acknowledges the importance of aligning its regulations with global frameworks, to ensure that UK businesses remain competitive in the international market and that AI technologies developed in the UK can be deployed globally, it remains to be seen how the UK’s “pro-innovation” approach will operate alongside stricter regimes, such as the EU’s AI Act.

The UK appears to be in no rush to regulate in this area, adopting a cautious, sector-based approach to AI regulation led by existing regulators. Whilst the UK’s AI framework may be able to adapt more readily to new and emerging AI technologies and risks, as the AI market evolves at a rapid pace, the UK needs to be careful not to be left behind. Things could well change under the new Labour Government, but as things stand, the EU leads the way.

We will continue to monitor the regulatory landscape for AI in the UK and EU and keep you updated as it develops. Please check back here for further updates. If you have any questions in relation to this topic, please contact Selina Clifford or any member of our Intellectual Property Team.


Please note that this blog is provided for general information only. It is not intended to amount to advice on which you should rely. You must obtain professional or specialist advice before taking, or refraining from, any action on the basis of the content of this blog.

Edwin Coe LLP is a Limited Liability Partnership, registered in England & Wales (No.OC326366). The Firm is authorised and regulated by the Solicitors Regulation Authority. A list of members of the LLP is available for inspection at our registered office address: 2 Stone Buildings, Lincoln’s Inn, London, WC2A 3TH. “Partner” denotes a member of the LLP or an employee or consultant with the equivalent standing.

Please also see a copy of our terms of use here in respect of our website which apply also to all of our blogs.
