Artificial Intelligence (AI) is already part of our day-to-day lives, and its potential to shape the way we live in the future seems almost limitless.

AI “will bring a transformation as far reaching as the industrial revolution, the coming of electricity, or the birth of the internet”, Rishi Sunak, UK Prime Minister, speech to the Royal Society, October 2023

News that a new Beatles song will be released, with Artificial Intelligence (AI) used to isolate John Lennon’s original vocals and combine them with contributions from the other members of the Beatles, is just one example of AI’s remarkable power to do good and to bring enjoyment. However, as the UK government has recently acknowledged, such power and opportunity do not come without risk, and it is important that those risks are properly investigated and managed.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”, Center for AI Safety, May 2023

“In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence”, Rishi Sunak, UK Prime Minister, speech to the Royal Society, October 2023

In this article we look at what AI is and explain some of the words and expressions commonly associated with it. We also look at some of AI’s applications and begin to explore potential ethical and legal issues, which we hope to examine in more detail in subsequent publications.


In 1950, Alan Turing penned a renowned article in which he debunked arguments against the feasibility of developing intelligent machines. Some years later, a group of American scientists, hailing from various sectors of industry and academia, authored ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,’ simultaneously coining the phrase “artificial intelligence” (AI). Over the past decade, interest in AI has consistently surged across various metrics.

Each major technological advance brings risks, disruptions, and opportunities, and AI arguably presents the most significant potential of all. ChatGPT garnered over 100 million users in just two months, exemplifying AI’s capacity to affect our lives. It hints at a transformation in how we access information and obtain services, and is prompting various professions to re-evaluate their practices.

What is AI?

Artificial Intelligence or AI is a broad field of computer science that focuses on creating systems and machines capable of performing tasks that typically require human intelligence. These tasks include problem-solving, learning, decision-making, understanding natural language, and recognising patterns. AI encompasses a wide range of techniques and technologies designed to mimic human cognitive functions.

Other words and expressions commonly used in connection with AI include:

Generative AI: Generative AI refers to a subset of artificial intelligence that specialises in generating content, such as text, images, or even entire pieces of media. It employs techniques like neural networks to produce new and original content based on patterns and data it has learned. Generative AI can create realistic text, art, music, and more, often used in creative applications, content generation, and even simulations. It is the developments in Generative AI and the availability of applications of Generative AI such as ChatGPT in the last year or so which have really accelerated the awareness of AI and its potential and risks.
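By way of illustration only, the core idea of producing new content from learned patterns can be seen in a toy first-order Markov chain: it "learns" which word follows which in a small invented training text, then generates new text from those patterns. Real generative AI uses neural networks at vastly greater scale, but this sketch (all data invented for the example) conveys the principle:

```python
import random

random.seed(1)
# Tiny invented training text
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat saw the dog").split()

# Learn the patterns: which words follow each word, and how often
follows = {}
for current, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(current, []).append(nxt)

# Generate new text by sampling from the learned patterns
word, output = "the", ["the"]
for _ in range(7):
    word = random.choice(follows.get(word, corpus))
    output.append(word)
print(" ".join(output))
```

The generated sentence is new, yet every word transition in it was observed in the training text, which is precisely why questions about the rights in training data (discussed below) matter.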

Frontier AI: This is the expression currently being used by the UK government to describe highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced AI models.

Machine Learning (ML): Machine Learning is a subfield of AI that involves developing algorithms and models that enable machines to learn from data and improve their performance over time. It is particularly valuable in tasks where patterns or trends exist within large datasets, enabling machines to make predictions or decisions based on the information they’ve processed.
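A minimal, purely illustrative sketch of "learning from data" (the numbers are invented for this example) is fitting a straight line to past observations so the machine can predict future values — one of the simplest machine-learning techniques:

```python
def fit_line(xs, ys):
    """Learn the slope and intercept that minimise squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Invented training data: the model discovers the pattern y = 2x
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # 2.0 0.0
```

Given more data, the fitted parameters improve — which is the sense in which such systems "learn from data and improve their performance over time".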

Reinforcement Learning: Reinforcement Learning is a machine learning approach where an AI agent learns to make sequences of decisions by interacting with an environment. The agent receives rewards or penalties based on its actions, and it aims to maximize its cumulative reward over time. This approach is widely used in tasks like game playing and autonomous control systems.
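A toy sketch of this trial-and-error loop (entirely invented for illustration, not any production system) is Q-learning in a five-cell corridor: the agent is rewarded only at the far right, and over many episodes learns that "move right" is the best action everywhere:

```python
import random

random.seed(0)
n_states = 5                                # cells 0..4; reward in cell 4
q = [[0.0, 0.0] for _ in range(n_states)]   # value of (left, right) per cell
alpha, gamma, epsilon = 0.5, 0.9, 0.3       # learning rate, discount, exploration

for _ in range(500):                        # training episodes
    state = 0
    while state != n_states - 1:
        # Explore occasionally; otherwise pick the best-known action
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = q[state].index(max(q[state]))
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Nudge the estimate towards reward plus discounted future value
        q[state][action] += alpha * (
            reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

# The learned greedy policy: action 1 ("right") in every non-terminal cell
print([row.index(max(row)) for row in q[:-1]])
```

No one tells the agent the answer; it maximises cumulative reward purely from the penalties and rewards it experiences — the same principle used, at far greater scale, in game playing and autonomous control.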

Natural Language Processing (NLP): Natural Language Processing is a subfield of AI focused on enabling computers to understand, interpret, and generate human language. NLP techniques are used in applications like language translation, sentiment analysis, chatbots, and text generation.
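As a purely illustrative glimpse of one NLP task, a toy sentiment analyser can score text against hand-built word lists (the lists here are invented for the example; real systems learn such associations from data rather than using fixed lists):

```python
# Invented word lists for illustration only
POSITIVE = {"good", "great", "excellent", "helpful"}
NEGATIVE = {"bad", "poor", "unhelpful", "slow"}

def sentiment(text):
    """Classify text by counting positive and negative words."""
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The service was excellent and very helpful"))  # positive
print(sentiment("slow and unhelpful service"))                  # negative
```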

Expert Systems: Expert Systems are AI programs designed to emulate the decision-making abilities of a human expert in a specific domain. They use knowledge-based rules and facts to provide expert-level recommendations or solutions in specialised areas.
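The rule-based approach can be sketched in a few lines (the rules and thresholds below are invented for illustration; a real expert system chains hundreds of rules drawn from a curated knowledge base):

```python
# Invented knowledge base: (condition, recommendation) pairs,
# checked in priority order
RULES = [
    (lambda f: f["temperature"] > 38.0 and f["cough"],
     "possible flu - see a GP"),
    (lambda f: f["temperature"] > 38.0,
     "fever - rest and monitor"),
]

def advise(facts):
    """Return the recommendation of the first rule whose condition holds."""
    for condition, recommendation in RULES:
        if condition(facts):
            return recommendation
    return "no rule matched"

print(advise({"temperature": 38.5, "cough": True}))   # possible flu - see a GP
print(advise({"temperature": 39.0, "cough": False}))  # fever - rest and monitor
```

Unlike machine learning, the "expertise" here is written down explicitly by humans, which makes the system's reasoning easy to inspect but hard to scale.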

ChatGPT: ChatGPT is perhaps the best-known application of AI. It is an example of Generative AI, a Natural Language Processing model developed by OpenAI. The name “ChatGPT” stands for “Chat Generative Pre-trained Transformer”. It is part of the GPT (Generative Pre-trained Transformer) series of language models and is specifically designed for natural language understanding and generation, making it well-suited for tasks like chatbot interactions and text generation. It is, of course, not the only example of its kind; other applications include Microsoft Bing AI, Perplexity AI, Google Bard AI and Chatsonic.

What are the applications of AI?

Where to start? There is barely an industry or sector untouched by AI or which AI has the ability and potential to radically transform. Examples include the following:

Legal – The legal profession is increasingly embracing AI. One of the most immediate benefits of AI lies in its capacity to enhance the efficiency of legal tasks through automation. Generative AI has the potential to transform the process of generating a wide range of legal documents and information. For example, it can be utilised for legal research and analysis, creating legal documents, or providing general legal guidance.

Healthcare – In the healthcare sector, AI perhaps has the greatest potential to save and transform lives. In disease diagnosis, AI systems can analyse medical data such as X-rays, MRIs and patient records to assist doctors in diagnosing diseases like cancer and COVID-19; in drug discovery, AI is used to identify potential drug candidates and predict their effectiveness in treating various diseases.

Finance – AI is commonplace in the world of finance, whether in algorithmic trading, where AI algorithms facilitate high-frequency trading by making rapid decisions based on market data, or, sometimes controversially, in credit scoring, where AI helps financial institutions assess credit risk and determine loan eligibility for customers.

Retail – Anyone who has browsed an e-commerce website or done any online shopping will have experienced AI first-hand. Recommendation systems use AI to provide personalised product recommendations to customers based on their browsing and purchase history. Behind the scenes, AI is used in inventory management to optimise stock levels, helping retailers avoid overstocking or understocking.

Education – In the education sector there is the potential for AI to drive personalised learning by tailoring educational content to individual student needs, helping students learn at their own pace. The education sector has however also experienced some of the more challenging effects of AI with the use of AI by students to generate work. This in turn has led to the development of plagiarism detection applications which utilise AI tools to help educators identify instances of plagiarism in student submissions.

Energy – AI has huge applications in the energy sector, particularly in supporting the green agenda. Notably, it is used in energy management to optimise consumption in smart grids, reducing waste and costs, and in renewable energy forecasting to predict generation and help integrate renewable sources such as wind and solar into the power grid.

Ethical concerns

With the great opportunities and potential offered by AI come an almost equal number of ethical concerns. Many of these flow from inaccuracies and biases that may be present in the data on which an AI system is trained; others reflect a broader cultural concern about machines ultimately taking over human jobs and/or falling into the wrong hands. We summarise some of the main concerns below.

Bias and Fairness: AI systems can inherit biases from their training data, leading to discrimination in areas like hiring, lending, and law enforcement. Ensuring fairness and reducing bias in AI algorithms is a significant ethical challenge.

AI in Criminal Justice: The use of AI in predictive policing and sentencing can perpetuate existing biases. Ethical concerns include the fairness of AI-driven decisions and their impact on marginalised communities.

Social Manipulation: AI, particularly in the form of deepfake technology, can be used for social manipulation and misinformation, raising concerns about the authenticity of information and media.

Accountability: Determining who is responsible for the actions of AI systems is complex. When AI makes a mistake or causes harm, it can be unclear who should be held accountable.

Job Displacement: Automation driven by AI can lead to job displacement. Preparing for the economic and social impacts of automation and providing support for affected workers is an ethical concern.

Security: AI systems can be vulnerable to attacks and misuse. Ensuring the security of AI applications to prevent malicious use is an ethical imperative.

Privacy: AI systems often process vast amounts of personal data, raising concerns about data privacy. Protecting individuals’ data and ensuring it is used responsibly is a critical ethical consideration.

Legal Concerns

As well as ethical concerns the increasing use of AI brings with it a myriad of legal issues and concerns. We shall examine these in more detail in future articles but a brief summary of some of the main issues is set out below.

Many of the legal issues relate to the way in which AI engines learn and are developed. They will typically require a huge amount of data to learn and improve their performance and there are some serious concerns about how this data can be sensibly used without infringing or breaching the rights of others including:

Intellectual Property Infringement – Using content from the internet or other sources without permission is likely to infringe the intellectual property rights in those third-party sources. This in turn presents real problems for users of, for example, Generative AI, where work generated by the system may itself infringe the rights of third parties if permission was not obtained originally.

Data Privacy – The content that AI systems use to learn and improve their performance will often also contain information about real individuals. This may include sensitive or special category data, such as health or medical information or information about religious beliefs or sexual orientation. One of the main principles of the General Data Protection Regulation (GDPR) is that personal data must be processed lawfully, fairly and transparently. In the context of AI systems, it can be challenging to see how these basic principles can be complied with.

Contractual issues – Many websites, including content providers and streaming sites, now have terms of use which prohibit the use of their content for commercial purposes. Some go further and expressly prohibit the use of their content for teaching or developing AI engines. While there are questions about whether such terms of use are properly incorporated into a contract between the provider and the AI bot (including whether the bot itself can enter into a contract at all), on the face of it they make learning and development by AI systems in this way problematic.

Other issues arise when the AI system is used in practice. These include:

Accuracy – An obstacle to the adoption of Generative AI is ensuring the reliability and accuracy of its results. Since Generative AI is trained on vast amounts of text data, it may not consistently offer up-to-date or relevant information, potentially leading to inaccuracies or misinterpretations. This may even extend to so-called “AI hallucinations”, where the AI engine essentially invents answers (or at least extrapolates them from the data it has). In the legal sector this happened recently in a US case in which two attorneys were sanctioned for using Generative AI to draft legal briefs that contained fabricated quotes and citations, underscoring the potential for serious consequences in a legal context[1].

Liability – Lawyers have long been debating who is liable where an AI engine gets it wrong. Should that always be the creator of the AI system or does that position differ where there is no defect in the original programming but perhaps the AI engine has itself developed in an unexpected way? The answer to that is ultimately likely to be increased regulation and law making.

Ownership of IP – When AI creates intellectual property who owns it? Is this the original programmer of the AI or the AI engine itself or can there be a form of joint ownership? Does this depend on where in the world the IP is created or where registration is sought? Does it vary according to different IP rights?

Discrimination – As noted above the use of AI can lead to discrimination and it is important that these systems are designed to avoid bias. That is likely to be a function of both how the AI is originally programmed and how it learns. It is therefore likely to require regular monitoring and cross checking to ensure that no bias has crept in.

Governance and Regulation

It is common ground amongst most commentators that AI requires a legal framework to regulate its use and development. One of the key issues is that, with AI developing so quickly, it is difficult for regulators to keep up. This is therefore a fast-evolving picture, but we have set out below some of the principal initiatives and regulations currently underway.

UK White Paper – In March 2023, the UK government introduced a white paper outlining its “pro-innovation approach” to AI regulation. This approach aims to encourage AI innovation while establishing a regulatory framework focused on five key principles: safety, security, and resilience; transparency and explainability; fairness; accountability and governance; and contestability and redress. While not planning to make this framework a statutory requirement, the government emphasises the need to monitor and manage risks while promoting AI innovation.

Global AI Safety Summit – In October 2023, the UK government published a discussion paper on the “Capabilities and risks from frontier AI” and called for the establishment of a “…truly global expert panel nominated by the countries and organizations attending to publish a state of AI science report.” Following this, in early November 2023, the UK Prime Minister launched the world’s first AI Safety Institute in the UK, tasked with assessing the safety of emerging forms of AI. At the Summit, 28 nations, including the US, the UK and China, together with the EU, signed an international declaration recognising the need to address the risks posed by AI development: the “Bletchley Declaration”.

The European Union’s Artificial Intelligence Act – On 8 December 2023, the European Parliament and Council reached political agreement on the European Union’s Artificial Intelligence Act (“EU AI Act”), after months of negotiations. With this landmark piece of legislation, the EU seeks to create a far-reaching and comprehensive legal framework for the regulation of AI systems across the EU, aiming to balance innovation with ethical considerations. At its core, the EU AI Act adopts a risk-based approach, defining four risk classes, each covering different use cases of AI systems. The EU AI Act is expected to be officially published in the EU’s Official Journal in the coming months, and the majority of its provisions will come into force two years after that.

G7 International Guiding Principles on Artificial Intelligence and voluntary Code of Conduct for AI developers – On 30 October 2023, the G7 leaders agreed on Guiding Principles and a Code of Conduct. This effort will complement the binding regulations made by the EU co-legislators through the EU AI Act. The G7 has endorsed 11 Guiding Principles, offering guidance to organisations involved in developing, deploying, and utilising advanced AI systems like foundation models and generative AI. The aim is to enhance the safety and trustworthiness of this technology.

US Executive Order on Safe, Secure, and Trustworthy AI – Shortly before the UK hosted the AI Safety Summit, the White House released an Executive Order outlining potential shifts in the regulation and policy framework for AI in the United States. The Executive Order emphasises the goal of safeguarding diverse groups, including consumers, patients, students, workers, and children.


AI holds incredible potential as a valuable tool with a wide array of applications. Nevertheless, it is crucial to recognise the challenges and ethical aspects of its increased use and integration. It is also an area where specific regulation is absent, and without it there is a considerable challenge in applying existing laws.

Prior to embracing this technology, it is imperative to strike a balance between the potential advantages and the associated risks and drawbacks. Much depends on the responsible and transparent use of AI to ensure that its undoubted benefits are not outweighed by the risks and negatives.

[1] ‘Two US lawyers fined for submitting fake court citations from ChatGPT’, The Guardian (www.theguardian.com/technology/2023/jun/23)

Should you require any advice concerning the impact AI may have in your future, or if you have any questions regarding the matters raised in this blog, please contact Partner Nick Phillips or any member of our Intellectual Property team.

Please note that this blog is provided for general information only. It is not intended to amount to advice on which you should rely. You must obtain professional or specialist advice before taking, or refraining from, any action on the basis of the content of this blog.

Edwin Coe LLP is a Limited Liability Partnership, registered in England & Wales (No.OC326366). The Firm is authorised and regulated by the Solicitors Regulation Authority. A list of members of the LLP is available for inspection at our registered office address: 2 Stone Buildings, Lincoln’s Inn, London, WC2A 3TH. “Partner” denotes a member of the LLP or an employee or consultant with the equivalent standing.

Please also see a copy of our terms of use here in respect of our website which apply also to all of our blogs.

