Case Law Update
Patenting AI inventions: Emotional Perception AI Ltd v Comptroller-General
In November 2023, the High Court handed down a ruling that, for the first time, allows an invention based on artificial intelligence to be patented in the UK.
Emotional Perception AI Ltd developed an application that recommends music to end users through a trained artificial neural network (ANN). The ANN categorises music by reference to human perceptions and descriptions, and uses machine learning to develop its suggestions without human input, so that the application can recommend music files based on human perception and emotion, regardless of genre. Emotional Perception filed a patent application with the UK Intellectual Property Office (UKIPO) covering the system, including the ANN relied upon to deliver the music recommendations.
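The distinction the court later drew, between the training code a human writes and the learned behaviour that results, can be illustrated with a deliberately simplified single-neuron sketch. This is purely hypothetical: Emotional Perception's actual features and architecture are not public. The training loop below is human-authored code, but the weights it produces, and therefore the recommendations, are worked out by the system itself:

```javascript
// Hypothetical illustration only: a single "neuron" trained by gradient
// descent. The loop below is human-written code, but the values in `w`
// and `b` that ultimately drive the recommendations are learned.
function train(samples, epochs = 500, lr = 0.1) {
  let w = [0, 0];
  let b = 0;
  for (let e = 0; e < epochs; e++) {
    for (const { x, y } of samples) {
      // Sigmoid prediction, then a log-loss gradient step.
      const p = 1 / (1 + Math.exp(-(w[0] * x[0] + w[1] * x[1] + b)));
      const err = y - p;
      w[0] += lr * err * x[0];
      w[1] += lr * err * x[1];
      b += lr * err;
    }
  }
  return { w, b }; // no human chose these numbers
}

// Invented training data: each track is [tempo, brightness] in [0, 1],
// labelled 1 if listeners described it as "calming".
const model = train([
  { x: [0.2, 0.3], y: 1 },
  { x: [0.9, 0.8], y: 0 },
  { x: [0.1, 0.4], y: 1 },
  { x: [0.8, 0.9], y: 0 },
]);

// The recommendation applies criteria the system has worked out for itself.
function recommend(track) {
  const z = model.w[0] * track[0] + model.w[1] * track[1] + model.b;
  return z > 0 ? "calming" : "energetic";
}
```

On this toy data the trained neuron classifies slow, dark tracks as calming and fast, bright ones as energetic, even though no rule to that effect appears anywhere in the source code; that gap between the authored program and the learned criteria is the nub of the legal argument described below.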
The UKIPO, however, refused to grant the application, determining that the AI used in the ANN brought the application within the exclusion in the Patents Act 1977 for a “program for a computer”. Emotional Perception AI Ltd appealed the decision to the High Court. The High Court held that an ANN is not itself a “program for a computer” where, as in this case, the ANN uses machine learning to make recommendations and is “in substance, operating at a different level (albeit metaphorically) from the underlying software on the computer”. Sir Anthony Mann went on to hold that while the initial coding used to train the ANN could be a “program for a computer”, the ANN itself fell outside the exclusion in section 1(2)(c) of the Patents Act and was therefore not a “program for a computer”.
As a result of the decision, a system for providing recommendations to end users based on an ANN “not implementing code given to it by a human”, but instead making recommendations on the basis of the “application of technical criteria which the system has worked out for itself”, is capable of being patented.
Following this decision, the UKIPO stated that it is “making an immediate change to practice for the examination of ANNs for excluded subject matter. Patent Examiners should not object to inventions involving an ANN under the ‘program for a computer’ exclusion of section 1(2)(c)”.
This landmark judgment appears to signal a move towards the UK embracing AI inventions as patentable, which is likely to make the UK an enticing jurisdiction for AI development and a leading contributor to this newly developing area of law.
The impact of the decision will depend on the outcome of any appeal: the UKIPO has indicated its intention to appeal against the ruling of Sir Anthony Mann.
AI cannot be a patent inventor: the DABUS ruling
In December 2023, the Supreme Court ruled that artificial intelligence cannot be named as the inventor of a patent under current legislation, holding that the Patents Act 1977 (the Patents Act) requires a “natural person” to be the inventor of an invention.
Dr Thaler, an AI researcher, filed two patent applications with the Comptroller. The applications were made on behalf of DABUS (Device for the Autonomous Bootstrapping of Unified Sentience), an AI-powered machine owned by Dr Thaler.
In 2019 the Comptroller decided that the DABUS machine could not be an inventor under the Patents Act and that Dr Thaler, as owner of DABUS, was not entitled to apply for a patent. Dr Thaler appealed the decision, losing at every stage, before bringing a final appeal to the UK Supreme Court.
Supreme Court ruling
The Supreme Court dismissed the appeal. It did not concern itself with broader questions of the patentability of technology created by AI, but only with how to interpret the relevant provisions of the Patents Act. In its ruling, the Supreme Court answered the three questions, set out below, that it had posed for itself.
Could DABUS be deemed to be an inventor?
The Supreme Court confirmed that the Patents Act (sections 7 and 13) only allows an inventor to be a natural person. As DABUS is not a natural person, it cannot be deemed an inventor for the purposes of the Patents Act. The Supreme Court held that the Comptroller was right to decide that DABUS is not the inventor of any new product or process described in the applications and could not be the “inventor” for the purposes of section 7 or section 13 of the Patents Act.
Was Dr Thaler as the owner of any invention in any technical advance made by DABUS entitled to apply for and obtain a patent in respect of it?
The Supreme Court held “Section 7 does not confer on any person a right to obtain a patent for any new product or process created or generated autonomously by a machine, such as DABUS, let alone a person who claims that right purely on the basis of ownership of the machine.”
The Supreme Court went on to uphold the ruling of Elisabeth Laing LJ, who said, at para 103 of the judgment of the Court of Appeal: “Whether or not thinking machines were capable of devising inventions in 1977, it is clear to me that Parliament did not have them in mind when enacting this scheme. If patents are to be granted in respect of inventions made by machines, the 1977 Act will have to be amended.”
On this basis, Dr Thaler’s submissions were rejected, and he was found not to be entitled under the Patents Act to apply for or obtain a patent in respect of any invention of DABUS.
Was the Hearing Officer entitled to hold that the applications would be taken to be withdrawn?
In light of the above findings, the Supreme Court went on to hold that the Comptroller was entitled to find that the applications would be taken to be withdrawn once the sixteen-month period allowed for under the Patents Act had expired. As Dr Thaler had failed to identify any natural person who could be the inventor of the inventions described in the applications, in breach of section 13(2) of the Patents Act, and his ownership of DABUS did not itself allow for the grant of the patents applied for, the Supreme Court upheld the Court of Appeal’s ruling and found the applications to have been so withdrawn.
The decision was expected and is consistent with previous rulings and with the approach taken in most other jurisdictions (only South Africa’s intellectual property office has allowed an AI to be named as a patent inventor). The Court did not, however, consider the patentability of technology generated by AI-powered machines in general; it held only that an inventor must be a natural person and cannot simply be the owner of the machine. This is likely to be revisited as such advances become more frequent.
The decision does not prevent technology generated by an AI machine from being patentable; it means only that the machine cannot be named as the inventor, since the Patents Act requires an inventor to be a natural person. The mere fact of AI involvement in the development of a technology capable of becoming the subject matter of a patent application should not, by itself, negatively affect the granting of a patent.
YouTube’s ad-blocker detection under privacy scrutiny
YouTube relies on being able to place adverts next to videos on its platform to drive revenue, and also offers a premium service under which users can experience YouTube free of advertising. Many users, however, routinely use ad-blocking software (AdBlock) to prevent adverts appearing when they use YouTube. To combat this, YouTube uses its own AdBlock detection system: if a user is found to have AdBlock software enabled, YouTube may prevent that user (in Europe, among other places) from viewing its content.
Privacy campaigner Alexander Hanff, an expert advisor to the European Data Protection Board, has alleged that YouTube’s AdBlock detection breaches EU privacy law. Hanff has filed a complaint with the EU’s independent data regulator, accusing YouTube of failing to obtain the explicit user permission for its ad-blocker detection system required under Directive 2002/58/EC on privacy and electronic communications (ePR).
German Pirate Party MEP Patrick Breyer formally took up Hanff’s request for clarification of the legal position: specifically, whether the “protection of information stored on the device (Article 5(3) ePR) also cover (sic) information as to whether the user’s device hides or blocks certain page elements, or whether ad-blocking software is used on the device”, and, critically, whether this kind of detection is “absolutely necessary to provide a service such as YouTube”.
Breyer agreed with Hanff’s comments and stated “ad blockers protect us from illegal tracking of our online life and online harms. YouTube’s terms and conditions likely violate EU law. YouTube should offer surveillance-free advertising and stop its anti-adblock campaign now.”
The question facing the Irish Data Protection Commissioner, who will be conducting the investigation, is whether the JavaScript AdBlock-detection scripts operated by YouTube are equivalent to downloading and storing a cookie. If so, the relevant provisions of the ePR will apply, and YouTube will need users’ consent before running its AdBlock detection. YouTube, by all appearances, is ready to contest this: it has stated that it only identifies whether adverts have been “served”, not whether they have been “played”. When pushed, according to Wired magazine, YouTube stated that it was using AdBlock detection within the YouTube platform, but not on users’ devices. The Irish Data Protection Commissioner’s office has confirmed it is investigating the case.
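For context, a common way websites detect ad blockers (an assumption for illustration only; YouTube has not published how its own detection works) is to insert a “bait” element styled like an advert and check whether the browser has hidden it. A minimal sketch, with the browser-dependent measurement separated from the testable decision logic:

```javascript
// Sketch of the widely used "bait element" technique. This is an
// assumption for illustration; YouTube's actual detection is proprietary.
//
// In a browser, the bait would be created and measured like this:
//   const bait = document.createElement("div");
//   bait.className = "ad ad-banner adsbox"; // class names blockers target
//   document.body.appendChild(bait);
//   const probe = {
//     height: bait.offsetHeight,
//     hidden: getComputedStyle(bait).display === "none",
//   };

// Pure decision logic: ad blockers typically collapse or hide elements
// with ad-like class names, so a zero-height or hidden bait suggests one.
function adBlockDetected(probe) {
  return probe.height === 0 || probe.hidden;
}
```

The privacy argument turns on the fact that such a script runs on, and reads the state of, the user’s device, which is why campaigners contend it falls under Article 5(3) in the same way as accessing a cookie.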
Depending on the ruling made by the Irish DPC, data controllers deploying similar techniques will need to consider whether AdBlock detection is as invasive as downloading and storing a cookie and, if so, obtain the same level of consumer consent.
The EU Artificial Intelligence Act
Following lengthy discussions on several AI elements, including predictive policing, facial recognition, and the use of AI by law enforcement, members of the European Parliament, the EU Council, and experts from the European Commission have reached provisional agreement on the Artificial Intelligence Act (AI Act). Europe is trying to become a leader in the AI field by encouraging innovation while balancing fundamental rights, transparency, democracy, safety, sustainability, and the rule of law. As stated by Thierry Breton, Commissioner for the Internal Market, “The EU becomes the very first continent to set clear rules for the use of AI. The AI Act is much more than a rulebook — it’s a launchpad for EU startups and researchers to lead the global race for trustworthy AI”. The agreed text is yet to be formalised and awaits formal adoption by both the European Parliament and the Council of Ministers before it can become EU law. This may still be a lengthy process!
AI has over the last year been used to provoke any number of doomsday scenarios in the popular imagination. It is not surprising, then, that the EU has taken a risk-based approach to AI systems, reflecting in part public anxiety over the risks AI presents. The AI Act proposes that every AI application be assigned one of four risk profiles, each carrying a different regulatory burden: unacceptable risk, high risk, limited risk, and minimal or no risk. Each risk level is summarised below:
Unacceptable risk
Certain applications were agreed to be banned as “a threat to people”, according to the EU Parliament. These “unacceptable” uses of AI include:
- AI systems that manipulate human behaviour or can be used to exploit people’s vulnerabilities;
- Social scoring (using a person’s characteristics as a metric for classification);
- Biometric categorisation (using special category personal data); and
- Facial recognition databases and remote biometric identification (RBI) systems that indiscriminately scrape data from the internet or CCTV (narrow exceptions apply to the use of RBI for law enforcement purposes in public places).
High risk
AI systems will be assessed before being placed on the market; those considered to have a negative effect on safety or fundamental human rights will be deemed high-risk and subject to additional, ongoing technology and transparency requirements. High-risk applications include, among others, AI related to education, transport, employment and welfare, and the administration of justice. Before putting a high-risk AI system into service in the EU, the legal entity deploying it must conduct a “conformity assessment” to ensure that the application meets a long list of safety requirements. High-risk AI applications fall into two categories: (1) AI systems used in products covered by the EU’s product safety legislation; and (2) AI systems in specified areas that must be registered in an EU database (including infrastructure management, education, employment, law enforcement, migration, and legal interpretation).
Limited risk
Limited-risk AI applications include systems such as deepfakes and chatbots, which may have a wide range of uses. For this category the AI Act’s main focus is on transparency obligations: users must be told they are interacting with an AI system and given the choice whether to continue, allowing them to make an informed decision.
Minimal or no risk
The focus of the AI Act will be on High risk and Limited risk AI systems. Any AI system that does not fall into these categories will not be subject to any additional compliance burdens under the AI Act.
Business compliance obligations
Companies with high-risk and/or limited-risk AI products or services will be subject to transparency requirements and conformity assessment procedures, and this will be an ongoing exercise. The EU hopes that such assessments will become as routine for companies as the changes in behaviour already made to comply with the GDPR.
General-purpose and generative AI (GPAI) systems will also have to meet transparency requirements. These include disclosing that content was AI-generated, designing the model to prevent it from generating illegal content, and publishing summaries of the copyrighted data used for training.
Failure to follow the rules will have potentially expensive consequences: under the AI Act, companies may be fined up to €35 million or 7% of global revenue for breaches.
Europe has yet to publish a consolidated text and is still to agree and evaluate the final wording of the AI Act; currently, the proposed Act is a political agreement on what the regulation should cover and prohibit. However, in the global race to set up a regulatory framework that encourages AI development, the EU, and to a lesser extent the UK, have hit the ground running.
Companies will need to consider the risk profile that will apply to any products or services they deploy within the EU that incorporate an AI element. Depending on the categorisation adopted, they will need to prepare for the incoming mandatory technical requirements. At the very least, companies will need systems in place to record the associated risks properly, to strengthen data governance where needed, and to satisfy the increased transparency and record-keeping requirements.
The UK Artificial Intelligence (Regulation) Bill
In November 2023, the House of Lords completed the first reading of the Artificial Intelligence (Regulation) Bill (the “Bill”). The Bill looks to make provision for the regulation of AI and for connected purposes. Specifically, it sets out the basis for an AI Authority, its functions, and the regulatory principles the AI Authority must consider when regulating AI.
The primary role of the new AI Authority will be to ensure that the relevant regulators take account of artificial intelligence and that approaches to AI are aligned across sectors. Its responsibilities will include coordinating a review of relevant legislation, including product safety, privacy, and consumer protection law, to assess its suitability to address the challenges and opportunities presented by AI.
The Bill still has a long way to progress, but it reflects the principles set out in the Government’s 2023 white paper, which took a less interventionist approach to AI than that of the EU’s AI Act. For example, the Bill envisages that any restrictions or burdens relating to AI should be proportionate to their benefits, taking into account factors such as the nature of the service or product, risk levels, implementation costs, and the UK’s global competitiveness.
This is quite a different approach from that being pursued by the EU, and only time will tell which is more effective. It is difficult for the law to play a proactive role in bringing about a vibrant AI economy, but it has an important role in setting out the framework within which AI can develop. How to balance the risks presented by AI, perceived and real, against the need to encourage, foster, and stimulate AI without hampering innovation remains one of the biggest challenges for the political class today. I hope members will be forthcoming and engage with regulators and politicians alike on what an AI regulatory regime should look like.
Footnotes
1. See paragraph 56 of the judgment in Emotional Perception AI Ltd v Comptroller-General of Patents, Designs and Trade Marks [2023] EWHC 2948 (Ch) (21 November 2023) (bailii.org).
2. See paragraph 54 of the judgment in Emotional Perception AI Ltd v Comptroller-General of Patents, Designs and Trade Marks [2023] EWHC 2948 (Ch) (21 November 2023) (bailii.org).
3. See paragraph 76 of the judgment in Emotional Perception AI Ltd v Comptroller-General of Patents, Designs and Trade Marks [2023] EWHC 2948 (Ch) (21 November 2023) (bailii.org).
4. Alex Baldwin, ‘Breaking: AI Cannot Be Patent Inventor, Top UK Court Rules’, Law360.