FAST Legal Update Member Bulletin – May 2025

In this bulletin, we explore the latest legislative developments in the UK in the form of the Artificial Intelligence (Regulation) Bill, and provide a progress update on the Data (Use and Access) Bill, which we covered in our last edition. We also examine the recent case management decision in the ongoing and hotly anticipated Getty v Stability AI case and various Codes of Practice published by the UK Government, and we explain the rationale behind a major regulatory fine recently issued by the ICO. Finally, we consider a legal row between the UK Government and Apple concerning information access under the Investigatory Powers Act.

Legislative Updates

Artificial Intelligence (Regulation) Bill

Private Members’ Bill

Post-Brexit, the EU and UK have charted remarkably distinct courses on AI regulation. The EU has adopted a stricter regime, applying uniform rules across all member states with onerous compliance obligations. In contrast, the UK has followed its own non-statutory, principles-based approach, choosing to rely on existing frameworks and a more flexible focus on sector-specific guidance.

Amid this regulatory divergence, Lord Holmes has introduced the Artificial Intelligence (Regulation) Bill (the AI (Regulation) Bill) in the House of Lords to replace fragmented guidance in the UK with a clear legislative framework.[1] The AI (Regulation) Bill proposes the creation of a new AI authority to oversee AI regulation and monitor AI development. It adopts a human-focused approach, seeking to implement ethical guidelines to prevent discrimination, bias and other harmful effects while stressing the importance of public engagement and transparency. The Bill also seeks to ensure regulators from different sectors of the economy are aligned, and that any regulatory gaps are identified. The first reading of the AI (Regulation) Bill took place in the House of Lords on 4 March 2025, with the second reading taking place on 22 March.

The Bill is a private members’ bill, so it is unlikely to become law, but it is noteworthy as it reflects the likely trend towards tighter AI regulation in the UK. Businesses should assess what their compliance obligations would be under the proposed framework and prepare for the increased transparency and accountability such measures may bring. We can expect significant movement on AI regulation over the next year.

Government-backed legislation on the horizon?

Separately, an AI bill that the government was expected to publish has been delayed. The government was expected to introduce new laws to regulate AI in the UK shortly after being elected in July 2024, but publication has now been pushed back by a further six months. The delay is thought to have been caused by ministers reassessing their approach following developments in the United States. In particular, a proposed bill may have required large language models to be tested by the UK’s AI Safety Institute, a position which is not aligned with the US’s de-regulatory stance on AI.

The UK Government appears keen to align its AI policies with the United States so as not to dissuade AI investment from American technology companies. US Vice President JD Vance has discouraged governments from putting in place barriers that would deter “innovators from taking the risks necessary to advance”, and the UK has already shown the potential for an alignment of approach with the US by also refusing to sign the Paris AI Summit declaration in February. The UK Government cited concerns regarding national security and “global governance” as the reasons it could not add its name to the declaration, while seeking to counter suggestions that it was simply following the Trump administration’s lead.

AI Opportunities Action Plan

In the context of stalled progress on AI regulation, the UK Government has still sought to prove that it is a leader in AI innovation. On 13 January 2025, the government announced its AI Opportunities Action Plan, which outlines the UK’s strategy to enhance its position in the global AI market. The Action Plan is split into three sections, reflecting the government’s three key goals:

  1. Lay the foundations to enable AI;
  2. Change lives by embracing AI; and
  3. Secure our future with homegrown AI.

The recommendations in each section are based on a set of core principles:

  • Be on the side of innovators: all elements of the Action Plan should benefit the people and organisations trying to innovate and create new ideas.
  • Invest in becoming a great customer: government purchasing power can significantly improve public services, shape new markets in AI, and boost the domestic ecosystem.
  • Crowd in capital and talent: the UK needs to attract talent and investors from around the world to start or scale up companies in the UK, as well as investment in AI infrastructure.
  • Build on UK strengths and catalytic emerging areas: with a focus on companies in the AI application and integration space and emerging areas of research and engineering, such as AI for science and robotics.

By focusing on these core principles, the government seeks, through the AI Opportunities Action Plan, to “unleash AI’s potential to drive growth, accelerate scientific discovery and tackle important, real-world problems”.[2] It is undeniable that the UK government has set out its stall: it is committed to advancing AI, but in a responsible and regulated manner.

Data (Use and Access) Bill

The Data (Use and Access) Bill (the DUA Bill) was passed by the House of Lords (with amendments) and sent to the House of Commons on 5 February. The Public Bill Committee has completed its work and reported on the DUA Bill amendments; as of 12 March 2025, the Bill is approaching the report stage. Amendments can be made to the DUA Bill at the report stage and will be considered if selected by the Speaker.

On 11 March 2025, the Public Bill Committee of the House of Commons voted to remove clauses 135 to 139 of the DUA Bill, which concerned compliance with UK copyright law, and related transparency obligations, for operators of web crawlers and general-purpose AI (GPAI) models. These clauses had been added to the Bill in the House of Lords.

The part of the removed clauses that related to GPAI was intended to require operators to disclose information regarding the text and data used in GPAI model pre-training, training and fine-tuning, including the web addresses accessed and information that can be used to identify individual works.

Another notable amendment concerned the research provisions. Scientific research is a purpose of data processing that is granted various exemptions under the UK GDPR. The definition of scientific research has been amended to introduce a public interest requirement, which will limit how personal data can be processed for research purposes. This would apply to all scientific research activities and would restrict reliance on the research exemption for commercial purposes, which may in turn affect how AI developers rely on it.

The DUA Bill is expected to become law later this year; however, further changes should be anticipated, as the Bill has already undergone a significant number of amendments across its readings in the Commons and Lords. Businesses should prepare for the workload that compliance with the DUA Bill will require, particularly given the reforms will take effect shortly after it becomes law. Organisations will need to review their procedures and introduce suitable training for their employees to ensure compliance, particularly in relation to data processing, privacy notices and data governance, although it should be noted that the aim of the Bill is to ease the compliance burden for organisations. We would recommend a proactive approach, in particular keeping abreast of developments in the DUA Bill’s passage before it becomes law.

Case Law Update

Getty v Stability AI

The High Court has previously dismissed Stability AI’s (Stability) application to strike out the two claims brought against it by Getty Images (Getty), stating that the claims needed to be heard at trial.

For those of you eagerly awaiting the high-profile case going to trial and seeking clarity on whether unlicensed third-party data can be used to train AI models, there has been a development: Mrs Justice Smith has recently delivered a reserved judgment.

Brief background

In January 2023, Getty Images, a media company that supplies digital content such as stock images, videos and audio clips, brought a claim against Stability AI, an AI development company which has developed a model called Stable Diffusion that creates synthetic images (images which are computer generated, as opposed to being taken with a camera). Getty alleges that Stability AI, without its consent, scraped circa 12 million images and videos from Getty’s website in order to train Stable Diffusion.

Reserved judgment

During a case management conference in November 2024, Stability AI applied to exclude Thomas M Barwick Inc (the sixth claimant, which had exclusively licensed works to Getty Images) from acting as a single representative for a class of over 50,000 copyright owners who had also exclusively licensed works to Getty Images, which works are now alleged to have been infringed by Stability AI.

Thomas M Barwick claimed to represent those who own copyright in artistic or film works exclusively licensed to Getty Images (US) Inc, alleging that the copyright in those works had been infringed by the Defendant. The class was identified as those persons who had entered into an exclusive licence for works that were used to train Stable Diffusion. The High Court decided in favour of Stability AI and rejected Thomas M Barwick’s claim to represent the other copyright owners, finding that the class could not be satisfactorily identified for the following reasons:

  • identification was dependent on whether each owner’s copyright was deemed to have been infringed, and as this is a key issue in the litigation, the members of the class would not be identifiable until the litigation was complete; and
  • even without the requirement for copyright in the works to have been infringed, identification of the class would still be uncertain as it would be very difficult to ascertain which works had been used to train Stable Diffusion.

This procedural judgment offers a glimpse of the difficulty UK courts face in balancing the freedom of AI developers, for the benefit of innovation within the UK, against the interests of rights-holders, given that these deep machine learning tools require such large datasets to be trained.

It is currently thought that the trial will begin on 9 June 2025. The eventual judgment could have significant ramifications, as it may provide guidance on whether using copyrighted works to train generative AI tools constitutes a breach of intellectual property rights in the UK. As it stands, there is no definitive position on whether scraping the internet for data to train AI tools will be deemed an infringement of copyright and database rights. We will continue to monitor this case and share any developments in future issues.

Horizon Scanning

Looking forward, we are still awaiting the outcome of the UK Government’s consultation on copyright and AI. Over the course of the year so far, the consultation has received mixed and mostly negative feedback. Two key stakeholders, OpenAI and Google, have rejected the government’s proposed approach to resolving the dispute over artificial intelligence and copyright. OpenAI has called for a broader copyright exemption for AI, rejecting the opt-out model and saying that experience from other regions, including the EU, demonstrated that opt-out models encounter “significant implementation challenges”. Google has said that opting out should not automatically guarantee compensation for rights holders and insisted that AI training must occur on an open web, stating that rights holders already had the ability to exercise “choice and control” by blocking web crawlers from scraping their content.
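
In practice, the “choice and control” Google refers to is typically exercised through a website’s robots.txt file. As a minimal illustration only (the crawler tokens shown are those published by OpenAI and Google for AI training crawlers, but the current list should always be checked against each vendor’s documentation), a site wishing to opt out might publish:

    # Illustrative robots.txt entries blocking AI training crawlers
    User-agent: GPTBot            # OpenAI's web crawler
    Disallow: /

    User-agent: Google-Extended   # Google's AI training opt-out token
    Disallow: /

It is worth noting that robots.txt is a voluntary convention: it signals a rights holder’s wishes but does not technically prevent scraping, which is one reason many rights holders consider it insufficient on its own.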

Meanwhile, 35 performing arts leaders in the UK, including the bosses of the National Theatre, Opera North and the Royal Albert Hall, have signed a statement outlining their concern about the government’s plans to let AI companies use artists’ work without permission. The consultation has also captured the attention of influential public figures in the creative industries such as Sir Elton John and Sir Paul McCartney who fear that the new laws will allow technology to “take over” and shift wealth from the arts to the tech sector.

Codes of Practice

Cyber Governance Code of Practice

In April, the government also released a new Cyber Governance Code of Practice[3], which formalises how the government expects organisations to govern cyber risk and build resilience. The Code has been tailor-made for boards and directors in both the public and private sectors and, while it is not currently mandatory, it has been designed so that proactive implementation of its guidelines will help organisations meet future regulations with minimal disruption, while improving overall cyber resilience. Unlike more technical frameworks, the Code speaks directly to governance, accountability and strategic oversight, and should be taken as a call to action for senior leaders to start preparing for the UK Cyber Resilience Bill, which we reported on in our last bulletin.

Cybersecurity is being addressed on both sides of the Channel, with the EU also taking steps to protect against cyber-attacks and improve cybersecurity. This is shown by the EU’s Cybersecurity Strategy and the Cyber Resilience Act, which aims to build resilience and trust across member states, focusing on critical sectors such as transport and health, and by the EU Cyber Solidarity Act, which came into effect on 4 February 2025 and seeks to enhance responses to cybersecurity threats across the EU.

Code of Practice for the Cyber Security of AI

In addition, the Department for Science, Innovation and Technology (DSIT) has published a voluntary Code of Practice (CoP) for the cybersecurity of AI. The CoP sets out baseline cybersecurity principles to assist in securing AI systems and the organisations which develop and deploy them.[4]

The CoP will be used to help create a global standard through the European Telecommunications Standards Institute (ETSI). The Government’s intention with the CoP is to re-emphasise expectations that software needs to be secure by design, and to provide clarity on what baseline security requirements should be implemented to protect AI systems.

Software provider fined £3m following ransomware attack[5]

On 27 March 2025, the ICO announced that it has fined Advanced Computer Software Group Ltd (ACSG), Advanced Health and Care Ltd (AHC) and its parent company Aston Midco Ltd (Aston) (collectively, Advanced) a total of £3,076,320 for security failings which put the personal data of 79,404 people at risk following a ransomware attack in August 2022.

ACSG is an IT and software service provider that provided services to the NHS as well as other healthcare providers, acting as a data processor. The fine is in response to a ransomware incident in which hackers gained access to Advanced’s health and care subsidiary through a customer account with no multi-factor authentication (MFA). The incident was widely reported at the time and caused major disruption to NHS and other social care services. The breach itself was significant, as the hackers were able to access the phone numbers, medical records and methods of entry to the homes of those receiving care at home.

The ICO found that a lack of appropriate security measures, such as gaps in its deployment of MFA, contributed to the attack. Last year, the ICO announced a provisional decision[6] to impose a fine of £6.09m, but it has since reduced this as a result of Advanced’s active cooperation with the National Cyber Security Centre (NCSC), the National Crime Agency (NCA) and the NHS following the incident, and other action taken to mitigate the harm to data subjects.

The ICO warns that the fine is “a stark reminder that organisations risk becoming the next target without robust security measures in place”. Robust security measures include:

  • Comprehensive MFA (or an equivalent measure);
  • Regular vulnerability scanning; and
  • Adequate patch management.
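
For technically minded readers, the sketch below illustrates what comprehensive MFA means at the level of a single login flow, using the open-source Python pyotp library (an implementation of the TOTP one-time-password standard, RFC 6238). The flow is a hypothetical illustration and is not drawn from the ICO’s decision; the key point is that a correct password alone is never sufficient.

    import pyotp  # open-source one-time-password library (TOTP, RFC 6238)

    # At enrolment, each user receives a per-user secret, stored
    # server-side and loaded into their authenticator app.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    def login(password_ok: bool, otp_code: str) -> bool:
        # MFA: the password check and the one-time code must BOTH pass.
        return password_ok and totp.verify(otp_code)

    # A valid code from the authenticator app succeeds; a stale or
    # guessed code fails even when the password is correct.
    print(login(password_ok=True, otp_code=totp.now()))

The ICO’s criticism of Advanced was that protections of this kind were not applied consistently across all customer accounts; attackers need only one unprotected entry point.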

Apple’s encryption row with the UK Government

Apple has decided to discontinue its Advanced Data Protection (ADP) tool for UK customers following a dispute with the UK government over security measures. The tool uses end-to-end encryption, allowing only account holders to view their iCloud-stored items.
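
As a heavily simplified sketch of the idea behind end-to-end encryption (Apple’s actual scheme is far more elaborate, and the key handling below is illustrative only, using the open-source Python cryptography library), consider:

    from cryptography.fernet import Fernet  # symmetric encryption

    # The key is generated and kept on the user's device; the service
    # provider never sees it.
    device_key = Fernet.generate_key()
    ciphertext = Fernet(device_key).encrypt(b"private photo bytes")

    # The provider stores only `ciphertext`. Without `device_key` it
    # cannot recover the plaintext, and so cannot produce readable
    # data on demand.
    plaintext = Fernet(device_key).decrypt(ciphertext)
    assert plaintext == b"private photo bytes"

This is why only the account holder can decrypt their iCloud-stored items, and why the provider itself cannot hand over readable data.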

In early February, the UK government served a request on Apple under the Investigatory Powers Act demanding access to user data covered by ADP, data which Apple itself is unable to access. Apple has since decided to withdraw ADP in the UK, and as of 3pm on 21 February, new users were no longer able to enable the ADP tool. In a statement following the decision, Apple said that the removal of ADP would make users more vulnerable to data breaches and privacy threats.

Privacy campaigners have criticised the UK Government’s actions, saying that they will weaken security without benefiting intelligence operations. Sources say that Apple has appealed to the Investigatory Powers Tribunal to overturn the Home Office’s demand. The Home Office sought for the case to be heard in secret, arguing that making the details public could damage the public interest and national security. However, in a recent legal ruling it was ordered that the case would not be heard in secret, as the dispute was a matter of significant public interest and could have implications for the privacy and security of millions around the world.

Apple is not the only company in the UK grappling with the Investigatory Powers Act. Signal has recently declared that if the UK government tries to undermine its encryption, it will withdraw from the UK market entirely. These developments highlight the tension between the UK government’s security policies and the tech industry’s efforts to protect customer privacy through advanced encryption tools.


[1] Artificial Intelligence (Regulation) Bill [HL] – Parliamentary Bills – UK Parliament

[2] Prime Minister sets out blueprint to turbocharge AI – GOV.UK

[3] Cyber Governance Code of Practice – GOV.UK

[4] Code of Practice for the Cyber Security of AI – GOV.UK

[5] Software provider fined £3m following 2022 ransomware attack | ICO

[6] Provisional decision to impose £6m fine on software provider following 2022 ransomware attack that disrupted NHS and social care services | ICO