FAST Legal Update Member Bulletin – January 2025

In this newsletter, we explore some recent developments in the CJEU, with the EU court ruling on copyright protection for software variables. We also consider legislative updates, with a progress update on the Cyber Security and Resilience Bill, and a new bill of interest in the form of the Data Use and Access Bill. This issue also covers a range of updates and action in the artificial intelligence space, including the launch of a consultation on the application of UK copyright law to the training of AI models, and new initiatives seeking to ensure the safe use of AI. Finally, we turn to recent cyber incidents of note; while there are always too many to cover in one issue, we shall focus on those particularly relevant to you.

Legislative Updates

Cyber Security and Resilience Bill

Background

As covered in the previous issue, the Cyber Security and Resilience Bill was announced on 17 July 2024. The intention of the Bill is to strengthen the UK’s defences and ensure that the most essential digital services are secure and protected, following a notable increase in sophisticated attacks on the UK’s digital economy.

The Bill broadens the remit of UK regulations, with the aim of offering more protection to digital services and supply chains. In particular, UK regulators will have increased powers to ensure suitable measures are being implemented and to investigate potential vulnerabilities, while organisations must comply with reporting requirements that apply to a wider range of incidents.

The Bill was followed in September by an announcement that UK data centres will be designated as Critical National Infrastructure (CNI). This is the first CNI designation in almost a decade, allowing the government to support data centres closely both in anticipation of critical incidents and when they occur. The CNI designation includes a dedicated team of government officials monitoring and anticipating potential threats, greater access to the National Cyber Security Centre, and strengthened contingencies in the event of an attack on a data centre of significant importance to the UK.

Timeline set

It has now been confirmed by the Department for Science, Innovation and Technology (DSIT) that the Cyber Security and Resilience Bill will be introduced in Parliament in 2025. The Bill has already gathered a lot of interest, especially given the increased burden that mandatory incident reporting will place on organisations, and speculation that the Bill will include a reporting obligation for ransomware incidents. DSIT have advised that they will engage with stakeholders again in due course, likely after the Bill is tabled in the first session of the new Parliament. As a note, DSIT have already run a consultation on the Bill, which closed on 21 November 2024.

Organisations should, of course, monitor developments closely over the course of 2025, given the limited information available to date. As with other regulatory changes, compliance may place a substantial time burden on organisations, so making early arrangements for a compliance programme will be beneficial once more details have been published.

Data Use and Access Bill

The Data Use and Access (DUA) Bill is the latest proposal for reform of data regulation in the UK and the most significant legislative development of recent months. The Bill is the third attempt by successive governments to reform the UK’s data protection framework post-Brexit. It covers similar ground to the Data Protection and Digital Information Bill (proposed by the previous government), with some of the more controversial proposals removed.

The Bill is currently making its way through the legislative process at a fast pace: it was introduced to the House of Lords on 23 October 2024, had its second reading on 19 November, and at the time of writing is at committee stage in the Lords.

The Bill is divided into two key parts: one looking at data use in the economy, and another focusing on more traditional data protection regulation. The former seeks to encourage smart data usage through mechanisms such as digital identity verification services and a new smart data regime, which allows data to be shared with authorised third-party providers (ATPs).

Key changes

Despite some initial criticism that the Bill does not do enough to regulate AI, it is worth noting that it is not entirely silent on the subject. Data controllers will be interested in the proposed changes to automated decision-making rules. The key headline is that the changes will make it easier for organisations to use automated decision-making, particularly in lower-risk situations.

The changes also provide additional protections for data subjects where the decision-making involves special category personal data. In these situations, controllers will not only be required to give the individual information about the decision, but also to provide the individual with the opportunity to: (a) make representations; (b) request human intervention; and (c) contest the decision. The approach in the Bill signifies a departure from the EU GDPR, so in this respect it may create challenges for organisations with an international workforce or client base, where different procedures and processes will need to be followed in different markets.

One area where the Bill does reduce regulatory burden is legitimate interests. The Bill introduces a list of recognised legitimate interests in what is a more business-friendly approach, essentially removing the burden of carrying out the detailed assessment and balancing test for specified purposes; for example, where a business is processing data in an emergency or to safeguard vulnerable individuals. In these cases, there will be a presumption of legitimate interest, meaning data can be shared more quickly and efficiently when necessary.

The Bill will also abolish the Information Commissioner’s Office, along with the statutory role of the Information Commissioner, with both being replaced by a new Information Commission. Changes are also proposed to the regulator’s governance structure, objectives and duties, powers of enforcement, reporting requirements, complaints process and statutory codes of practice. Additionally, the Bill will provide more flexibility for data processing for scientific research.

There has also been a change to the fine regime where electronic marketing rules are broken: breaches can now lead to fines matching those under the UK GDPR (£17.5 million or 4% of annual global turnover, instead of the current maximum fine of £500,000). As this has been an area where the ICO have been interventionist in recent times, it will be one of interest for businesses.

Analysis

In areas of compliance where organisations carry a large burden and may have hoped for change (for example, with data subject access requests and data protection impact assessments), no changes have come to pass. Some of the earlier proposed changes to data protection law have also not appeared in this version of the Bill, potentially due to fears that pushing through more ambitious reform could cause a significant divergence from the EU’s approach and affect the UK’s data adequacy decision with the EU. This was seen as a risk after the European Parliament expressed concerns regarding the previously proposed Data Protection and Digital Information Bill. Removal of the UK’s data adequacy decision would create significant compliance hurdles for organisations when transferring data to and from the EU, so less ambitious reform is positive in this respect.

It seems that the Labour Government is particularly keen to progress the Bill, laying out an ambitious timetable to get the law onto the statute books. This will be welcomed by organisations: there has been uncertainty over the last couple of years as a result of repeated failed legislative reforms in the data protection space, which has led organisations to hold off making significant changes to their data compliance regimes for fear of duplicating effort when new legislation eventually passes. Getting the Bill onto the statute books quickly will allay fears of another failed reform attempt and give organisations greater certainty.

Otherwise, the progress of the DUA Bill should be tracked for any changes to its scope and legal requirements as it passes through Parliament. Businesses will need to prepare early and future-proof their data practices.

Case Law Update

Sony Computer Entertainment Europe Ltd v Datel Design and Development Ltd (C-159/23)[1]

In a very interesting case, the European courts have provided more clarity on copyright protection for computer games. On 17 October 2024, the Court of Justice of the European Union (the CJEU) ruled that rights holders cannot prevent others from selling software that only makes temporary changes to variables stored in the RAM of a game console.

Brief background

Sony Computer Entertainment Europe Ltd (‘Sony’), creator of the PlayStation Portable (the PSP, a well-known handheld game console), claimed that products developed and distributed by Datel altered Sony’s software in a manner which infringed its copyright. Following multiple appeals, the dispute was referred to the CJEU. The question for the CJEU was whether Datel’s software infringed Sony’s exclusive right to carry out or authorise alterations to its computer programs.

This action was brought as Sony is the exclusive licensee for PlayStation game consoles and the games used on them. Prior to 2014, the PSP was marketed by Sony. Datel develops products that complement Sony’s game consoles, such as the Action Replay PSP software and the Tilt FX device. When plugged into the PSP via a USB stick, these products allow users to access additional game options and features which bypass specific software restrictions and would normally be unavailable to regular users of the PSP. For example, the Action Replay software could unlock, at the start of many different video games, several characters that Sony intended to be accessible only at a later stage, while the Tilt FX device provided users with a sensor that, when connected to the PSP, enabled motion control.

The judgment

Settled case law provides that computer programs are protected by copyright as literary works, to the extent that they are the author’s intellectual creation. The CJEU considered two things: (1) the effect of Datel’s products on the PSP software’s source code and object code; and (2) the functionality and use of features of computer programs.

The source code and object code (the Code) of the PSP software were considered to be a form of expression of the computer program, because the Code allows the program to be reproduced or subsequently recreated. As such, the Code benefitted from the protection granted by Directive 2009/24/EC (the Directive), which protects against “any other alteration” of computer programs. However, the Directive does not protect the functionalities of a computer program or the means by which PSP users make use of its features.

Applying this to Sony’s claim, Datel’s software ran alongside the PSP’s software and did not alter the Code. Instead, it simply changed the ‘content of the variables’ temporarily transferred from a Sony game to the RAM of the PSP. Because Datel’s products only changed that in-memory content, and did not reproduce or alter the protected software itself, there was no infringement of the Code.
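
To make the distinction concrete, below is a deliberately simplified Python sketch. It is entirely hypothetical (it does not reflect how PSP software actually works): the ‘cheat’ rewrites a value the program holds in memory while it runs, but never touches the program’s own code, which is the protected form of expression.

```python
# Hypothetical sketch of the distinction the CJEU drew. The "cheat"
# changes in-memory state at runtime; the Game class's code (the
# protected expression) is never reproduced or rewritten.

class Game:
    """Stand-in for a protected program: its code is the copyright work."""

    def __init__(self):
        # State held in RAM while the game runs; it changes with play.
        self.unlocked_characters = ["starter_hero"]

    def unlock_later_character(self):
        # The author's intended progression, fixed in the program's code.
        self.unlocked_characters.append("secret_boss")


def cheat_tool(running_game):
    """Runs alongside the game (conceptually like Action Replay):
    it alters a variable in memory, not the Game class's source code."""
    running_game.unlocked_characters.append("secret_boss")  # unlock early


game = Game()
cheat_tool(game)                   # variable in RAM altered...
print(game.unlocked_characters)    # ['starter_hero', 'secret_boss']
# ...but Game's code is untouched: no reproduction or alteration of
# the protected Code, which is why the CJEU found no infringement.
```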

Key takeaways

Despite the CJEU’s ruling, the legal protection available to rights holders under the Directive will depend on the particular facts of each case. However, third-party software accessory providers will be relieved by the CJEU’s ruling that, as long as the ‘cheating software’ (as the CJEU’s Advocate General described it) is limited to changing variables stored in the computer’s RAM, and does not change the underlying object or source code or allow for the reproduction or subsequent creation of the copyright-protected software, there is no copyright infringement.

The judgment deals a major blow to Sony and other video game console providers: there is no recourse to copyright infringement if it cannot be proven that the third-party software changes the underlying object or source code. It is not enough that the third-party software allows computer programs to be used differently from how they were originally designed to be used. The ‘expression’ of a program is protected under copyright law, but the underlying principles and ideas are not. Here, the modifications to the variables stored in the console’s RAM were temporary and transient, triggered by the user’s interaction with the game; they did not affect the protected source and object code and did not allow for reproduction. As such, there was no infringement in this case.

Providers of other software products should be aware of this judgment and the potential for wider ramifications to emerge as a result. There may be potential for other third-party software accessory providers to create products which change the use of original software, in sectors other than video games, with less fear of legal action by large software providers. However, each case would still depend on the facts and there are other avenues for software providers to seek legal protections (e.g. copyright protection for a user interface).

It should be noted that, following Brexit, UK courts do have the power to depart from CJEU decisions. Additionally, CJEU case law now has the status of guidance for UK courts rather than binding them. While we have not yet seen many departures from the EU approach to intellectual property law, divergence is possible if the UK courts see sufficient reason for it.

Copyright and AI: consultation launched

On 17 December 2024, the UK Government launched an open consultation on the application of UK copyright law to the training of AI models. The consultation acknowledges the current tension between the creative industries, who wish to control the use of their works and be suitably remunerated, and AI developers, who struggle to navigate UK copyright law. The UK Government’s view is that this legal tension is both undermining AI adoption in the UK and slowing down innovation.

As context, there has been a growing debate around the scope of the UK’s Text and Data Mining (TDM) exception under copyright law. As it currently stands, the TDM exception allows copyrighted content to be reproduced only for the purpose of non-commercial research, where there is “lawful access” to the work. This is narrower and more restrictive than the TDM exceptions (or their equivalents) in other jurisdictions. As such, AI developers who want fewer restrictions on their access to training data have been calling for an “any purpose” TDM exception, an approach supported by the previous government. However, that plan was abandoned following feedback from the creative industries, and discussions regarding a voluntary code of conduct never produced a positive outcome.

Under the new government, and before the 17 December publication, there were indications that change was afoot, in particular to allow text and data mining to be conducted as part of commercial activities (with some safeguards), which would align the UK with the EU approach. This received pushback from the chair of the Culture, Media and Sport Committee (Conservative MP Dame Caroline Dinenage), who suggested that a better approach would be to require AI developers to license copyrighted content, which would in turn incentivise creators.

Proposed approach

As a potential compromise, the UK Government have proposed changes which would allow rights holders to reserve their rights (“opt out”) and then choose whether to license them, while broadening the TDM exception to allow AI developers to use a greater range of material in training their models. The amended exception would apply for any purpose (including commercial purposes), but only where the user has lawful access to the works and the rights holder has not reserved their rights. The burden of opting out would sit with the rights holder, which will be controversial and is likely to attract feedback from the creative industries given the administrative burden it will bring.

The proposal would allow AI developers to lawfully use more material to train their models in the UK, which the government sees as a way of discouraging AI developers from choosing to train their models in other, less restrictive jurisdictions.

The consultation emphasises that this approach would be underpinned by greater transparency from AI developers as to how material is acquired and used by their models, and that legislation to ensure transparency could follow soon. For example, the EU’s approach in the EU AI Act (under which developers must provide a “sufficiently detailed summary” of training content) is noted in the consultation. Accordingly, the Government proposes specific transparency obligations which would let rights holders understand whether their works have been used to train AI models, and if so, to what extent.
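
The consultation does not prescribe what a “sufficiently detailed summary” should look like. Purely as an illustration, the Python sketch below shows one plausible shape for such a machine-readable summary, aggregating training sources by domain; the field names and model identifier are our assumptions, not anything proposed in the consultation.

```python
# Hypothetical sketch of a machine-readable "summary of training
# content" that transparency rules might require. The format and all
# field names below are assumptions; no standard currently exists.
import json
from collections import Counter
from urllib.parse import urlparse


def summarise_training_sources(document_urls):
    """Aggregate per-domain counts so rights holders can check
    whether their sites appear in a model's training corpus."""
    domains = Counter(urlparse(u).netloc for u in document_urls)
    return {
        "model": "example-model-v1",          # hypothetical identifier
        "total_documents": len(document_urls),
        "sources": [
            {"domain": d, "documents": n}
            for d, n in domains.most_common()
        ],
    }


if __name__ == "__main__":
    corpus = [
        "https://news.example.co.uk/story-1",
        "https://news.example.co.uk/story-2",
        "https://blog.example.org/post",
    ]
    print(json.dumps(summarise_training_sources(corpus), indent=2))
```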

The consultation also touches upon how the Government could seek to manage the variations between models trained in different jurisdictions, and the appropriate methods for ensuring compliance with UK law on the training of AI models. The consultation also seeks views and input on the following subjects:

  • Encouraging research and innovation (including by research organisations);
  • The temporary copies exception (for copies made during technological processes);
  • Use of AI in education;
  • Labelling of AI outputs; and
  • Control over digital replicas (deepfakes).

Interestingly, the consultation also suggests that infringing outputs may not be an issue, with the Government assessing that many AI models do not include the creative expression of copyrighted works in their output. This is despite ongoing litigation in Getty Images v Stability AI, which we will discuss in a future issue once a decision has been made. The Government’s view is that the copyright framework on infringing outputs is “reasonably clear and… adequate”. The Government also believes that greater transparency around AI models will reduce the number of infringing outputs.

Next steps

The consultation sheds some light on how a domestic copyright framework for AI model training might look, as well as signalling the government’s clear intent to strike a compromise between the interests of the creative industries and AI developers, and to bring the UK closer in line with the EU’s approach.

The deadline for responses to the consultation is 25 February 2025. Given the number of organisations potentially affected, the consultation is likely to attract a lot of interest, and with it a significant degree of stakeholder engagement, which could delay matters as responses are considered. The Government will also consider drafting legislation, in particular to encourage AI developers to be more transparent in sharing information about how their models gather and use information.

As well as the time needed to consider responses and draft legislation, the Government will need to give more thought to the technical means by which rights holders can exercise their ability to “opt out” of text and data mining for commercial purposes. The control rights holders seek must be granular: many want their works to be searchable, but not used for training AI models, a problem already discussed and found to be difficult within the EU. The Government accepts in the consultation that there is a lack of standardisation of controls for an opt-out system in this context, and the available technical standards tend not to provide the granular control required for an effective opt-out system. As such, the UK will need to factor current technical limitations into any final proposals, as illustrated in the sketch below.
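
To illustrate that standardisation gap, here is a minimal Python sketch of how a training-data crawler might check for machine-readable reservations before using a page. It combines robots.txt (the de facto crawling convention) with the ‘tdm-reservation’ HTTP header from the draft W3C TDM Reservation Protocol; the user-agent name is hypothetical, and neither mechanism by itself delivers the purpose-level granularity (searchable but not trainable) described above.

```python
# Minimal sketch: how an AI crawler might honour machine-readable
# opt-outs before using a page as training data. Illustrative only;
# there is no agreed standard. "tdm-reservation" comes from the draft
# W3C TDM Reservation Protocol; the user-agent name is hypothetical.
import urllib.request
import urllib.robotparser
from urllib.parse import urljoin, urlparse

CRAWLER_NAME = "ExampleTrainingBot"  # hypothetical user agent


def may_use_for_training(url):
    base = "{0.scheme}://{0.netloc}".format(urlparse(url))

    # 1. Respect robots.txt rules for our user agent.
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(urljoin(base, "/robots.txt"))
    try:
        robots.read()
        if not robots.can_fetch(CRAWLER_NAME, url):
            return False
    except OSError:
        pass  # robots.txt unreachable; fall through to the header check

    # 2. Check for a TDM reservation on the resource itself.
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": CRAWLER_NAME})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            if resp.headers.get("tdm-reservation", "0").strip() == "1":
                return False  # rights holder has reserved TDM rights
    except OSError:
        return False  # be conservative if the check fails

    return True


if __name__ == "__main__":
    print(may_use_for_training("https://example.com/article"))
```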

From Innovation to Regulation: UK and AI governance

AI governance remains a hot topic for organisations across the world as they come to terms with the risks involved in using AI. With the DUA Bill largely quiet on AI regulation, as detailed above, the UK Government have been keen to push through developments that promote AI innovation while implementing measures to support AI safety initiatives.

This is particularly so in light of the Labour Government’s statement in the King’s Speech that it would regulate the “most powerful AI models”, as well as the ‘Assuring a responsible future for AI’ report published on 6 November 2024[2]. The report highlights that the AI assurance market will continue to grow, with the potential to exceed £6.53bn by 2035[3], and suggests a range of developments to promote AI assurance, such as further research and the development of tools for organisations to use.

Ensuring trust in AI

One such initiative is DSIT’s AI Assurance Platform (the Platform). The Platform will give businesses a single place to find guidance and practical resources to help them identify and mitigate the potential risks and harms posed by AI.

By way of example, DSIT has suggested that resources would cover how businesses can conduct AI impact assessments and evaluations, and how to review data used in AI systems to check for bias. Resources would include existing guidance from DSIT, as well as new developments like the AI Management Essentials (AIME) tool which will help organisations implement responsible AI management practices and decision-making. A consultation is currently taking place on the development of the AIME tool, running until 29 January 2025[4].

As part of the announcement, DSIT also confirmed that the UK AI Safety Institute, a government body dedicated to AI safety, had signed a new agreement with its Singapore counterpart to formalise collaboration between the two states on AI safety. The two bodies will work together to create a shared set of policies, standards and guidance for AI safety in the UK and Singapore. We may see similar agreements signed with other countries heavily involved in AI development going forward.

Regulatory Innovation Office

In October, the Government also launched a new ‘Regulatory Innovation Office’ (the RIO)[5] to create a coordinated approach to regulation, including in the AI space. The RIO will support regulators in updating regulation, speed up approvals, and create opportunities for regulators to work together. While the RIO has been set up to focus on a range of fast-growing areas of technology, this does include AI within healthcare as well as autonomous technologies (such as drones).

More details on how the RIO will function, and its specific areas of focus, will be released in due course.

What’s next?

Looking forward, with growing calls for the UK Government to implement AI-specific legislation and regulation, it looks increasingly likely that AI legislation will be introduced next year, especially in light of the King’s Speech statement signalling the potential for upcoming AI regulation. The UK is seeking to be a global leader in AI safety, and we should expect further developments aimed at reaffirming this position.

UK collaboration on cybersecurity

As technology advances, organisations are proactively seeking to improve their protection against future cyberattacks. Responding to attacks in a timely fashion is useful, but having the systems and structures in place to prevent attacks is critical. The UK Information Commissioner’s Office (the ICO) and the UK National Crime Agency (the NCA) are determined to help UK organisations improve their protection, whether through direct engagement or through guidance that organisations can use to protect themselves.

The ICO’s and NCA’s collaboration

On 10 September 2024, the ICO announced further collaboration on cyber security between the ICO and the NCA. A Memorandum of Understanding (MoU) was signed between the two bodies, setting out and formalising how they will work together to create a more secure and resilient cyber ecosystem.

The MoU is a framework for cooperation and information sharing, documenting how the ICO and the NCA will work together in the following areas: (1) improving cybersecurity for regulated organisations; (2) information sharing for organisations that have engaged the ICO or NCA and are responding to cyber threats and incidents; and (3) incident management between the ICO and NCA when, for example, an organisation reports an incident to the ICO and the report is relevant to, or should have been made to, the NCA.

The ICO will collaborate with the NCA to develop its cyber security standards and guidance for the benefit of the organisations they regulate. Additionally, the bodies have agreed not to share information between themselves regarding the organisations they are engaging with, unless:

  • it is permitted by law;
  • the ICO and NCA obtain consent from the organisations; or
  • it is appropriate in light of the facts of each incident.

In any case, any information shared will be anonymised. Given the nature of cyber attacks and the urgent need to respond quickly, a streamlined system for information sharing between the ICO and NCA is a positive development in allowing organisations to respond to attacks efficiently and effectively.

Going forward

The MoU is a show of commitment to cybersecurity and to creating a safer digital environment. The arrangements promote learning and the provision of consistent guidance, and offer a new avenue for organisations to provide feedback to the agencies. The MoU fosters an effective working relationship and, because it is to be reviewed every two years, there is scope for continual improvement of that relationship.

The government is seeking to put its house in order so that organisations can have a better and more candid line of communication with the ICO and NCA on cybersecurity. While this will not immediately create a fully integrated and streamlined process for engaging the two bodies following a cyber incident, it is a positive step towards promoting efficient preventative action and an effective response to cyber attacks.

Recent cyber incidents: lessons in resilience

While the Government takes steps towards boosting cybersecurity, this quarter we have continued to see cyber attacks, and with them reminders for companies and governmental bodies to build robust protections and resilience against a range of threats. As described below, we are increasingly seeing data being sold on the dark web, and new sites and platforms being used to facilitate these transfers.

Major Financial Software Company Suffers Data Breach

Finastra, a fintech company providing software solutions to the majority of the world’s leading banks, is currently investigating the scale of a data breach detected on 7 November 2024, after the company’s Security Operations Centre discovered interference with its Secure File Transfer Platform (SFTP), which was used to send files to customers securely. The stolen data was then put up for sale on BreachForums, an underground site regularly used for trading information obtained through cybercrime.

Although the extent of the breach is still being actively investigated, it has been reported that 400GB of data was stolen. It is thought that the stolen data comprised client data, including transaction histories and financial records, as well as confidential internal documents such as employee credentials and Finastra’s internal policies and processes.

Finastra has confirmed that the breach was not the result of a malware or ransomware attack, but instead of stolen login credentials (i.e. an unauthorised individual simply using somebody else’s username and password to access internal systems). Single-factor authentication (a basic username and password combination) proved to be a key contributor to the ease with which the attacker was able to access the SFTP.

The breach therefore serves as a useful reminder for companies to ensure that, alongside putting in place (and following) internal processes and governance around login credentials and password sharing, they consider introducing two-factor or multi-factor authentication.
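
For readers wanting a feel for what the second factor adds, below is a minimal sketch of a time-based one-time password (TOTP) check, the mechanism behind most authenticator apps, written against RFC 6238 using only the Python standard library. This is illustrative only, not a recommendation of a specific implementation; production systems should rely on a vetted library and secure secret storage.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
# Illustrative only: real deployments should use a vetted library
# and store the shared secret securely.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, timestep=30, digits=6, at=None):
    """Derive the one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // timestep)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def verify(secret_b32, submitted, window=1, timestep=30):
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, timestep, at=now + i * timestep),
                            submitted)
        for i in range(-window, window + 1)
    )


if __name__ == "__main__":
    shared_secret = "JBSWY3DPEHPK3PXP"  # example base32 secret only
    code = totp(shared_secret)
    print("Current code:", code)
    print("Verified:", verify(shared_secret, code))
```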

Blue Yonder Suffers Ransomware Attack

Blue Yonder, a supply-chain management software platform and independent subsidiary of Panasonic, was the subject of a ransomware attack in November. The attack affected a wide range of sectors and companies for which Blue Yonder provides software management systems. Most significantly affected were:

  • Morrisons, whose warehouse management system was corrupted – meaning they were unable to track produce and fresh food;
  • Starbucks, as the system managing employee rotas was disrupted; and
  • Pen manufacturer BIC.

Similarly to the Finastra breach described above, data obtained through the hack appeared on the dark web. The data, which comprised Blue Yonder employee and customer credentials, was posted on an underground site which had only been operating since October. Termite, the hacking group which claimed responsibility for the attack, announced that 680GB of data had been compromised and stolen, although this has not been confirmed.

The breach highlights the widespread effect these hacks can have beyond the company itself where the victim is a key supplier to other businesses, and serves as another example of the growing market for stolen data on the dark web.


[1] EUR-Lex, Case C-159/23 (CELEX: 62023CJ0159)

[2] https://www.gov.uk/government/publications/assuring-a-responsible-future-for-ai/assuring-a-responsible-future-for-ai

[3] https://www.gov.uk/government/publications/assuring-a-responsible-future-for-ai/assuring-a-responsible-future-for-ai

[4] https://www.gov.uk/government/consultations/ai-management-essentials-tool

[5] https://www.gov.uk/government/news/game-changing-tech-to-reach-the-public-faster-as-dedicated-new-unit-launched-to-curb-red-tape