Legislation Update
AI Regulation Bill[1] delays
The UK Government has encountered yet another roadblock in its efforts to introduce the long-anticipated Artificial Intelligence Regulation Bill (the AI Reg Bill) designed to govern the growth and development of AI within the UK. Initially proposed back in 2023, the AI Reg Bill aimed to establish a comprehensive statutory framework for AI regulation by codifying essential principles around safety, transparency and accountability.
Following last year’s election victory, the Labour Government planned to introduce a streamlined version of the AI Reg Bill within months of taking office. However, this has now been postponed, likely until 2026 at the earliest. The Government faces a fine balancing act as it seeks to regulate fast-evolving technology whilst maintaining the UK’s position as a leader in AI innovation. This is despite mounting pressure for stronger regulatory oversight, with a recent survey reporting that 88% of UK citizens would support government intervention when AI poses serious risks[2]. Given the delays, some have speculated that the Government is aligning its approach with US policy, suggesting a preference for a more measured regulatory framework that balances innovation with the necessary safeguards.
The primary driver behind these delays is the Government’s ongoing struggle to balance comprehensive regulatory oversight against legitimate concerns about stifling innovation within the UK’s thriving technology sector. This tension has resulted in lengthy consultations with industry stakeholders and AI innovators, extending the legislative timeline. Whatever the market considerations, public sentiment strongly supports government intervention in this space, with three-quarters (75%) of respondents to a recent study believing that AI safety oversight should remain the responsibility of government bodies or established regulatory authorities, rather than being delegated exclusively to private sector companies operating with commercial interests.[3]
However, this prolonged uncertainty has created significant challenges for AI developers and users alike, as clear regulatory guidance will be important for the ethical development of AI. Industry bodies have increasingly voiced concerns about the current regulatory vacuum, particularly regarding AI applications deployed across sensitive sectors including healthcare, finance, and the criminal justice system.
To date, the Government has appeared to take a “hands-off” approach to AI regulation in order to attract international investment and avoid stifling technological growth, and this appears to be the strategy going forward. By choosing to delay, the UK may preserve its place at the forefront of AI development in the short term, but it runs the longer-term risk of eroding public trust in the legal institutions charged with safeguarding the public against irresponsible use of AI technology.
Data (Use and Access) Act 2025[4]
Background
After much back and forth between the House of Commons and the House of Lords, the highly anticipated Data (Use and Access) Bill finally received Royal Assent on 19 June 2025, becoming the Data (Use and Access) Act 2025 (the “DUAA”). The DUAA introduces significant amendments to the UK GDPR, the Data Protection Act 2018, and the Privacy and Electronic Communications Regulations 2003, impacting multiple areas including but not limited to: scientific research; legitimate interests; direct marketing; automated decision-making; and international data transfers. The DUAA also enables the transition of open banking regulation to the FCA, facilitating the development of open finance. The UK Government’s aim for the DUAA is to promote innovation and economic growth whilst protecting the rights of individuals, and to generally “make lives easier”.[5]
Most provisions will come into force only once the Secretary of State makes specific regulations, and the anticipated Smart Data scheme for open banking/open finance is expected to be among the first of these. However, certain provisions took effect immediately: Section 78 (reasonable and proportionate searches for data subject access requests); Sections 126-128 (biometric data retention); and provisions relating to energy smart meter communication licences.
The DUAA marks a significant shift in how personal and organisational data can be shared and accessed, with the Act establishing clear rules around who can use data, for what purposes, and under what conditions. Notably, the final version of the Act did not include the anticipated and highly debated amendments on AI and copyright, dropping proposed transparency requirements for AI training data and copyrighted materials.
Key changes
Open banking/open finance
The DUAA gives the Science and Technology Secretary and HM Treasury the power to introduce new Smart Data schemes through regulations which will specify the scope of each scheme. These regulations will govern: (1) who is required to provide data; (2) what data they are required to provide; (3) how and when they must provide data; and (4) how data is secured and protected. The introduction of these regulations is intended to create the right conditions to support the future of open banking and the growth of new Smart Data schemes.
Automated decision-making
The DUAA creates a less restrictive framework under the UK GDPR for organisations making decisions based solely on automated processing of personal data (“ADM”). The previous restrictions on ADM, such as the requirement that ADM be necessary for the purposes of a contract between the person and the organisation, have been removed and replaced with appropriate safeguards, such as enabling the person to contest the decision. Organisations may therefore make solely automated decisions in a wider range of circumstances, provided those safeguards are in place.
However, the restriction on the use of special category data has not been removed: to use such data for ADM, organisations must still obtain the data subject’s consent, or use ADM (with safeguards) only where necessary for reasons of substantial public interest under data protection law.
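To illustrate how these conditions interact, the sketch below models the decision logic in Python. It is a simplified, illustrative rendering only, with a hypothetical function name and parameters of our own; the binding rules are those in the DUAA and accompanying ICO guidance.

```python
# Illustrative sketch only: parameter names are hypothetical and the rules
# are simplified; the binding position is set by the DUAA and ICO guidance.
def adm_permitted(uses_special_category_data: bool,
                  has_safeguards: bool,
                  has_consent: bool = False,
                  substantial_public_interest: bool = False) -> bool:
    """Return True if solely automated decision-making may proceed."""
    if not has_safeguards:
        # Safeguards (e.g. the ability to contest the decision) are
        # required in all cases.
        return False
    if uses_special_category_data:
        # Stricter rules persist for special category data: consent, or a
        # substantial public interest basis, is still needed.
        return has_consent or substantial_public_interest
    # Ordinary personal data: permitted in a wider range of circumstances,
    # provided safeguards are in place.
    return True

# Example: ADM on special category data with safeguards but no consent and
# no substantial public interest basis would not be permitted.
print(adm_permitted(True, True))  # -> False
```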
Digital verification
The DUAA creates a regulatory framework for digital verification services (“DVS”), requiring the Secretary of State to set out rules around trusted digital verification tools. Organisations that offer identity verification can obtain certification against the relevant DVS trust framework and receive a “trust mark” indicating that they are DVS registered. The aim of these tools is to allow individuals to securely verify their identity online.
Data subject access requests
The DUAA clarifies that the time limit for an organisation to respond to a data subject access request (“SAR”) commences when the organisation receives: (1) the request; (2) any information requested to confirm the requester’s identity (if required); or (3) any fee charged for responding to a manifestly unfounded or excessive request.
The DUAA also provides scope for extending the time limit for responding to a SAR, for instance where the request is highly complex or there is a high volume of requests, and allows the time limit to be paused (“stopping the clock”) where controllers can demonstrate that clarification is reasonably required in order to respond to the SAR.
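As a rough illustration of how these timing rules fit together, the sketch below models a simplified SAR response clock in Python. The periods and mechanics are assumptions for illustration only (the statutory detail sits in the Act and ICO guidance), and the function and parameter names are hypothetical.

```python
from datetime import date, timedelta

# Illustrative figures only: the actual periods and conditions are set by
# the DUAA and ICO guidance; the values below are simplified assumptions.
BASE_PERIOD_DAYS = 30   # assumed ~one month to respond
EXTENSION_DAYS = 60     # assumed extension for complex / numerous requests

def sar_deadline(clock_start: date,
                 paused_days: int = 0,
                 extended: bool = False) -> date:
    """Return the (illustrative) date by which a SAR response is due.

    clock_start  -- the latest of: receipt of the request, receipt of any
                    identity information requested, or receipt of any fee
                    charged for a manifestly unfounded/excessive request.
    paused_days  -- days while the clock was "stopped" awaiting clarification.
    extended     -- whether the controller has invoked an extension.
    """
    period = BASE_PERIOD_DAYS + (EXTENSION_DAYS if extended else 0)
    return clock_start + timedelta(days=period + paused_days)

# Example: request received 1 July, identity confirmed 3 July (clock starts),
# clock paused 5 days awaiting clarification, no extension claimed.
print(sar_deadline(date(2025, 7, 3), paused_days=5))  # -> 2025-08-07
```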
Next steps
The next steps for the DUAA involve the implementation of secondary legislation and regulatory guidance, with the Government expected to publish detailed regulations and codes of practice in the coming months. Amendments to the UK GDPR and the Privacy & Electronic Communications Regulations are also anticipated to come shortly. Organisations should begin preparing for compliance with the new data sharing frameworks, whilst awaiting further clarity on specific implementation timelines and requirements from regulators.
Case Law Update
Wikimedia Foundation and another v Secretary of State for Science, Innovation and Technology[6]
Background and the law
The first claimant, Wikimedia Foundation, a charitable foundation registered in the United States, hosts the collaborative online encyclopaedia Wikipedia. The second claimant is an active user and editor of Wikipedia. Both claimants challenged the Secretary of State’s decision to make regulation 3, which prescribes the thresholds for Category 1 online services under the Online Safety Act 2023[7] (“OSA”).
Category 1 online services under the OSA are user-to-user services that have the highest reach and the highest risk functionalities, resulting in the greatest number of additional obligations under the OSA.
Ofcom conducted research and provided advice to define the OSA thresholds, focusing on content recommender systems and forward/share functionalities linked to viral dissemination, and the Secretary of State adopted Ofcom’s advice when defining these thresholds. The thresholds for categorising an online service as Category 1 are as follows: the service (1) has an average number of monthly active United Kingdom users exceeding 34 million and uses a content recommender system; or (2) has an average number of monthly active United Kingdom users exceeding 7 million, uses a content recommender system, and provides a functionality for users to forward or share regulated user-generated content on the service with other users of that service.
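Expressed as a simple decision rule, the thresholds might be sketched as follows. This is an illustrative rendering only, with hypothetical function and parameter names; the binding definitions are those in regulation 3 itself.

```python
# Illustrative sketch of the Category 1 threshold conditions; the binding
# definitions are those set out in regulation 3.
def is_category_1(monthly_uk_users: int,
                  has_recommender: bool,
                  has_forward_or_share: bool) -> bool:
    # Condition 1: > 34m monthly active UK users and a content
    # recommender system.
    if monthly_uk_users > 34_000_000 and has_recommender:
        return True
    # Condition 2: > 7m users, a recommender system, and a functionality
    # to forward or share regulated user-generated content.
    if monthly_uk_users > 7_000_000 and has_recommender and has_forward_or_share:
        return True
    return False

# Example: a 10m-user service with a recommender but no forward/share
# functionality would fall outside Category 1 on these criteria.
print(is_category_1(10_000_000, True, False))  # -> False
```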
The claimants argued that the thresholds applied in the OSA were too broad and would likely classify Wikipedia as a Category 1 service, disrupting the claimants’ operational model, which relies on anonymity and community-driven moderation. The claimants identified several potential incompatibilities between Wikipedia’s structure and the OSA requirements for Category 1 services, including user verification duties. Verification duties under the OSA allow users to choose only to encounter content from other users whose identity has been verified.
The court considered the following grounds of judicial review: (i) whether the Secretary of State breached paragraph 1(5) of Schedule 11 to the OSA; (ii) whether the decision was irrational; and (iii) whether the decision was incompatible with Articles 8, 10, and 11 of the European Convention on Human Rights. Paragraph 1(5) of Schedule 11 of the OSA requires the Secretary of State, when making the regulations, to take account of the likely impact of the number of users of a service and its functionalities on the ease, speed, and breadth of content dissemination, known as “viral dissemination”.
Judgment
The Administrative Court dismissed the claimants’ claim for judicial review of the Secretary of State’s decision to make regulation 3 of the OSA, which prescribes the criteria for Category 1 online services, a designation that attracts numerous additional statutory duties.
The court held that the Secretary of State had complied with paragraph 1(5) of Schedule 11 to the OSA by considering the likely impact of functionalities on viral dissemination. Although there were instances, such as the new pages feed, where a feature might amount to a content recommender system that neither promoted viral dissemination nor operated in conjunction with a forward and share functionality, this did not amount to a breach of paragraph 1(5).
As to the Ofcom advice, the court found that it needed to be read in the context of the advice as a whole, and concluded that the Secretary of State was well aware of, and had considered, these complexities when applying the advice. As such, the Secretary of State’s decision was not irrational.
The court also held that the claimants failed to establish “victim status” under the Human Rights Act 1998 on the basis that the claimants did not positively contend that Wikipedia was captured by regulation 3.
Forward-looking implications
Although the court ruled in Ofcom and the Secretary of State’s favour, it warned in its judgment that Ofcom and the Secretary of State should not implement a regime that would significantly impede Wikipedia’s operations without proportionate justification. The court also noted that Category 1 designation decisions are public law acts and accordingly open to judicial review if flawed, suggesting that the courts will endeavour to hold Ofcom and the Secretary of State accountable when implementing the OSA regime.
While there are only a limited number of Category 1 providers, growing opposition to the OSA should not be ignored. User verification requirements have already been shown to fundamentally disrupt the operational frameworks of large online service providers, to the frustration of organisations beyond Wikimedia. Additionally, 4chan has launched legal action against Ofcom in the US, seeking to prevent Ofcom from enforcing the OSA against it there. Similar pushback against the OSA is likely in future.
Codes of Practice
Software Security Code of Practice[8]
In May, the Department for Science, Innovation and Technology (“DSIT”) issued the UK’s first Software Security Code of Practice (the “Code”). The Code is a voluntary framework created to improve the security and resilience of the software that organisations and businesses rely on. It has been developed through consultation with the National Cyber Security Centre, industry experts and key stakeholders.[9]
The Code applies to organisations of all sizes and seeks to help software providers and users reduce the risk and consequences of supply chain cyber-attacks as well as software reliability failures. The Code is currently voluntary; however, it is likely to become mandatory over the next twelve months, as the UK has followed the EU’s approach of putting in place a defined regulatory framework for software security. Comprising 14 basic principles split across four main themes (secure design and development, build environment security, secure deployment and maintenance, and communication with customers), the Code outlines what software vendors are expected to implement to establish a consistent baseline of software security and resilience across the market.
Although the Code remains voluntary, the direction of travel suggests a future move to mandatory cybersecurity governance. Organisations should use this as an opportunity to take the initiative, rectify their software security gaps, and align with the Code before it becomes mandatory, or before a cyber incident forces them to play catch-up. Organisations that fail to adapt leave their software and data at risk, and any breach risks eroding the trust of their clients, partners, and data owners. For UK businesses, adopting the principles of the Code now is not just best practice but essential for safeguarding their security.
General-Purpose AI Code of Practice[10]
Background
On 10 July 2025, the European Commission published the final version of the long-awaited General-Purpose AI Code of Practice (the “GPAI Code”). The GPAI Code aims to help providers of general-purpose AI (“GPAI”) models comply with the legal obligations within the EU AI Act[11] (the “AI Act”). The focus of the GPAI Code is on safety, security, and transparency of GPAI models.
The GPAI Code is a voluntary guidance tool which the European Commission has confirmed to be an “adequate tool” for providers of GPAI models to demonstrate their compliance with the AI Act. The GPAI Code was developed by 13 independent experts with the input of over 1,000 stakeholders, including model providers, small and medium-sized enterprises, academics, AI safety experts, rightsholders, and civil society organisations.
The European Commission outlines that providers of GPAI models who voluntarily sign the GPAI Code will be able to demonstrate their compliance with the AI Act, reduce their administrative burden, and benefit from greater legal certainty than if they demonstrated compliance through other methods.
The GPAI Code consists of three chapters: (1) Transparency; (2) Copyright; and (3) Safety and Security. The Transparency and Copyright chapters offer guidance for all GPAI model providers on how to demonstrate compliance with their obligations under Article 53 of the AI Act. The Safety and Security chapter is relevant only to providers of the most advanced models, which are subject to the AI Act’s obligations for GPAI models with systemic risk under Article 55.
Transparency
Signatories of the GPAI Code commit to the following transparency measures:
- drawing up and keeping up to date model documentation about the GPAI model;
- establishing a public process of information sharing with the EU’s AI Office; and
- controlling the quality and integrity of information documented on the GPAI model.
Copyright
Signatories of the GPAI Code commit to the following copyright measures:
- drawing up, keeping up to date, and implementing a copyright policy;
- reproducing and extracting only lawfully accessible copyright-protected content when crawling the World Wide Web;
- identifying and complying with rights reservations when crawling the World Wide Web;
- mitigating the risk of copyright-infringing outputs; and
- designating a point of contact and enabling the lodging of complaints.
Safety and Security
Signatories of the GPAI Code commit to the following safety and security measures:
- creating, implementing, and keeping updated a state-of-the-art safety and security framework;
- identifying systemic risks stemming from the GPAI model through a structured process and developing systemic risk scenarios;
- analysing each identified systemic risk;
- specifying systemic risk acceptance criteria and determining whether the systemic risks stemming from the GPAI model are acceptable;
- implementing appropriate safety mitigations along the entire GPAI model lifecycle;
- implementing an adequate level of cybersecurity protection for GPAI models and their physical infrastructure along the entire GPAI model lifecycle;
- reporting information about the GPAI model and its systemic risk assessment and mitigation processes and measures to the AI Office, including by creating a Safety and Security Model Report before placing the model on the market;
- defining clear responsibilities for managing the systemic risks stemming from their models, allocating appropriate resources for these responsibilities, and promoting a healthy risk culture;
- implementing appropriate processes and measures for keeping track of, documenting, and reporting relevant information about serious incidents to the AI Office; and
- documenting the implementation of this safety and security chapter.
Objectives and next steps
The overarching objective of the GPAI Code is to improve the functioning of the internal market and to promote the uptake of human-centric and trustworthy AI, whilst ensuring a high level of protection for health, safety, and fundamental rights. To achieve this, the GPAI Code serves as guidance for demonstrating compliance with Articles 53 and 55 of the AI Act, helping GPAI model providers meet their obligations under the AI Act.
The GPAI Code is awaiting endorsement by Member States and the European Commission; once endorsed, the 26 GPAI model providers currently listed as signatories, including OpenAI and Google, will be able to demonstrate compliance with the relevant AI Act obligations by adhering to the GPAI Code.
UK businesses may wish to identify whether their model providers have signed up to the GPAI Code, as signing provides some assurance of those providers’ compliance with the EU AI Act, and therefore of the safety, security, and transparency of the models used. However, comprehensive vendor due diligence and the implementation of suitable contractual terms remain important whenever using a third-party AI model supplier.
Consultation Update
Consultation on enterprise connected devices
Earlier this year, the Government launched a consultation (the “Consultation”) seeking views on the security of enterprise connected devices (also known as Internet of Things or IoT devices).[12] IoT devices are internet-connected tools used to store, process, or transmit data, such as laptops, printers, and smart cameras. The goal of the Consultation is to strengthen organisations’ cybersecurity: malicious actors often exploit IoT devices, which typically lack strong security features, to gain access to a company’s IT systems. Alarmingly, 58% of UK businesses reported that they do not conduct security checks when procuring connected devices.[13]
The Government plans to develop a new voluntary Code of Practice for Enterprise Connected Device Security, created by DSIT and the National Cyber Security Centre and incorporating feedback from the consultation process, which ran from May until August. The proposed Code of Practice is built on 11 key security principles and will serve as the basis for several potential policy measures intended to be implemented by manufacturers and purchasers of IoT devices. Through the Consultation, the Government sought stakeholder input on these proposals and opened the floor to suggestions on how cybersecurity could be enhanced in this vein. The Government is also considering a new global standard for enterprise device security, as well as the possibility of legislation to ensure compliance if voluntary uptake proves insufficient.
What comes next?
With cyber-attacks on the rise, this is one threat the Government is seeking to face head-on. These voluntary reforms could be just the beginning: if the Government continues this strong approach to cybersecurity, hard regulation could follow, as the UK has recognised the cybersecurity risks posed by enterprise IoT. Without safeguards in place, such devices may well remain a chink in organisations’ armour.
Cybersecurity: arrests over the M&S, Co-op and Harrods cyber-attacks
This year, the UK retail sector has been under siege from a wave of highly sophisticated cyber-attacks. Household names such as Marks & Spencer, Co-op, and Harrods have all been hit, with the brands suffering operational disruption, stolen data, and significant losses in profits.
In July, the National Crime Agency arrested four individuals in connection with its investigation into the attacks, and the accused face potential charges under the Computer Misuse Act, as well as charges of blackmail, money laundering, and involvement in organised crime.[14] The attacks on such recognised brands continue to demonstrate that large companies with comprehensive and established security measures still remain vulnerable to determined cybercriminals. These attacks, however, are only the high-profile tip of the iceberg, as the cyber-attack crisis has impacted UK businesses of all sizes. Worryingly, four in ten businesses reported being the victim of a cyber-attack in the last twelve months.
Healthcare systems have also been under threat, as seen in June’s ransomware attacks on King’s College Hospital and Guy’s and St Thomas’ NHS Foundation Trust. Sadly, a patient’s death was partly attributed to a delay in blood test results resulting from a data breach, highlighting the life-threatening consequences cyber-attacks can have for healthcare infrastructure. Educational institutions and financial services firms have also been hacked and had valuable data stolen.
Cybercrime is clearly a growing threat, and businesses need more protection than ever before. Businesses should conduct a thorough review of their current cybersecurity provisions and implement training focused on developing greater cybersecurity awareness. It is imperative that organisations are proactive: they should conduct regular audits and implement proper reporting mechanisms so that breaches can either be prevented or dealt with swiftly as they arise.
UK CMA accuses Microsoft and Amazon of stifling cloud competition
On 31 July 2025, the Competition and Markets Authority (CMA) issued its final report[15] following a nearly three-year investigation into the UK’s cloud infrastructure services market, concluding that Microsoft and Amazon hold “significant unilateral market power” that has generated financial returns far exceeding their capital investments. The report found that Microsoft Azure and Amazon Web Services (AWS) each control 30% to 40% of UK customer spending in the infrastructure-as-a-service market, whilst Google trails significantly with just 5% to 10% market share. The CMA identified that the market has become highly concentrated with substantial barriers to entry and limited customer mobility, stemming from restrictive licensing practices, egress fees, and technical hurdles that create “lock-in” effects, making it costly for customers to switch providers.[16]
The CMA particularly criticised Microsoft for charging rival cloud firms extra to run its software and for maintaining licensing models that incentivise customers to remain within its ecosystem, making alternatives to Azure less attractive. Both Microsoft and Amazon strongly rejected the findings, with Microsoft claiming the CMA has “missed the mark again” and Amazon warning that further investigation could harm the UK’s position as a global tech hub. Google welcomed the action as a “watershed moment” for the UK’s digital economy.
The CMA is now recommending that both companies be investigated under the Digital Markets, Competition and Consumers Act 2024, which grants regulators sweeping powers to impose binding obligations on companies with “strategic market status”, including forced changes to pricing and business practices.
[1] Artificial Intelligence (Regulation) Bill [HL] – Parliamentary Bills – UK Parliament
[2] The Alan Turing Institute – March 2025
[3] UK ministers delay AI regulation amid plans for more ‘comprehensive’ bill | Artificial intelligence (AI) | The Guardian
[4] Data (Use and Access) Act 2025
[5] Data (Use and Access) Bill factsheet: making lives easier – GOV.UK
[6] Wikimedia Foundation (a charitable foundation registered in the United States of America) and another v Secretary of State for Science, Innovation and Technology [2025] EWHC 2086 (Admin)
[7] Online Safety Act 2023
[8] Software Security Code of Practice – GOV.UK
[9] About the Software Security Code of Practice – NCSC.GOV.UK
[10] The General-Purpose AI Code of Practice | Shaping Europe’s digital future | European Commission
[11] EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act
[12] Call for views on enterprise connected device security – GOV.UK
[13] Call for views on the cyber security of enterprise connected devices – GOV.UK
[14] Retail cyber attacks: NCA arrest four for attacks on M&S, Co-op and Harrods – National Crime Agency
[15] Cloud services market investigation – GOV.UK
[16] UK CMA accuses Microsoft and Amazon of stifling cloud competition | computing