Case Law Updates
In March 2023, the High Court held that a former employee (Mr Aughton) had infringed the copyright in software owned by one of the claimants. Mr Aughton had also misused confidential information.
Productivity-Quality Systems Inc (PQS), a US company, develops software to assist with quality assurance in manufacturing. Before 2015, PQS carried on business alongside PQ Systems Europe Ltd (PQE) (together, PQ). Mr Aughton was a former employee and former director of PQE who had worked on statistical process control (SPC) software for improving industrial processes. He had developed an SPC program called “ProSPC”, which he alleged was created as a hobby project in his spare time and over which he later claimed ownership.
PQ alleged that Mr Aughton copied or otherwise made use of ProSPC when he wrote two further programs, “InSPC v1” and “InSPC v2”, after he left PQE. PQ issued proceedings against Mr Aughton for copyright infringement and breach of confidence in relation to both versions of InSPC.
Findings of the High Court
The issues for Mr Justice Zacaroli were:
- Whether PQ owned the copyright in ProSPC; and
- Whether Mr Aughton copied from it when writing InSPC v1 and InSPC v2 (whether directly or indirectly) after his departure from PQE.
PQ’s case on copying was complicated by the fact that no original source code was available for ProSPC or InSPC v1, both having been deleted by Mr Aughton. PQ therefore relied on de-compiled object code for ProSPC and InSPC v1 to obtain a version of the source code to compare against the de-compiled code of InSPC v2; however, it was common ground that de-compiled code is very different from the original source code. InSPC v2 had also been written by Mr Aughton in the C# programming language, whereas ProSPC and InSPC v1 had been written in VB.NET. As such, PQ’s case rested on circumstantial evidence.
Mr Justice Zacaroli held, on the evidence before him – including Mr Aughton’s admissions in an earlier disciplinary hearing and his inconsistent explanations at trial – that “ProSPC was indeed written in the course of Mr Aughton’s employment such that the copyright and confidential information in it belonged to PQ”. ProSPC was exactly the kind of software that Mr Aughton was engaged to write, and his work in writing ProSPC was integral to PQ’s business. It was also relevant that he had written ProSPC using resources licensed by PQ.
Key takeaways for employers:
- Employers should clarify any ambiguities around what counts as work carried out within the scope of employment. As this case shows, work produced with the help of the company’s materials or resources may be treated as created in the course of employment even where it is carried out in a ‘non-working’ environment or during ‘non-working’ hours.
- Employers should ensure that contracts, policies and job descriptions are kept up to date, avoiding situations where the type of IP being created, or the role or extent of IP creation, risks falling outside what could reasonably be expected to be within the course of an employee’s employment.
Craig Wright’s Bitcoin copyright claim
This case is one of a number of actions brought by Craig Wright in the English High Court regarding his status as the purported inventor of Bitcoin. The Australian computer scientist and self-proclaimed Satoshi Nakamoto (the pseudonymous creator of Bitcoin) has had a copyright claim over Bitcoin rejected by a UK court. Mr Wright claimed that he should be able to block the operation of Bitcoin, and of Bitcoin Cash (the system that forked from it), because they breach his intellectual property rights.
Mr Justice Mellor commented that “there is no evidence that the Bitcoin File Format is set out in any part of the software or early blocks written to the Bitcoin Blockchain, as opposed to the Bitcoin Software simply reading and writing files in that format”.
Mr Wright has indicated his intention to appeal.
The Bitcoin File Format could not be treated as a literary work protected by copyright because it had not been fixed: the format – the structure by which digital transactions combine to form blocks in the blockchain – was a component part of a wider structure, rather than something recorded in the software or the early blocks themselves, and therefore could not attract copyright in and of itself.
Whether Mr Wright is really the author of the Bitcoin white paper (the foundational document of Bitcoin, published in October 2008 by Satoshi Nakamoto) will be the subject of later rulings, which will have a greater impact on the legal status of Bitcoin.
Meta fined €1.2bn over EU–US data transfers
Facebook’s owner, Meta, has been fined €1.2bn (£1bn) for mishandling people’s data when transferring it between Europe and the United States – the largest fine yet imposed under the EU’s General Data Protection Regulation (GDPR). The decision revolves around Meta’s use of standard contractual clauses (SCCs) to move European Union data to the US. These legal contracts, prepared by the European Commission, contain safeguards to ensure personal data continues to be protected when transferred outside Europe. But there are concerns that these data flows still expose Europeans to the US’s weaker privacy laws, and that US intelligence agencies could access the data.
Andrea Jelinek, Chair of the European Data Protection Board, said: “The EDPB found that Meta IE’s infringement is very serious since it concerns transfers that are systematic, repetitive and continuous. Facebook has millions of users in Europe, so the volume of personal data transferred is massive. The unprecedented fine is a strong signal to organisations that serious infringements have far-reaching consequences.”
Meta’s president of global affairs, Nick Clegg, said: “This decision is flawed, unjustified and sets a dangerous precedent for the countless other companies transferring data between the EU and US.” Meta says it will appeal against the “unjustified and unnecessary” ruling.
If they have not already done so, organisations should:
- Ensure they have a clear map of their data transfers and have thoroughly risk-assessed them, focussing particularly on transfers to the US;
- Review all transfer risk assessments conducted (and finalise any that are outstanding), especially for US transfers, to make sure they are complete, thorough and up-to-date;
- Consider whether further supplementary measures can be put in place for any US transfers to mitigate the risks as far as possible; and
- Keep a close eye on further developments in this case, which is far from over.
Italy’s temporary ban on ChatGPT
On 1 April 2023, Italy issued a temporary ban on ChatGPT over concerns about data transparency and the protection of minors’ data. The Italian Garante launched an investigation into OpenAI over a data breach and the lack of age verification at registration to protect younger users from inappropriate generative AI content. Later that month, however, the ban was reversed and access to ChatGPT was restored.
Below is a timeline of key updates concerning ChatGPT, which shows considerable movement over the last six months.
- 25 May 2023: ChatGPT app now available in 11 more countries. OpenAI announced in a tweet that the ChatGPT mobile app is now available on iOS in the U.S., Europe, South Korea and New Zealand, and that more countries will soon be able to download the app from the app store.
- 18 May 2023: OpenAI launches a ChatGPT app for iOS. The new ChatGPT app is free to use, free from ads and allows voice input, but was initially limited to U.S. users at launch.
- 3 May 2023: Hackers are using ChatGPT lures to spread malware on Facebook. Meta said in a report that malware posing as ChatGPT was on the rise across its platforms: “In one case, we’ve seen threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-based tools”.
- 25 April 2023: OpenAI previews a new subscription tier, ChatGPT Business. OpenAI describes the forthcoming offering as “for professionals who need more control over their data as well as enterprises seeking to manage their end users.”
- 24 April 2023: OpenAI wants to trademark “GPT”. OpenAI applied last December for a trademark for “GPT”, which stands for “Generative Pre-trained Transformer”. A decision could take up to five months.
- 18 April 2023: FTC warns that AI technology like ChatGPT could ‘turbocharge’ fraud. Lina Khan, Chairperson of the Federal Trade Commission, commented that “AI presents a whole set of opportunities, but also presents a whole set of risks” and that “we’ve already seen ways in which it could be used to turbocharge fraud and scams.”
- 12 April 2023: Researchers discover a way to make ChatGPT consistently toxic. A study co-authored by scientists at the Allen Institute for AI shows that assigning ChatGPT a “persona” – for example, “a bad person” or “a horrible person” – through the ChatGPT API increases its toxicity sixfold, causing it to say more offensive things than it normally would.
- 12 April 2023: Italy gives OpenAI a to-do list for lifting the ChatGPT suspension order. Italy’s data protection watchdog laid out what OpenAI needs to do for it to lift the order against ChatGPT issued at the end of the previous month.
- 1 April 2023: Italy orders ChatGPT to be blocked. The Italian DPA said it was concerned that the ChatGPT maker was breaching the European Union’s General Data Protection Regulation.
- 29 March 2023: 1,100+ signatories sign an open letter asking all ‘AI labs to immediately pause for 6 months’. The letter’s signatories include Elon Musk, among others, and it calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
- 23 March 2023: OpenAI connects ChatGPT to the internet. OpenAI launched plugins for ChatGPT, extending the bot’s functionality by granting it access to third-party knowledge sources and databases, including the internet.
- 14 March 2023: OpenAI launches GPT-4, available through ChatGPT Plus. GPT-4 is an image- and text-understanding AI model from OpenAI.
- 9 March 2023: ChatGPT becomes available in the Azure OpenAI Service. ChatGPT is generally available through the Azure OpenAI Service, Microsoft’s fully managed, corporate-focused offering.
- 1 March 2023: OpenAI launches an API for ChatGPT. OpenAI makes another move toward monetisation by launching a paid API for ChatGPT. Snap, Snapchat’s parent company, is among its initial customers.
- 7 February 2023: Microsoft launches the new Bing, with ChatGPT built in. The announcement spurred a 10x increase in new downloads for Bing globally, indicating sizeable consumer demand for new AI experiences.
- 1 February 2023: OpenAI launches ChatGPT Plus, starting at $20 per month. OpenAI launched a new pilot subscription plan for ChatGPT, called ChatGPT Plus, aiming to monetise the technology at $20 per month.
- 8 December 2022: ShareGPT lets users easily share their ChatGPT conversations. Two developers, Steven Tey and Dom Eccleston, made a Chrome extension called ShareGPT to make it easier to capture and share the AI’s answers with the world.
- 30 November 2022: ChatGPT first launches to the public as OpenAI quietly releases GPT-3.5. GPT-3.5 broke cover with ChatGPT, a fine-tuned version of GPT-3.5 that is essentially a general-purpose chatbot.
In the space of six months, ChatGPT has moved a long way from being an intriguing new technology with many potential applications across industry. It is now the subject of much fearful public speculation as to its potential risks, and the risks posed by AI more generally. Leading experts in the field, including executives at OpenAI, the company behind ChatGPT, have put their names to a statement released by the Center for AI Safety on 30 May 2023 which reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. Given the relative newness of the underlying technology, it is very difficult today to know what risks AI might pose in the future and how they might be managed.
The challenge will surely be for our political class to work out how best to regulate AI in a manner that safeguards fundamental human rights while also allowing for innovation. The EU has set out its stall very clearly with its proposal for a Regulation on AI, currently going through the EU legislative process. The proposal involves a risk-based approach to AI and a requirement that all “high-risk” AI applications go through a conformity assessment before being allowed to be placed on the market. The difficulty is that Microsoft has already made it possible to embed ChatGPT in the office applications used by office workers every day. In such a scenario, AI in the form of ChatGPT will be used routinely by office workers globally across all industries. How, then, would the suggested EU approach work in practice? The UK Government has yet to decide on a legislative approach; it may wish to review the success or otherwise of the EU approach, and reflect upon what that might mean for the best approach to adopt in a UK context.