If you remain sceptical about the growing influence of artificial intelligence (AI) in our world, it was notable how some technology market stocks were adversely affected - and even the US President felt compelled to comment - when news broke last week of a new, powerful DeepSeek AI system developed cost-effectively in China.
Indeed, according to Shoosmiths' second annual Litigation Risk 2025 report published last month, from a litigation perspective, AI is considered by many boards to be the biggest area of emerging risk.
When the report's feedback from 360 general counsel and senior in-house lawyers is analysed further, it's striking that 48% of those surveyed described the likelihood of employment disputes resulting from the impact of AI on jobs as 'high risk'.
Moreover, these respondents, working within the likes of tech, automotive, financial services and property businesses with a £100m+ turnover, believe intellectual property risks arising from the use of generative AI will increase more in the next three years than any other area of dispute.
It would seem we ignore the rise of AI at our peril.
Nevertheless, in a UK business context, you may wonder how the use of AI systems could give rise to litigation.
Put simply, risks can arise from the data used by AI systems, the use of the systems themselves, and the outputs they generate.
For example, AI is used in banking and finance for fraud detection, personalised financial advice and automating customer service. It's applied to online shopping platforms to recommend products based on your browsing and purchase history, while AI algorithms are used in healthcare for analysing medical images to detect early disease and for administrative tasks.
These are arguably some of the many benefits of adopting AI technology. However, AI-related issues that can arise and lead to litigation include:
Unauthorised use of copyrighted material: AI systems, especially those used for generating images or text, often require large datasets. If these datasets include copyrighted material without proper licensing or permission, this can lead to legal disputes. For example, visual artists may raise actions against AI developers for using their copyrighted images to train AI models without attribution.

Intellectual property infringement: The core of the dispute usually involves allegations that the AI-generated content infringes the intellectual property rights of the original creators. This can include claims that the AI system reproduces or closely mimics copyrighted works, violating copyright laws.
Such issues arise in the current English case of Getty Images v Stability AI.
Getty Images, a company known for its vast collection of photos, accused Stability AI, a company that creates AI technology, of copyright infringement for using its images without permission to train its AI system (the system can generate new images based on the photos it was trained on).
The main issues in the case are about copyright (using photos without permission), database rights (using a collection of photos without permission), and trademark infringement (using Getty Images' brand without permission).
This case may go to trial in June this year. The court has already acknowledged the technical and practical difficulties of interrogating a huge number of images and copyrighted works.
Among the issues coming to the fore are:
General contractual disputes arising from an over-reliance on AI for legal drafting. While AI can increase back-office efficiency and reduce costs to the client, rigorous human-led quality control, including careful proofreading and legal analysis, is still needed to avoid costly drafting mistakes. AI is currently unable to replace the sound legal judgement and context crucial for ensuring legal compliance and protecting business interests.

Employment disputes resulting from AI's impact on jobs. Some UK-based Uber drivers have already raised concerns about the company's use of facial recognition technology for identity verification. Some drivers claimed that the AI system failed to recognise them accurately, leading to claims of racial discrimination and wrongful termination.

Discrimination claims resulting from AI-powered decision-making. Amazon faced criticism after it was revealed that its AI hiring tool was biased against women. The algorithm, trained on CVs submitted over a 10-year period, favoured male candidates for technical roles, giving rise to concerns of gender discrimination.

Increased risk of fraud. Cyber criminals are increasingly using AI to mount sophisticated attacks. For example, AI can generate highly convincing phishing emails and messages by mimicking the writing style of trusted contacts, making it easier to deceive individuals into sharing sensitive information and even transferring large sums of money.
In addition to the above, the increasing application of AI in the business world presents potential litigation risks in terms of disclosure and data protection.
Legal frameworks may require AI developers to disclose detailed summaries of the content used for training their models. Failure to comply with these regulations can exacerbate the risk of IP disputes.
The large quantities of data AI models learn from will require businesses to carefully consider their data protection obligations, as the principles of data protection legislation will apply when the data used to train new AI systems constitutes 'personal data'.
Consequently, as AI development and its application in business soar, there are both risks and benefits to its widespread adoption. While the likes of the financial services sector - and wider crime prevention organisations - can embrace such technology to enhance fraud detection capability, there's also no doubt that the prevalence of AI will increasingly keep litigators busy.
And a case (Harber v Revenue and Customs Commissioners) in the legal archives offers a pertinent word of caution about over-reliance on unverified AI-generated content. A taxpayer (the appellant) appealed against a penalty for failure to notify liability to capital gains tax. The appellant provided the tribunal with the names, dates and summaries of nine First-tier Tribunal decisions in which appellants had been successful in showing that a reasonable excuse existed in similar circumstances.
Unfortunately, the cases relied on by the appellant were found to have been generated by an artificial intelligence system such as ChatGPT.
The Tribunal highlighted the harm that was caused by providing fictitious cases, such as causing it and HMRC to waste time and public money and promoting cynicism about judicial precedents.
AI is here to stay, but we must all be vigilant about how it's applied.
Seonaid Sandham is a senior associate in the dispute resolution and litigation team at Shoosmiths in Scotland