
Artificial intelligence producing artificial authorities
Artificial Intelligence (AI) has taken the world by storm and is becoming increasingly prevalent within the legal sector. Many law firms are embracing AI software to improve speed and efficiency within their practice. There are clear benefits to this, such as reducing time and costs for firms and their clients alike.
There are, however, inherent risks in over-reliance on work produced using AI. Unfortunately, the legal press has recently reported numerous examples where the use of AI has landed not only litigants in person but also solicitors and barristers in hot water.
A recent example is the case of Al-Haroun v Qatar National Bank [2025] EWHC 1383. Although the Claimant was legally represented, they prepared their own witness statement with the assistance of AI. The statement cited a number of authorities (cases, judgments, statutes or textbooks relied upon as statements of the law) that were fake, and the Claimant’s solicitor did not independently verify their client’s legal research.
The Solicitors Regulation Authority warned in its “Risk Outlook report: the use of artificial intelligence in the legal market” (20 November 2023) that AI language models (e.g. ChatGPT) “work by anticipating the text that should follow the input they are given, but do not have a concept of ‘reality’.” As a result, AI can produce erroneous results, a phenomenon “known as ‘hallucination’, where a system produces highly plausible but incorrect results”.
In Al-Haroun, the Court heavily criticised the Claimant and their solicitors and considered whether contempt of court proceedings should be commenced. On 6 June 2025, the Divisional Court handed down judgment addressing this case together with a similar one, R (Ayinde) v London Borough of Haringey. In summary, in Ayinde a legal professional similarly relied on five case authorities, all of which were found to be false. In its judgment, the Court discusses the use of AI tools in the production of written legal arguments and statements of fact.
The Appendix to the judgment summarises many cases from around the globe in which false AI-generated material has been put before the courts. The following examples are from England and Wales only:
- In Harber v Commissioners for His Majesty’s Revenue and Customs [2023] UKFTT 1007 (TC), the appellant (the party attempting to appeal the Court’s decision) put forward the dates and summaries of nine First-tier Tribunal decisions, which had been provided by “a friend in a solicitor’s office”; however, not one of the authorities was real.
- In Olsen v Finansiel Stabilitet A/S [2025] EWHC 42 (KB), the appellants relied on a summary of a decision allegedly made by the Court of Appeal; however, it transpired that the case relied upon was fictitious.
- Conversely, in Zzaman v Commissioners for His Majesty’s Revenue and Customs [2025] UKFTT 00539 (TC), the appellant used AI to produce his written arguments, which cited a number of genuine cases. Although these cases were real, the appellant was criticised because they did not “materially [assist]” his case, illustrating that AI does not always understand the relevance of the results it produces.
In Al-Haroun and Ayinde, the Court explained that “placing false material before the court with the intention that the court treats it as genuine may, depending on the person’s state of knowledge, amount to a contempt. That is because it deliberately interferes with the administration of justice.”
The Court has the power to impose a number of sanctions where false AI-generated material is put before it, as this conduct is considered to mislead the Court. The penalties include:
- an adverse costs order against you, requiring you to contribute towards the other party’s costs;
- striking out your case (whether claim or defence);
- a referral to a regulator (in matters where solicitors or barristers are involved);
- contempt proceedings against you (which can in some circumstances lead to a prison sentence); and
- possibly even a referral to the police in the most serious cases.
These cases provide a stark warning to legal professionals and to litigants in person who are engaged in court proceedings. If you are using AI to assist in the preparation of legal arguments or witness statements, it is imperative that you verify the accuracy of the information using recognised legal research platforms and resources.
Whilst there is certainly a place for AI in the legal industry, it must be used to complement, rather than substitute for, well-established working practices. Litigants in person in particular should be wary of the inherent vulnerabilities and limitations of AI-generated documents. No doubt AI will continue to develop over the coming years; however, during its relative infancy these cautionary tales should be heeded.
Further guidance on the use of AI within court proceedings is anticipated shortly. The Civil Justice Council has recently announced the creation of a working group to examine the use of AI, and changes may be made to the Civil Procedure Rules, subject to the outcome of its findings.
If you require any guidance on this subject or for litigation advice please contact Taylor Walton’s commercial litigation team here.
Disclaimer: General Information Provided Only
Please note that the contents of this article are intended solely for general information purposes and should not be considered as legal advice. We cannot be held responsible for any loss resulting from actions or inactions taken based on this article.