Who is liable for negligent advice produced by AI?
Artificial intelligence (AI) is a fast-moving phenomenon that is making its way into our everyday lives, whether we like it or not. As humans, we tend to resist change, but there are times when we may simply have to embrace it.
AI in the legal industry is exactly one of those changes, though the context and capacity in which it is used depend on its application.
AI isn’t new to the law. It was in use when I started my first legal job out of law school nearly 15 years ago, and likely before then. In my experience, a client would answer simple questions on a form, and a document was then generated from those answers. A human lawyer then checked the document for grammatical and legal accuracy and, voilà – the finished product! Quick and efficient, meaning less time and cost for the client. Occasionally, it might even produce better work than a human.
AI doesn’t come without its risks, though: it may produce wrong answers or lack impartiality, and it is ill-equipped to handle the more sensitive matters that really require a human touch.
So, what happens if your lawyer uses AI and produces a piece of work that ends up being wrong? Are you entitled to bring a claim against your solicitor for professional negligence? Who is liable?
Firstly, to succeed in a professional negligence claim, you must establish that a duty of care existed, that the duty was breached and that you suffered loss as a consequence of that breach. Foreseeability is also a key factor. Under the current law, AI itself is unlikely to owe a duty of care to a client, but a solicitor/client relationship usually gives rise to one. You would then need to identify who breached the duty – the AI, or the human lawyer checking the AI’s output? Finally, was loss suffered as a result of the breach? The court will also consider whether it was foreseeable that the AI would behave in the way it did. This is significant because a lack of foreseeability could leave a claimant with no one to hold liable.
At the moment, there are no known professional negligence claims against lawyers arising from AI-generated advice, but watch this space. Provided the ingredients of a negligence claim can be met, under the current law the human lawyer is the likely liable party, because it was the lawyer, and no one else, who used the AI as an instrument to generate the negligent advice. It is extremely unlikely that a court would deprive a claimant of recovering their losses on the grounds that a machine alone produced the advice; if that were to happen, it would set a dangerous precedent.
AI in law is still relatively in its infancy despite having been around for some time. This suggests that human lawyers may be resistant to change, or that they simply don’t know how to use it. Time will no doubt change that as new generations come into the law. We just need the law itself to catch up with the times!
P.S. This article was written by a human.
Disclaimer: General Information Provided Only
Please note that the contents of this article are intended solely for general information purposes and should not be considered as legal advice. We cannot be held responsible for any loss resulting from actions or inactions taken based on this article.