High court tells UK lawyers to stop using AI after fake case-law citations


The high court has told senior lawyers to take urgent action to prevent the misuse of artificial intelligence after dozens of fake case-law citations were put before the courts that were either completely fictitious or contained made-up passages.

Lawyers are increasingly using AI systems to help them build legal arguments, but two cases this year were blighted by made-up case-law citations that were either confirmed or suspected to have been generated by AI.

In an £89m damages case against the Qatar National Bank, the claimant made 45 case-law citations, 18 of which turned out to be fictitious, with quotes in many of the others also bogus. The claimant admitted using publicly available AI tools, and his solicitor accepted that he had cited the sham authorities.

When Haringey Law Centre challenged the London borough of Haringey over its alleged failure to provide its client with temporary accommodation, its lawyer cited phantom case law five times. Suspicions were raised when the solicitor defending the council repeatedly queried why they could not find any trace of the supposed authorities.

The matter resulted in a wasted-costs action, and a court found the law centre and its lawyer, a pupil barrister, negligent. The barrister denied using AI in that case but said she might have done so inadvertently while using Google or Safari to prepare for a separate case in which she also cited phantom authorities. In that case, she said, she may have taken account of AI-generated summaries without realising what they were.

In a regulatory ruling responding to the cases on Friday, Dame Victoria Sharp, the president of the King’s bench division, said there were “serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused”, and that lawyers misusing AI could face sanctions ranging from public admonishment to contempt of court proceedings and referral to the police.

She called on the Bar Council and the Law Society to consider steps to curb the problem “as a matter of urgency” and told heads of barristers’ chambers and managing partners of solicitors to ensure all lawyers know their professional and ethical duties if using AI.

“Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect,” she wrote. “The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.”

Ian Jeffery, the chief executive of the Law Society of England and Wales, said the ruling “lays bare the dangers of using AI in legal work”.

“Artificial intelligence tools are increasingly used to support legal service delivery,” he added. “However, the real risk of incorrect outputs produced by generative AI requires lawyers to check, review and ensure the accuracy of their work.”


The cases are not the first to have been blighted by AI-created hallucinations. In a UK tax tribunal in 2023, an appellant who claimed to have been helped by “a friend in a solicitor’s office” provided nine bogus historical tribunal decisions as supposed precedents. She admitted it was “possible” she had used ChatGPT, but maintained it made no difference, as there must be other cases that made her point.

The appellants in a €5.8m (£4.9m) Danish case this year narrowly avoided contempt proceedings when they relied on a made-up ruling that the judge spotted. And a 2023 case in the US district court for the southern district of New York descended into chaos when a lawyer was challenged to produce the seven apparently fictitious cases they had cited. The lawyer simply asked ChatGPT to summarise the cases it had already made up; the judge described the result as “gibberish” and fined the two lawyers and their firm $5,000.
