Chat-botch: Taxpayer’s AI gambit falls flat at tribunal
A taxpayer lost a tribunal appeal after a judge discovered the nine cases she cited in her defence didn’t exist and had been generated by an artificial intelligence-powered chatbot.
Taxpayer Felicity Harber appealed against an HMRC penalty, believing she had a reasonable excuse, and provided the first tier tribunal with the names, dates and summaries of rulings that supported her argument.
On closer inspection, it was revealed that although the summaries contained names and details similar to real reasonable excuse cases, they could not be found on any legal website. The tribunal ruled they had been generated by an artificial intelligence tool such as ChatGPT. The appeal was dismissed.
Failure to check
HMRC issued Harber with a "failure to notify" penalty of £3,265.11 in July 2022 after she sold a property and failed to notify the tax authority of her liability to capital gains tax (CGT).
Appealing the penalty, not the CGT assessment, Harber put forward two bases on which she believed she had a reasonable excuse: her mental health and an ignorance of the law.
Representing herself, Harber provided the tribunal with the names, dates and summaries of nine first tier tribunal (FTT) decisions in which the appellant had been successful in showing that a reasonable excuse existed. She claimed the cases had been provided by “a friend in a solicitor’s office”.
However, when HMRC’s representative attempted to cross-check the cases on the FTT website she could not find them. On closer examination, the summary text for several of the cases was also almost identical, and repeatedly used the American spelling ‘favor’ rather than the British ‘favour’.
‘Highly plausible but incorrect results’
The tribunal spotted similarities between the cases provided by Harber and actual reasonable excuse cases heard by the FTT and published on its website. These included the wording of the summaries, with many of the appellants’ names resembling those in real cases: for example, the FTT has decided 16 reasonable excuse penalty cases in which the appellant's surname was "Smith".
This led the tribunal to consider whether the cases had been generated by an artificial intelligence (AI) tool such as ChatGPT. Guidance from the Solicitors Regulation Authority states that such systems produce “highly plausible but incorrect results”, and the tribunal also considered the recent American case in which two US lawyers sought to rely on fake cases generated by ChatGPT.
The tribunal asked Harber if the cases had been generated by an AI system, such as ChatGPT. Harber replied that this was "possible" and didn’t provide an alternative explanation. She added that she couldn’t see why this made any difference, as there “must have been other FTT cases in which the tribunal had decided that a person's ignorance of the law and/or mental health condition provided a reasonable excuse”.
Harber then fired back at the tribunal, asking whether it could be confident that the cases relied on by HMRC in its own submissions were genuine. The tribunal pointed out that HMRC had provided full copies of each of those judgments, which are available on publicly accessible websites such as that of the FTT and the British and Irish Legal Information Institute (BAILII). Harber told the tribunal she had been “unaware” of those websites.
Citing invented judgments is ‘a serious and important issue’
Summing up, Judge Anne Redston accepted that Harber had been unaware the AI cases were not genuine and that she did not know how to check their validity.
Harber’s appeal was dismissed, with the judge adding that the outcome would have been the same even without the chatbot-generated cases.
However, Judge Redston added that providing materials “which are not genuine and asking a court or tribunal to rely on them is a serious and important issue".
“Even though misleading a court is likely to have less impact in a tax appeal than in many other types of litigation, that does not mean that citing invented judgments is harmless,” she said.
'At what point do you put your trust in AI and pray it won't lead to a lawsuit?'
Kieran O'Connor, director at The Cambridge Tax Practice, told AccountingWEB that while trawling through an AI database should never substitute for a consultation with a tax professional, he had some sympathy with the appellant in this case.
"For a long time now, we have been looking for answers by Googling whatever we wanted to find," he said. "We've come to expect that the information will be 'out there' and it's just a question of finding it. So if AI is a tool to find information, what could possibly go wrong?"
"Herein lies the problem with AI," he continued. "It is an indiscriminate search engine in disguise, so it will happily scrape opinions from Twitter, LinkedIn, or wherever else it can find them and present them as facts. Doubtless some of the information it scrapes will be accurate, but there is no way of telling. Worse, if there is a gap in the information, then AI will fabricate whatever is needed to fill that gap."
"It is generally considered that AI is in its infancy and that eventually it will become a very good tool in the tax world. But at what point do you put your trust in AI and pray it won't lead to a lawsuit?"