
Chat-botch: Taxpayer’s AI gambit falls flat at tribunal

A taxpayer lost a tribunal appeal after a judge discovered the nine cases she cited in her defence didn’t exist and had been generated by an artificial intelligence-powered chatbot.

13th Dec 2023

Taxpayer Felicity Harber appealed against an HMRC penalty, believing she had a reasonable excuse, and provided the first-tier tribunal (FTT) with the names, dates and summaries of rulings that supported her argument.

On closer inspection, however, although the summaries contained names and details similar to real reasonable excuse cases, the rulings themselves could not be found on any legal website. The tribunal found they had been generated by an artificial intelligence tool such as ChatGPT, and the appeal was dismissed.

Failure to check

HMRC issued Harber with a "failure to notify" penalty of £3,265.11 in July 2022 after she sold a property and did not notify the tax authority of her liability to capital gains tax (CGT).

Appealing the penalty, not the CGT assessment, Harber put forward two bases on which she believed she had a reasonable excuse: her mental health and ignorance of the law.

Representing herself, Harber provided the tribunal with the names, dates and summaries of nine FTT decisions in which the appellant had been successful in showing that a reasonable excuse existed. She claimed the cases had been provided by “a friend in a solicitor’s office”.

However, when HMRC’s representative attempted to cross-check the cases on the FTT website, she could not find them. On closer examination, the summary text for several of the cases was also almost identical and repeatedly used the American English spelling ‘favor’ rather than ‘favour’.

‘Highly plausible but incorrect results’

The tribunal spotted similarities between the cases provided by Harber and actual reasonable excuse cases heard by the FTT and published on its website. These included the wording of the summaries and the fact that many of the appellants’ names were similar to real ones – for example, the FTT has decided 16 reasonable excuse penalty cases in which the appellant's surname was "Smith".

This led the tribunal to consider whether the cases had been generated by an artificial intelligence (AI) tool such as ChatGPT. Guidance from the Solicitors Regulation Authority states that such systems produce “highly plausible but incorrect results”, and the tribunal also considered the recent American case in which two lawyers sought to rely on fake cases generated by ChatGPT.

The tribunal asked Harber if the cases had been generated by an AI system, such as ChatGPT. Harber replied that this was "possible" and didn’t provide an alternative explanation. She added that she couldn’t see why this made any difference, as there “must have been other FTT cases in which the tribunal had decided that a person's ignorance of the law and/or mental health condition provided a reasonable excuse”.

Harber then fired back at the tribunal, asking whether it could be confident that the cases relied on by HMRC and included in the case were genuine. The tribunal pointed out that HMRC had provided a full copy of each of those judgments, which are available on publicly accessible websites such as that of the FTT and the British and Irish Legal Information Institute (BAILII). Harber told the tribunal she had been “unaware” of those websites.

Citing invented judgments is ‘a serious and important issue’

Summing up, Judge Anne Redston accepted that Harber had been unaware the AI cases were not genuine and that she did not know how to check their validity.

Harber’s appeal was dismissed, with the judge adding that the outcome would have been the same even without the chatbot-generated cases. 

However, Judge Redston added that providing materials “which are not genuine and asking a court or tribunal to rely on them is a serious and important issue”.

“Even though misleading a court is likely to have less impact in a tax appeal than in many other types of litigation, that does not mean that citing invented judgments is harmless,” she said.

'At what point do you put your trust in AI and pray it won't lead to a lawsuit?'

Kieran O'Connor, director at The Cambridge Tax Practice, told AccountingWEB that while trawling through an AI database should never be a substitute for a consultation with a tax professional, he had some sympathy with the appellant in this case.

"For a long time now, we have been looking for answers by Googling whatever we wanted to find," he said. "We've come to expect that the information will be 'out there' and it's just a question of finding it. So if AI is a tool to find information, what could possibly go wrong?"

"Herein lies the problem with AI," he continued. "It is an indiscriminate search engine in disguise, so it will happily scrape opinions from Twitter, LinkedIn, or wherever else it can find them and present them as facts. Doubtless some of the information it scrapes will be accurate, but there is no way of telling. Worse, if there is a gap in the information, then AI will fabricate whatever is needed to fill that gap."

"It is generally considered that AI is in its infancy and that eventually it will become a very good tool in the tax world. But at what point do you put your trust in AI and pray it won't lead to a lawsuit?"

Replies (18)


By lionofludesch
13th Dec 2023 12:35

Bless me. Are these the depths to which we have sunk?

Thanks (4)
By johnjenkins
14th Dec 2023 09:26

It used to be a man in the pub. It's now a friend in a solicitor's office. What next, an ex-minister (PM or otherwise), perhaps?

Thanks (0)
Replying to johnjenkins:
By Yossarian
14th Dec 2023 09:39

"I refer you, m'lud, to the case of Cameron v Luke Skywalker......"

Thanks (4)
By SuperAccountingSteve
14th Dec 2023 09:36

The late Charlie Munger, a wise man, was very skeptical about bitcoin (& similar) and also thought that AI (what it can do/its level of advancement) was being exaggerated. It's not intelligence we are witnessing, just the next level of 'googling' using math.

Thanks (4)
By listerramjet
14th Dec 2023 09:44

A fascinating insight into an arbitrary process, but I am still none the wiser. I know that ignorance of the law is no excuse, reasonable or otherwise, but I would have thought that “mental health issues” might be. I wonder if those issues were a result of the assessment? Perhaps ChatGPT might throw some light onto this?

Thanks (0)
Replying to listerramjet:
By richard thomas
14th Dec 2023 13:41

Ignorance of the law can be an excuse - see Perrin in the Upper Tribunal. And mental health issues *can* be, as Judge Redston held in another case, as did I in E v HMRC.

Thanks (0)
By Duggimon
14th Dec 2023 09:49

It would help if we'd stop calling GPTs AI. They're not AI – there's no intelligence in them. There is no thought process; it's more like a sophisticated version of opening a smartphone messaging app and mashing the middle option for what it guesses you're going to say.

You can't guarantee that the results from it will ever be accurate because there is no "it" to consider what it's saying; it just generates text based on sophisticated predictions of what ought to come next in a dialogue.

The results seem astonishingly close to a real dialogue with a thinking entity, but that small gap is, on the back end, a massive chasm: there is no entity, and there is no facility within it for sense checking – the mechanism by which it does what it does will not allow it. The closest they could get would be a second system sitting on top of the one doing the generating to run a fact check on it, and that would be a whole new and also extremely complex machine that would still not be intelligent.

Thanks (6)
Replying to Duggimon:
By johnjenkins
14th Dec 2023 10:33

As I said, a glorified calculator – and we all know what happens when our fingers slip.

Thanks (2)
By LANCEARMPONG
14th Dec 2023 09:58

Kramer vs Kramer, always the go-to in law exams.

Thanks (0)
By Casterbridge Hardy LLP
14th Dec 2023 11:44

Ho ho! Is it 01 April 2024 already?

Thanks (2)
By richard thomas
14th Dec 2023 13:44

Lord Justice Birss, deputy head of civil justice, has just circulated (no doubt at least partly as a result of Anne's decision) advice to all judges about needing to look out for AI-generated citations and summaries of case law. This one will run and run.

Thanks (2)
Replying to richard thomas:
By Tom Herbert
14th Dec 2023 15:04

Thanks Richard. If you were still judging cases, what would your approach to this be? Or is it such a new phenomenon it may take time to see how this unfolds in tribunal submissions?

Thanks (0)
Replying to TomHerbert:
By johnjenkins
14th Dec 2023 15:16

Problem is, Tom, you only need a clever dicky to win a case, then it's good night nurse.

Thanks (0)
Replying to TomHerbert:
By richard thomas
14th Dec 2023 16:12

The same as Anne Redston's.

Thanks (2)
By Caber Feidh
14th Dec 2023 14:48

Lord Justice Birss, who specialises in intellectual property law, is on record as using AI to draft part of one of his judgments – but it was a summary of an area of law with which he was familiar.

Who, in their right mind, would cite judgments they did not already know or had not read? It is easy enough to check them via www.bailii.org. I am fairly confident that I have seen such advice on AccountingWEB, more than once, over recent years.

There are a couple of US lawyers who were censured and fined by a judge for using AI to create their submission, and then failing to note that none of its citations existed, even when they had been questioned. [Added in editing, by cutting and pasting from a 22 June 2023 CNBC article] Judge P. Kevin Castel said that the attorneys, Peter LoDuca and Steven Schwartz, “abandoned their responsibilities” when they submitted the A.I.-written brief in their client’s lawsuit against the Avianca airline in March, and “then continued to stand by the fake opinions after judicial orders called their existence into question.” The judge said he might not have sanctioned the attorneys if they had come “clean” about Schwartz using ChatGPT to create the brief.

Judge Castel did add that “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance, ... but existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

Thanks (1)
Replying to Caber Feidh:
By FactChecker
14th Dec 2023 14:56

Only "censured and fined"?
Sounds like they should've had their brains serviced (or replaced with AI).

Thanks (1)
Replying to FactChecker:
By Beach Accountancy
15th Dec 2023 17:22

Replacing their brains with AI would be an upgrade

Thanks (0)
Replying to Beach Accountancy:
By Caber Feidh
15th Dec 2023 17:36

And AI would be an upgrade that continues to upgrade with the passage of the years, while their brains will only continue to degrade. Plus, AI will become ever cheaper.

Thanks (0)