
AI in accounting: Who carries the can?

Potential users of the latest wave of artificial intelligence tools to hit the accounting world may first want to ask one fundamental question: who will take responsibility for AI-generated mistakes or bad advice? Spoiler alert: it’s not likely to be the chatbots.

1st Mar 2024

In 1979, IBM, then the world's most dominant computing company, produced a slide as part of a larger presentation that seems more prescient than ever.


“A computer can never be held accountable. Therefore a computer must never make a management decision”. Let that sink in for a second: particularly any senior executives from the Post Office who happen to be reading.

Now let’s fast forward to 2024, and in the last few days, UK users have met Sage Copilot, Intuit Assist and Just Ask Xero (JAX), all products designed to act as generative AI-powered personal assistants. 

Moving swiftly past the Pegg and Clippy gags, these are remarkable products that would have been unthinkable even a few years ago, and in the long term they could change the way human beings interact with all systems, let alone accounting software.

However, accounting professionals looking to leverage them could do worse than revisit that IBM slide and ask themselves what could go wrong – and who carries the can if it does.

Airline chatbot defence fails to take off

The aviation world recently provided an example of how the cold, hard reality of chatbot use can come back to bite a business. Last week a tribunal in Canada ruled that Air Canada was responsible for the output of its own chatbot.

Deployed to answer questions on the airline's website, the benevolent bot promised a passenger he could book a full-fare flight for his grandmother's funeral and then apply for a bereavement fare afterwards. However, when he applied, the airline said the chatbot had misspoken: the request should have been submitted before the flight – no discount.

The passenger took this to a tribunal, where Air Canada argued the chatbot was a "separate legal entity that is responsible for its own actions". And anyway, it argued, the passenger should have gone to the link provided by the chatbot, which stated the correct policy.

As you can imagine, the tribunal gave this defence pretty short shrift, with one member stating: "It makes no difference whether the information comes from a static page or a chatbot."

An isolated case, perhaps, but for anyone looking to pin the potential blame on chatbot makers, to borrow and bastardise a quote from Jon Jenkins: would you blame the builders' cement mixer if your house fell down? And if a piece of tax software produced a duff calculation, would the blame for any penalties be pinned on the vendor? That defence didn't work out well for one taxpayer…

All of the above is a bit academic anyway, as I'd imagine you'll need to sign some pretty watertight terms and conditions that indemnify the developers before you get your paws on those sweet AI tools.

Ultimate decision lies with the user

To be completely fair to the vendors, they’ve been unequivocal about where these tools sit – in the background and alongside the users, not instead of them. 

I’ve spoken to senior product executives from Sage, QuickBooks and Xero over the past three days, and all of them have said roughly the same thing: the decision-making will ultimately rest with the user, whether that’s accepting (or amending) the auto-categorisation of transactions, pressing send on AI-generated replies, or hitting the button to start an auto-generated pay run.

All this isn’t intended to scare accounting professionals off these tools. In a cash-strapped world where regulation is ratcheting up by the day and new staff are harder to come by than an HMRC helpline operative, anything that takes some of the strain when it comes to admin should be at least considered.

It’s merely to point out that technology in and of itself is never a silver bullet, and any new tool needs to be looked at with a wider lens. Chatbots can’t be held accountable, but accountants can.

On Tuesday 5 March, Tom will be joined by Prysm Financial’s Bhima Hira and Natasha Everard from Bewitching Bookkeeping to talk about the latest developments in AI and find out what’s on their software shopping list this spring. You can watch the session by registering on the show page: Spring accounting software shopping season

Replies (5)


By Rob Swan
01st Mar 2024 11:39

Thanks. Nice one Tom.
Good to hear the senior product execs are so clear on this point.
And... IBM, 1979! Not much has changed... still no solutions for 'Garbage In, Garbage Out'!

Thanks (1)
By Tornado
01st Mar 2024 12:07

"who will take responsibility for AI-generated mistakes or bad advice?"

If you put this question to several different AI programs, what would their answers be?

Thanks (1)
Replying to Tornado:
By BrianL
01st Mar 2024 14:22

This is Gemini's reply.
"Assigning responsibility for AI-generated mistakes and bad advice is a complex issue involving several parties, each playing a crucial role:

1. AI Developers: They hold the initial responsibility for ensuring the AI system is designed and trained responsibly. This includes implementing safeguards against biases, unintended consequences, and ensuring robust functionalities within its intended purpose.

2. AI Operators: These individuals directly interact with the AI system. Their responsibility lies in understanding the system's limitations, using it appropriately, and interpreting its outputs with critical judgment. Essentially, they act as the bridge between the AI and its real-world application.

3. AI Users' Managers: Managers overseeing individuals interacting with AI systems have a responsibility to ensure their teams are adequately trained to use it responsibly. This involves fostering a culture of understanding AI's capabilities and limitations, along with monitoring its use within established guidelines.

4. AI Users' Companies/Employers: Businesses employing AI bear ultimate responsibility for its implementation within their organization. This involves establishing clear guidelines and protocols for AI use, implementing robust risk management strategies, and having clear response plans for potential AI-related incidents. Ultimately, they are accountable for the consequences of their AI implementation.

5. End Users: Individuals directly interacting with AI outputs hold the final level of responsibility. While the system should be designed with safeguards and users should be appropriately informed, it's crucial for individuals to exercise critical thinking and judgment when interpreting AI outputs, particularly when dealing with crucial decisions.

It's important to understand that the responsibility might not always fall on a single entity but could be shared depending on the specific circumstances of the mistake or bad advice. Legal frameworks surrounding AI accountability are still evolving, and further discussions are necessary to establish clear guidelines and regulations for this complex landscape."

Thanks (3)
Replying to BrianL:
By Tornado
01st Mar 2024 15:05

This sounds like a reasonable reply but, typically of AI, you are presented with a list of options and no specific answer. It seems that lots of people, both individually and severally, would be responsible for the generation of mistakes and bad advice. It would probably take a top lawyer to sort out any problems with that.

As far as I can see, AI presents a load of new/additional complications that would need to be addressed in order to confidently (and legally) rely on the answers given. Without any assurance of the accuracy of an AI opinion, what use is that opinion in the real world?

Is this progress?

Thanks (1)
By FactChecker
01st Mar 2024 21:00

"I’d imagine you’ll need to sign some pretty water-tight terms and conditions that indemnify the developers before you get your paws on those sweet AI tools" ... mmm, quite possibly if you're only referring to commercial software that happens to claim an AI component.
But most AI systems are available open source, and it doesn't take a lot of resource to 'train' them on your own data (or, even simpler, to introduce an overlay of different/biased/untrue facts) ... and then release the results out there without any enforceable T&Cs.

But the most immediate issue for me is whether to accept at face value what lies behind the statement:
"To be completely fair to the vendors, they’ve been unequivocal about where these tools sit – in the background and alongside the users, not instead of them."

As a certain Ms Rice-Davies said, "well they would say that, wouldn't they".
First, their lawyers will have insisted on that stance (for obvious reasons) - but, far more importantly, it presupposes that end users even understand (let alone accept) that responsibility when deciding what to enter and what to do next.

Over the past 25 years I have found myself, from time to time, observing educated/competent adults faced with filling in an online form. And irrespective of whether it is an SA return, a Payroll or something else ... I've noted two common factors:
1. They are unfamiliar and so uncomfortable with the process - but the glaringly large gap in their understanding doesn't stop their determination 'to make it work' (defined as finished, not by reference to accuracy, compliance or any odd professional's view of what matters);
2. They notice they can try different values (entering them in different places or with different parameters), which results in different answers (specifically, different amounts of tax due).
The mix of inherent greed and laziness that inhabits many a human leaps into action ... and they do whatever both reduces their tax and completes the return, all the while repeating the mantra "It's OK, the computer says so".

Presumably I don't need to draw the lines in detail to compare that scenario to increasingly poorly paid and untrained operatives of AI (whether junior trainees in a practice or the unrepresented taxpayer) ... and wonder whether they too will 'rely' on the most favourable output they can extract from the Bot (whether in terms of least effort or least tax). Indeed they may not be capable of truly checking that output without the personal experience of performing those calculations.

Thanks (4)