
The AI buck still stops with the humans


As artificial intelligence becomes ever more prevalent it’s getting trickier to tell true information from fake. But when things go wrong it’s no use blaming the bots.

26th Jun 2024

As the saying goes: “A lie can run around the world before the truth has got its boots on.” We’ve witnessed this over the past decade, with the internet driving a rise in fake news and disinformation in the 2010s, until the situation reached a perfect storm during the pandemic. But I wonder if the worst is yet to come. 

There is an unprecedented volume of information bombarding us every day, it is increasingly difficult and time-consuming to distinguish between what is credible and what is fake, and now, artificial intelligence (AI) chatbots are increasingly mediating our access to information.

Like all technology, AI itself is neutral. In fact, in 2024, AI is nothing new or even that exciting anymore – we’ve all been using it for years. If you’ve used Siri, a navigation app, facial recognition to unlock your phone, or Gmail’s spam filter, you’ve used AI. 

AI filters and shapes our world

However, the generative AI (GenAI) entering the mainstream through tools such as ChatGPT has sparked the AI debate again, including among the accountants attending our user conference in May. The big step change is that GenAI doesn’t just make decisions about existing data, it can create new content or data. It does this by learning from massive data sets, spotting patterns and structures, and using those to create new content. 

This means that the chatbots present a view of the world, including facts, figures and analysis calculations, that we can use to make decisions. And it makes sense that instead of using the internet to research the top place to eat in Santorini this summer, for instance, we can ask a chatbot to do this for us. This is quicker and more efficient, and the efficiency and productivity benefits for the workplace are obvious. 

In the race to do more, and to work quickly and smartly to support our businesses and clients to make better, data-informed decisions, is it any wonder that GenAI tools are the next big thing? But what happens if we can’t trust them? 

And we can’t always. The AI training data sets are pulled from the internet. So they contain the same falsehoods, misinterpretations and ambiguities that the internet does. Plus, the datasets tend to be up to two years old – think about how much has happened in the past two years and consider what is missing. 

Given what we know about the truthfulness of the information already out there on the internet, having lived through the rise of fake news, why would we blindly trust AI chatbots at all? Given they are also taking data from the internet, generative AI has the potential to amplify existing false information. Plus, it can also contribute new falsehoods, through fictions it creates itself (these are called hallucinations). But, bizarrely, it seems like we do trust the bots wholeheartedly.

Egg on their faces

Legal cases have already been thrown out of court because lawyers based their arguments on fictitious case histories that generative AI presented as the truth. In New York in June 2023, a judge fined lawyers $5,000 for presenting fictitious cases in a legal brief. Then in July, lawyers arguing a case in Johannesburg used fictitious details to support their client’s case. Their client had to pay the defendant’s legal fees, but the judge maintained that the egg on the lawyers’ faces was punishment enough. 

(This is fairly ironic given lawyers are typically the first to say you should rather hire them than, for instance, let AI review your legal documents.) 

AI hallucinations

Why do these hallucinations happen? AI uses pattern recognition, not true understanding, to learn from training data sets. The outcome is based on the statistical likelihood of certain words appearing in a certain order and not on the AI truly understanding what’s going on. If the training data is missing information, or includes biased, incorrect or misleading data, the chatbot can’t recognise this and will present it as fact. And it does this in a confident, authoritative way that makes the replies sound authentic. 
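The "statistical likelihood" point can be made concrete with a toy bigram model — a deliberate oversimplification of how real language models predict the next word, using an invented mini-corpus. The corpus and function names here are illustrative only, not anything from an actual AI system:

```python
from collections import defaultdict

# Invented mini-corpus for illustration. The model "learns" only how
# often one word follows another -- it has no notion of meaning.
corpus = (
    "the best restaurant in santorini is closed . "
    "the best restaurant in town is open . "
    "the best cafe in town is busy ."
).split()

# Count how often each word follows each preceding word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent follower of `word`."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(most_likely_next("best"))  # -> "restaurant" (follows "best" 2 of 3 times)
print(most_likely_next("in"))    # -> "town" (follows "in" 2 of 3 times)
```

The model confidently outputs whichever continuation was most frequent in its training data, true or not — if the corpus contained a falsehood often enough, the falsehood is what gets predicted. That, in miniature, is why a chatbot can present fiction as fact.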

So, in the example above, when researching a dinner spot in Santorini, if you used ChatGPT you would have found yourself eating at 2021’s hotspot, because ChatGPT’s training data only goes as far as September 2021. If you had used an alternative chatbot, say, Claude AI, you would have fared somewhat better and at least found out what last year’s top restaurant was – Claude’s data goes as far as August 2023. 

Legal and regulatory implications

A slightly disappointing meal out is one thing. However advising clients based on out-of-date tax laws or incorrect data could significantly impact their tax filings, budgeting, forecasting and investment strategies. At worst, this could have legal and regulatory implications, and at best, you’d have egg on your face, just like the lawyers. 

On the one hand, generative AI is a tool that is quickly becoming part of our daily lives out of necessity. It’s the only way we can keep pace with the constant change in today’s world and deliver excellent service to our clients – shorter budget cycles and quicker decisions are things I prioritise. On the other hand, the tool itself acknowledges it is flawed, has no common sense, and should be fact-checked rather than blindly trusted. 

You can’t blame the AI

“But the AI made a mistake” cannot be a defence when things go wrong. Air Canada discovered this earlier in the year when a court ruled it was liable for incorrect information that a chatbot supplied to a customer. A passenger had used the bot to check the refund terms of a ticket, but when it came time to claim the refund the airline maintained that refunds could not be granted retroactively – contrary to what the chatbot had said. The company argued that the bot “was responsible for its own actions”, which is both laughable and a slippery slope to a very dangerous place. 

There’s another irony here. If businesses are constantly encouraging us to engage with their chatbots, the very least they could do is ensure the information is correct and then comply with the information supplied. The legal tribunal agreed, and rejected Air Canada’s contention that the bot was a separate legal entity. The airline was forced to pay the refund with interest and fees. 

The buck stops with you

So today it might feel like a lie can circumnavigate the globe and travel to Mars before the truth has even blearily opened its eyes and thought about its first coffee of the day. But businesses should beware: the buck stops with them, and they are responsible for what they say, whether or not AI was involved.

Replies (19)


By Tornado
26th Jun 2024 10:38

The first thing I do these days if I think I am talking to AI is to ask if I am speaking to a real person. The reply is usually along the lines of "I am your virtual assistant". I then ask to be transferred to a real person, which usually instigates one final sentence of resistance, but ultimately this type of AI seems to do what is asked and invariably I am put through to a real person within a few seconds who is pleased to talk to me and can understand what it is that I am phoning about.

It is clear that this type of AI is programmed not to argue and to do what it is asked, which is great because the waits between answers and the general slowness of AI to respond are not only time consuming, but it can take a long time just to get it to understand what I am phoning about. If AI is going to be used then it needs to be every bit as good as humans at communicating like a human, otherwise what is the point of it?

I do not look forward to the day when AI is more dominant and requires you to do exactly what IT says, or cuts you off. That is when the machines (or their owners) will have total control of our lives, posing a greater threat, perhaps, than climate change could ever do. (Apologies for that bit of doom mongering as we all know that Dr Who will come to our rescue in the end).

Thanks (2)
Replying to Tornado:
By Paul Crowley
26th Jun 2024 12:16

AI has no conscience or perception of right and wrong
That is why it would kill people rather than waste money on prisons, if given the decision on best economic outcome.

Thanks (5)
By FactChecker
26th Jun 2024 13:31

The BIG threat is not whether you can rely on the 'research' performed in nanoseconds by AI (or how the results are presented by a ChatBot) ... that IS a worry but it's the trade-off we, as humans, have always endured when an invention (e.g. mechanisation) speeds up what a human can do on their own.

I wasn't around for the arrival of the steam engine nor for the replacement of horse-power by cars, but I clearly remember the despair of my teacher as log tables were thrown out on the arrival of slide-rules ... and am glad I never saw how the arrival of calculators must have challenged his sanity.
And with each new 'thing' the same, quite valid, concern arises ... will the next generation lose the ability to form basic judgements on the results - if they have no understanding of the foundations?

So blind acceptance of whatever is output is a massive danger ... bad enough when simply wrong but unchallenged - far worse when deliberately manipulated.

But the BIG issue (as I hinted at the start of this) is where it has become deliberate policy of large organisations - that are supposed to be there to serve 'us' - to use the technology as an excuse to remove all opportunity for communication with another human (via full digital replacement).
Harra wasn't alone in this - just maybe less adept at hiding his intentions - but *that's* the threat!

Thanks (8)
Replying to FactChecker:
By rememberscarborough
27th Jun 2024 10:42

Blind acceptance never works well, as many will find out if they trust all the "facts" being spouted by ALL prospective candidates in the current general election. Maybe we should have a bet on it....

Thanks (2)
Replying to FactChecker:
By Tornado
27th Jun 2024 13:30

The Lord said "Go Forth & Multiply"

The adders looked a little worried and told the Lord that they were adders and could not multiply.

The Lord looked at them a little wearily and said "Use Logs"

Thanks (4)
Replying to Tornado:
By FactChecker
27th Jun 2024 22:25

Nice to know I'm not alone in the geriatric section on here!

Thanks (3)
By graydjames
27th Jun 2024 09:56

"The AI buck still stops with the humans"

Oh, really, I AM surprised!

Thanks (2)
By AndyTaylor
27th Jun 2024 10:12

Let's be clear, AI is not intelligence. As Kevin notes, "The big step change is that GenAI doesn’t just make decisions about existing data, it can create new content or data"; it just rehashes what has already been done, albeit in a creative way... but that is blind creativity measured against what has gone before. No AI tool has the capacity to think outside its data or its inbuilt algorithms. We have the capacity to do that. Our reaction to each new marketing hype might be different if we just saw it as, say, a new type of spanner. The problem comes when unwarranted reliance is placed on that unthinking spanner. In my view all AI is just a tool that makes guesses from a large pool of data. If your personal circumstances don't happen to lie within the majority chunk of that data you will be ill-served by that AI. In that sense it should be treated with as much care as other useful, but dangerous, tools. Maybe a loaded gun; useful in certain circumstances, but a lot depends on why and how it is pointed and/or used. The buck has to stop with the persons (or organisations) doing the pointing.

Thanks (1)
By Rob Swan
27th Jun 2024 10:27

In this arena accountants and bookkeepers (humans) are here to make DEPENDABLE decisions with real CONSEQUENCES. The conditions under which such decisions are made are 'variable' at best - an incorrect or fraudulent invoice, ever changing tax rules, etc, etc....

AI is not capable of coping reliably with such variability and 'makes stuff up' at will, with no significant checks or balances.

There are - literally - £/$billions in AI, hence all the hype, hoping for a return from wide-scale application and adoption. In many areas the case has yet to be proven.

I am yet to be convinced of any valid reason for using/employing AI, as it currently stands, in accounting or bookkeeping. Quite correctly, you do so at your own risk.

Thanks (0)
By rememberscarborough
27th Jun 2024 10:37

Nothing new here. When spreadsheets first appeared they received a similar sort of praise, then people started to realise they weren't infallible. The old saying about "garbage in, garbage out" always rings true so, if people have anything about them, they'll have a rough idea of what the answer is before they ask the question.

AI may be a really useful tool for future generations but, at the end of the day, it is just a tool and humans still have to do some work much as many might not want to...

Thanks (2)
Replying to rememberscarborough:
By Rob Swan
27th Jun 2024 12:26

I would argue that AI (as we currently know it in this context) is not a 'tool'.

A tool is something you can 'control' in a particular way and 'use' with a degree of skill and/or competence to achieve a desired goal.

AI (in its current form) is a 'guessing' system with uncertain results. Not sure that qualifies as a 'tool'. In this context, those who use it may be the 'tools'!

Thanks (2)
Replying to Rob Swan:
By johnjenkins
28th Jun 2024 09:17

As Mark Lee has stated AI needs at least another 5 to 10 years before it can be relied upon to any degree of certainty. The choice, though, is always ours as to whether we decide to use it or not. I can't see Google going out of business too soon.

Thanks (2)
Replying to johnjenkins:
By Mark Lee
30th Jun 2024 16:48

Wow. John Jenkins seems to be quoting me as an authority with whom he agrees. I'll take that. It may be a first. So far as I can recall, for years, it has felt as if he routinely challenges almost everything I have posted on AccountingWeb. ;-)

Thanks (2)
Replying to bookmarklee:
By Rob Swan
01st Jul 2024 08:01

Many have been there Mark ;)

Thanks (2)
Replying to bookmarklee:
By johnjenkins
01st Jul 2024 08:23

There you go. You got a thanks as well.
I only challenge when it appears you try to bring in the "marketing gimmicks", which, as you well know, I'm totally against, in our profession.

Thanks (2)
Replying to johnjenkins:
By Rob Swan
01st Jul 2024 08:09

One of the problems with AI, particularly in this era of all things Internet and Cloud, is that we DO NOT have any choice. If you use the internet these days, AI is 'digesting' everything you do - whether you know it or not - and so much of what you are 'fed' is spewed out by AI - whether you know it or not. 'Choice'? No!
I agree with Mark Lee; AI needs another 5-10 years.... During that time it will show what it's actually useful for, what it's not useful for, and legislation will catch up with the excesses. Right now AI is running amok - wild west style.

Thanks (1)
Replying to Rob Swan:
By johnjenkins
01st Jul 2024 08:26

The reason why it's running amok is because the "marketing" side of things has taken over. Yes, AI has had some astounding successes in medicine, but that doesn't mean to say it will conquer all.

Thanks (1)
Replying to johnjenkins:
By Rob Swan
01st Jul 2024 12:08

Absolutely John. AI has produced some truly incredible results in specialised areas, medical and elsewhere, but those AIs are not - nor anything like - the kind of 'general' AI that is being touted 'generally' as a solution to ... (a problem which doesn't exist - perhaps).
Although.... to be fair, general AI does seem to be pretty good at exam cheating!

Thanks (0)
Replying to Rob Swan:
By johnjenkins
02nd Jul 2024 09:50

Interesting how AI has produced some varied photos of some of our politicians!!!!!!!!!!!!!!

Thanks (1)