
The rise of AI in accountancy: Augmentation or replacement?


The latest version of ChatGPT has added fuel to the debate about how artificial intelligence could disrupt the accountancy world. But does the profession really have anything to fear from the rise of AI?

12th Apr 2023

Capable of reading and interpreting vast quantities of data, ChatGPT has already demonstrated its transformative potential. Not only is the latest version of ChatGPT (developed by OpenAI) now smart enough to pass a sample ACA Assurance paper, it can also pass a simulated US bar exam for prospective lawyers, scoring in the top 10%.

So, what does this mean for the future of the accountancy profession and should ChatGPT be viewed as a threat or an opportunity?

The productivity potential

First off, the theory: with the right training, ChatGPT is more than capable of automating many of the time-consuming manual tasks that hamper the productivity of today’s accountants. For example, it could review hundreds of invoices in a matter of minutes to determine which are allowable and which are not, or resolve tax coding problems at speed.

For a profession increasingly being asked to undertake a range of new responsibilities in addition to its traditional computational activities, artificial intelligence (AI) could be the answer to free up the capacity and time needed to tackle newly acquired duties, like the preparation of SAO reports.

Similarly, when it comes to preparing a simple corporate tax return, ChatGPT could competently perform many of the repeatable tasks in minutes rather than hours.

The productivity benefits that this alone could generate are significant. The application of ChatGPT to manual and repetitive tasks that require minimal accountancy skills or tax know-how could prove highly beneficial.

Augmentation not replacement

However, as of today, the answers provided by ChatGPT -- and therefore its overall output -- are not 100% accurate. This means that it will need a lot of fine-tuning and some careful consideration of which tasks it’s given to tackle.

Another complexity is identifying when it does get something wrong. Out of the box, ChatGPT simply states an answer, and a human either needs to trust it (sometimes, at their peril!) or check it, which defeats the point.

Early applications could therefore include sensible “guard rails”: in the instances where it is unable to resolve a query or arrive at a definitive answer, it flags the case for a human specialist to handle, for example.
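A guard rail of this kind could be sketched as a simple confidence threshold that routes uncertain answers to a human reviewer. This is purely illustrative: the names (`Answer`, `route`) and the threshold value are assumptions for the sketch, not part of ChatGPT or any real product.

```python
# Illustrative "guard rail": accept an AI verdict only when its confidence
# clears a threshold; otherwise flag the case for a human specialist.
# All names and figures here are hypothetical.

from dataclasses import dataclass

@dataclass
class Answer:
    verdict: str       # e.g. "allowable" or "not allowable" for an invoice
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

CONFIDENCE_THRESHOLD = 0.95  # the kind of tolerance a firm might set

def route(answer: Answer) -> str:
    """Accept the AI's verdict only when confidence clears the bar;
    otherwise flag the case for human review."""
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        return answer.verdict
    return "flag_for_human_review"

print(route(Answer("allowable", 0.99)))      # high confidence: accepted
print(route(Answer("not allowable", 0.60)))  # low confidence: flagged
```

In practice the hard part is the confidence score itself, since a model's stated certainty is not always well calibrated, which is why the human fallback matters.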


Given all of the above and the complexity of government compliance systems, it is unlikely ChatGPT will be able to do complex tax returns anytime soon or make informed decisions on grey areas where human oversight, experience and expert judgment are needed. Plus, there’s the issue of how to go about vetting any complex determination made by a technology such as ChatGPT.

What's more likely, therefore, is that instead of sweeping away the accountancy profession, ChatGPT could act as an 'AI co-worker', used to tackle everyday tasks in ways that are highly efficient and cost-effective, as well as analyse vast tranches of data.

Looking ahead

Tools like ChatGPT are already augmenting activities and having an impact on everyday work scenarios. Plus, it’s a technology that’s affordable enough to be harnessed by time-strapped professional teams in firms, large and small.

Even with the best training it’s unlikely ever to be 100% correct, and the tax world will increasingly look to HMRC for guidance on what an acceptable tolerance might be for tax returns going forward – 98% or 99% correct, for example?

Until then, ChatGPT looks set to further enrich many of the software tools and platforms professionals already depend upon to work faster and more effectively.

Replies (5)


By Hugo Fair
12th Apr 2023 13:52

I'm no expert in this area, but I do a LOT of reading about it in serious science journals and disagree with the opening 10 words of the article:
"Capable of reading and interpreting vast quantities of data, ChatGPT .."

To be precise, the word 'interpreting' is misleading.
ChatGPT does indeed start with a lot of reading but it then moves on to predictive capabilities (in essence forecasting the most likely next word/phrase in the given context), before using a quite separate (and stunning) ability to format the result into a (user-selected) style of human grammar.
That first step is impressive (as is Google Search) - and the final step is what gives journalists lots of room to anthropomorphise what is simply a set of algorithms that could no more interpret (in the sense of understand) a piece of music than a non-sentient rock could.
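The "forecasting the most likely next word" step Hugo describes can be illustrated with a toy sketch: the model assigns probabilities to candidate continuations of a context and emits the most likely one. The contexts and probabilities below are invented for illustration and bear no relation to a real language model's scale.

```python
# Toy illustration of next-token prediction: score candidate continuations
# of a context and pick the most likely. Real LLMs do this over tens of
# thousands of tokens with learned probabilities; these figures are made up.

toy_model = {
    ("the", "tax"): {"return": 0.6, "year": 0.3, "rock": 0.1},
}

def predict_next(context: tuple) -> str:
    """Return the highest-probability continuation for the given context."""
    candidates = toy_model[context]
    return max(candidates, key=candidates.get)

print(predict_next(("the", "tax")))  # -> "return"
```

Nothing in this loop "understands" tax; it only ranks continuations, which is Hugo's point.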

A few recent quotes:
* Mhairi Aitken (Alan Turing Institute):
"AI will always be programs that do what they're programmed to do. What they're programmed to do is to mimic human language or outputs, and they're getting increasingly good at it. So it becomes increasingly convincing (but is not AGI - artificial general intelligence)."

* Microsoft spokesperson:
"Our development of AI is centred on amplifying, augmenting and assisting human productivity and capability. We are creating platforms and tools that, rather than acting as a substitute for human effort, can help humans with cognitive work."

* OpenAI spokesperson:
"AI is still prone to 'hallucination' - the phenomenon where an AI, in response to prompts, will produce convincing statements that are actually inaccurate or totally false."

In short, to imbue any AI, let alone the chat variants, with a human-like ability to understand and interpret may be the dream of some ... but it's not around the corner (and may never be).

Thanks (5)
Replying to Hugo Fair:
By ourpetsheadsarefallingoff
18th May 2023 16:29

Hugo, I know I'm a bit late, but as a regular reader of your comments I'd love to hear your thoughts on my hypothesis - that LLM tech is already very close to being able to replace the majority of junior and senior roles.

With modern clients now typically doing their own bookkeeping via cloud software, the senior's role is to identify errors of entry, query the solutions with the client, and post the corrected data into statutory format.

If accounting software was embedded with an LLM tailored to accountancy, could it not easily assess bookkeeping data for what "looks wrong" (based on comparatives or sector norms via trained data), query with the client in plain English, then produce reasonably correct draft accounts for a partner review? It could then take review notes from the partner and produce final accounts. No more difficult for the partner or the client than dealing with a human senior, and much cheaper.
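The "looks wrong" check described in this hypothesis could be sketched as a comparison of each expense category against a sector norm, flagging outliers for a plain-English query to the client. The norms, tolerance and figures below are entirely invented for illustration; no real accounting software is being described.

```python
# Illustrative sketch of the "looks wrong" check: flag expense categories
# that exceed a multiple of the sector norm (share of turnover).
# All norms and figures are hypothetical.

SECTOR_NORMS = {"travel": 0.05, "entertaining": 0.02}  # share of turnover

def flag_anomalies(turnover: float, expenses: dict, tolerance: float = 2.0) -> list:
    """Flag categories spending more than `tolerance` times the sector norm."""
    flags = []
    for category, amount in expenses.items():
        norm = SECTOR_NORMS.get(category)
        if norm is not None and amount > tolerance * norm * turnover:
            flags.append(category)
    return flags

print(flag_anomalies(100_000, {"travel": 4_000, "entertaining": 9_000}))
# -> ['entertaining']  (9,000 exceeds 2 x 0.02 x 100,000 = 4,000)
```

An LLM layer would sit on top of a rule like this, turning the flag into a plain-English query to the client rather than doing the arithmetic itself.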

Personally, I think we could only be a few years away from practices being heavily downsized due to AI.

Thanks (0)
By paul.benny
12th Apr 2023 15:00

I see a risk that man-down-the-pub as advisor will be replaced by ChatGPT-generated guidance that sounds credible and authoritative. But may still be wrong.

And because it sounds credible and authoritative, we may too readily accept output from AI as being right.

Thanks (1)
Replying to paul.benny:
By Mark Lee
13th Apr 2023 18:00

paul.benny wrote:

I see a risk that man-down-the-pub as advisor will be replaced by ChatGPT-generated guidance that sounds credible and authoritative. But may still be wrong.

I agree Paul. That is very likely. Doctors have long had to endure patients quoting Dr Google. The complexity of accounts and tax hasn't enabled many clients to quote Google search results to their accountants. One of the things they can already do with ChatGPT is ask for advice written so that it can be understood by a 12-year-old (for example). But it may not be correct (yet).

Thanks (0)
By moneymanager
13th Apr 2023 11:44

The reason that we have successful taxpayer appeals to both HMRC's internal decision and arbitration processes and the FTT is that HMRC itself does not understand either its own practice or the law. The latter is only ever determined by judges, who in turn fail, which is why we have the higher courts. What makes anyone think that a computer, however remote but still in origin the product of a human mind, would be any better?

Thanks (0)