
Academics apologise for AI blunder implicating Big Four


A group of academics has apologised to the Big Four after it was revealed they had used material generated by artificial intelligence to implicate the firms in non-existent scandals via a submission to the Australian Parliament.

6th Nov 2023

You might think Big Four duo KPMG and Deloitte have enough on their plate defending the integrity of their existing services without having to fend off imaginary cases of wrongdoing dreamed up by an over-zealous artificial intelligence (AI) bot.

However, this is exactly what they’ve had to deal with as part of an Australian Parliamentary inquiry into the ethics and professional accountability of the consultancy industry – triggered by the leak of sensitive government tax plans by PwC Australia staff.

Erroneous evidence

A group of accounting academics submitted evidence to the inquiry arguing for more public accountability and tightened regulation of the Big Four, including a structural split of their auditing and consulting wings, and a new independent regulator for the accounting profession.

Unfortunately for the scholars, part of the evidence accused KPMG and Deloitte of involvement in several cases that were either non-existent or with which the firms in question had no connection. When questioned about the erroneous evidence, the academics admitted that part of it had been generated by an AI program, and not cross-checked for accuracy before the submission was made. 

The evidence accused KPMG of being involved in the “KPMG 7-Eleven wage theft scandal” – a scandal that never took place – and also charged the Big Four firm with auditing the Commonwealth Bank during a financial planning scandal, despite KPMG having never acted as auditors for the bank.

Deloitte was flagged in the non-existent “National Australia Bank [NAB] financial planning scandal”, where the AI bot wrongly accused the firm of advising the bank on a scheme that defrauded customers of millions of dollars, and also of falsifying the accounts of Patisserie Valerie – a real case that involved Grant Thornton and KPMG, but not Deloitte.

Regrettable errors

In a supplementary submission to the inquiry, Professor James Guthrie, who had been part of the group of academics preparing the submission, offered an unreserved apology to the firms for the errors and took full responsibility for them. 

He admitted to using the AI-powered Google Bard language model to assist with the submission, stating there had been “much talk about the use of AI, particularly in academia, with much promise of what it holds for the future and its current capabilities.

“This was my first time using Google Bard in research,” he said. “I now realise that AI can generate authoritative-sounding output that can be incorrect, incomplete or biased.”

The inaccuracies have now been removed from the submission.

Guthrie concluded his apology by stating that, while the factual errors were “regrettable”, the substantive arguments for change and the academics’ recommendations for reform remain unchanged.

Important questions about AI use

In a statement responding to the incident, the committee said it had “raised important questions” about the use of AI.

“Emerging tools within the artificial intelligence space, while appearing to reduce workload, may present serious risks to the integrity and accuracy of work if they are not adequately understood or applied in combination with detailed oversight and rigorous fact-checking,” said the statement.

The case is believed to be the first time a Parliamentary committee has had to engage with the use of generative AI in inquiry submissions, which in Australia, as well as the UK, are covered by Parliamentary privilege and free from defamation action.

In a formal complaint to the committee, KPMG Australia’s chief executive Andrew Yates stated that the firm’s reputation was being unfairly undermined.

“We are deeply concerned and disappointed that AI has been relied upon, without comprehensive fact-checking, for a submission to such an important Parliamentary inquiry,” said Yates. “The livelihoods of the more than 10,000 people who work at KPMG [Australia] can be affected when obviously incorrect information is put on the public record – protected by Parliamentary privilege – and reported as fact.”

Unrefined AI tools

Back in the real world, KPMG has found itself hit with a string of disciplinary measures from the regulators over the past 12 months, including a record £21m fine for serious breaches in the audit of collapsed construction company Carillion, a £1m penalty for the audit of the high-street arts, crafts and books retailer The Works, and a £1.25m sanction for audit breaches in the financial statements of LED manufacturer Luceco plc.

Deloitte, meanwhile, ended 2022 with a £900,000 fine from the accountancy watchdog over audit failures of SIG plc and recently landed in hot water with the Ontario regulator after several employees altered computer clocks to “backdate” audits.

Given that many of the generative AI tools available in the current marketplace rely on the input of large volumes of data, it’s perhaps easy to understand how these relatively unrefined, generalist tools put two and two together and came up with 22.

However, for those responsible for checking its output and submitting it to one of the highest authorities in the land, it’s less understandable.

Replies (10)


By Mark Lee
07th Nov 2023 10:00

Dreadful example of how AI sometimes makes things up - AT THE MOMENT. The large language models are still learning and will only get BETTER. They will never again be as stupid as they are now (and they're not that stupid generally).

I find it helps to think of ChatGPT as if it is a bright and eager-to-please 15-year-old with access to a world of data.

It tries to help but needs very clear and specific instructions. It occasionally makes things up when it can’t easily find the facts you have requested, and it hopes you won’t notice. So you always need to check what it produces. You can then ask it if it is sure and to check its sources.

Many people are disappointed with what ChatGPT produces but that’s often because they didn’t provide enough context for their request.

Thanks (1)
Replying to bookmarklee:
By johnjenkins
09th Nov 2023 10:50

I get your enthusiasm Mark, but it's the same old SISO and that cannot be changed, because quite simply the input is either a determination or an interpretation but very few actual facts. Why is that, Mark? It's because everything we have learned about most things is changing, which means those changes will change. For processing loads of data, yes, every time.

Thanks (0)
Replying to bookmarklee:
By Ken Moorhouse
09th Nov 2023 16:38

Mark Lee: I regret to disagree for the simple reason that AI produces a considerable volume of factually inaccurate content in such small amounts of time that inevitably it will be fed back into 'the system' until it is impossible to look back to determine what is true and what is not.

So on that basis the learning will get worse, not better, as there is no current method of 'watermarking' what has been generated by AI and that which has been properly researched. There are sayings about taking shortcuts which should be heeded here.

Thanks (1)
By Justin Bryant
07th Nov 2023 11:47

It's a bit like when people who complain of bad grammar/spelling themselves use bad grammar/spelling. Incidentally, I hear good grammar/spelling is the only currently reliable thing about ChatGPT etc. (which makes it easier to spot unless perhaps you deliberately add bad grammar/spelling to disguise it).

Thanks (1)
Replying to Justin Bryant:
By SteveHa
07th Nov 2023 15:38

Why are you dividing "grammar" by "spelling", and what's the answer?

Thanks (0)
Replying to SteveHa:
By Justin Bryant
07th Nov 2023 16:25

Divide and conquer is my philosophy.

Thanks (1)
paddle steamer
07th Nov 2023 15:46

Yet another example that Douglas Adams was correct about everything.

Thanks (3)
By Calculatorboy
08th Nov 2023 21:28

Come on, it is so obviously artificial and absolutely useless except for some basic rudimentary tasks, and will be so for a long, long time after we are all dead... the main danger is from the idiots relying on it.

Thanks (1)
By Rob Swan
09th Nov 2023 12:55

Just the current new shiny thing. Always 'Great!!' until something goes wrong.

Thanks (0)
By Beach Accountancy
09th Nov 2023 15:13

Millennium Bug
Sarbanes Oxley
360 Reviews
Crypto Currency

all will be "pining for the fiords"

Feel free to add other examples!

Thanks (1)