Academics apologise for AI blunder implicating Big Four
A group of academics has apologised to the Big Four after it was revealed they had used material generated by artificial intelligence to implicate the firms in non-existent scandals via a submission to the Australian Parliament.
You might think Big Four duo KPMG and Deloitte have enough on their plate defending the integrity of their existing services without having to fend off imaginary cases of wrongdoing dreamed up by an over-zealous artificial intelligence (AI) bot.
However, this is exactly what they’ve had to deal with as part of an Australian Parliamentary inquiry into the ethics and professional accountability of the consultancy industry – triggered by the leak of sensitive government tax plans by PwC Australia staff.
A group of accounting academics submitted evidence to the inquiry arguing for more public accountability and tightened regulation of the Big Four, including a structural split of their auditing and consulting wings, and a new independent regulator for the accounting profession.
Unfortunately for the scholars, part of the evidence accused KPMG and Deloitte of involvement in several cases that either never took place or involved other firms entirely. When questioned about the erroneous evidence, the academics admitted that part of it had been generated by an AI program and had not been cross-checked for accuracy before the submission was made.
The evidence accused KPMG of being involved in the “KPMG 7-Eleven wage theft scandal” – a scandal that never took place – and also charged the Big Four firm with auditing the Commonwealth Bank during a financial planning scandal, despite KPMG having never acted as auditors for the bank.
Deloitte was flagged in the non-existent “National Australia Bank [NAB] financial planning scandal”, where the AI bot wrongly accused the firm of advising the bank on a scheme that defrauded customers of millions of dollars, and also of falsifying the accounts of Patisserie Valerie – a real case that involved Grant Thornton and KPMG, but not Deloitte.
In a supplementary submission to the inquiry, Professor James Guthrie, who had been part of the group of academics preparing the submission, offered an unreserved apology to the firms for the errors and took full responsibility for them.
He admitted to using Google Bard, an AI-powered language model, to assist with the submission, stating there had been “much talk about the use of AI, particularly in academia, with much promise of what it holds for the future and its current capabilities.
“This was my first time using Google Bard in research,” he said. “I now realise that AI can generate authoritative-sounding output that can be incorrect, incomplete or biased.”
The inaccuracies have now been removed from the submission.
Guthrie concluded his apology by stating that, while the factual errors were “regrettable”, the substantive arguments for change and the academics’ recommendations for reform remain unchanged.
Important questions about AI use
In a statement responding to the incident, the committee said it had “raised important questions” about the use of AI.
“Emerging tools within the artificial intelligence space, while appearing to reduce workload, may present serious risks to the integrity and accuracy of work if they are not adequately understood or applied in combination with detailed oversight and rigorous fact-checking,” said the statement.
The case is believed to be the first time a Parliamentary committee has had to engage with the use of generative AI in inquiry submissions, which in Australia, as well as the UK, are covered by Parliamentary privilege and free from defamation action.
In a formal complaint to the committee, KPMG Australia’s chief executive Andrew Yates stated that the firm’s reputation was being unfairly undermined.
“We are deeply concerned and disappointed that AI has been relied upon, without comprehensive fact-checking, for a submission to such an important Parliamentary inquiry,” said Yates. “The livelihoods of the more than 10,000 people who work at KPMG [Australia] can be affected when obviously incorrect information is put on the public record – protected by Parliamentary privilege – and reported as fact.”
Unrefined AI tools
Back in the real world, KPMG has found itself hit with a string of disciplinary measures from regulators over the past 12 months, including a record £21m fine for serious breaches in the audit of collapsed construction company Carillion, a £1m penalty for the audit of the high-street arts, crafts and books retailer The Works, and a £1.25m sanction for audit breaches in the financial statements of LED manufacturer Luceco plc.
Deloitte, meanwhile, ended 2022 with a £900,000 fine from the accountancy watchdog over audit failures of SIG plc and recently landed in hot water with the Ontario regulator after several employees altered computer clocks to “backdate” audits.
Given that many of the generative AI tools available in the current marketplace rely on the input of large volumes of data, it’s perhaps easy to understand how these relatively unrefined, generalist tools put two and two together and came up with 22.
However, for those responsible for checking the tools’ output before submitting it to one of the highest authorities in the land, it’s less understandable.