All aboard the hype cycle of AI
AccountingWEB’s editor at large, John Stokdyk, abandoned his stance of detached neutrality when AI bots started sending out promotional press releases.
When your stock in trade is to be a “seen it all before” cynic, it’s difficult to adjust to circumstances that you haven’t seen before, like global pandemics, or generative artificial intelligence (AI) that can create convincing simulations of human content.
The trigger point for my discomfort is the latest iteration of OpenAI’s language engine, ChatGPT-4. As we read recently, the bot achieved a pass mark in ICAEW accountancy exams and has successfully completed a range of law and medical licensing tests.
I can play the world-weary old-timer on this score. Chatbots featured heavily in AccountingWEB coverage as far back as 2016, when Unit4 launched its Wanda app. Sage responded with its bot Pegg, followed swiftly by Xero and QuickBooks. No business app was complete, it seemed, without a companion digital assistant. Yet where are they now?
The flash-in-the-pan experience is typical of the technology hype cycle devised by industry analyst Gartner, which currently shows generative AI to be nestling just below the famous peak of inflated expectations.
Breaking the cycle
But could the latest generation of AI confound the experts and disrupt established technology adoption patterns? The point at which my wry amusement curdled into existential dread was when I received a press notice from an organisation called Newsmatics proclaiming that it had created an AI-powered press release generator.
It’s not so much the potential loss of professional standing and employment opportunities that bothers me, it’s the prospect of being swamped by mountains of bot-generated guff alongside the promotional deluge we already get from human PRs.
As many commentators have pointed out, ChatGPT and its ilk are not the founts of profound knowledge that Douglas Adams imagined with the Deep Thought supercomputer in Hitchhiker’s Guide to the Galaxy.
Instead, they digest huge quantities of digitised language and use machine learning pattern recognition to match recurring phrases that have been deployed in answer to similar queries before. ChatGPT-4 doesn’t “understand” the content it’s producing; it compiles a convincing sequence of words to meet the specified inputs. To this end, the superbot will occasionally invent its own bogus quotes and citations to make the content look more authoritative. Have a look at Bright Group’s ChatGPT-4 2023 Budget predictions post for an instructive example.
ChatGPT-4 may pass accountancy and law exams, but it can’t understand the client’s situation, interpret their desires and formulate a technically correct, ethically sound path for them… yet.
Returning to the press release generator, Microsoft has put $10bn into OpenAI, and Google’s parent company Alphabet is also staking out its claim on this territory. Both companies’ search engines use machine learning to rank online search results. It isn’t hard to imagine these systems responding more favourably to content produced by closely related language models, crowding out more meaningful human insights. Thanks to generative AI, we now face the prospect of being deluged by a torrent of drivel on an incomprehensible, industrialised and self-perpetuating scale.
Sorry if that sounds apocalyptic, but the paranoia is based on more than 20 years’ experience of Google’s prejudiced search optimisation algorithms on AccountingWEB.
I’m not the only one to feel this creeping unease. In a recent rumination on FT.com, early mover and “State of AI” report author Ian Hogarth voiced his fears around the potential capabilities of what he calls “God-like AI” (or AGI – artificial general intelligence – as it is known in the trade).
Hogarth has been backing AI tech companies since 2014. Along with 1,800 other signatories including Elon Musk, Apple co-founder Steve Wozniak and scientist Gary Marcus, he put his name to a public letter calling for a six-month moratorium on AI development to assess the underlying risks and ethical concerns.
Shoggoth with a smiley face
Their main fear has been aired many times before: that the speed of technology development in this area is racing ahead of social, environmental and regulatory responses. “Consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight,” Hogarth wrote.
The article is illustrated with a “Shoggoth” image, where the public-facing toy of ChatGPT-4 is represented as a smiley face being manipulated from behind by a giant, slobbering golem. The monster represents the giant technology companies that have absorbed the most cutting-edge AI developments into their growing empires.
There might be a hint of personal interest in my stance, but the information era has seen a marked increase in economic inequality in favour of global tech giants. The companies that prospered from this shift do not have the interests of wider society at heart. As well as unleashing all sorts of unanticipated consequences, my recurring fear is that ChatGPT and its AI descendants could become the vehicles for another, even more damaging wave of monopoly control and exploitation.
Editor's note: The paragraph on ownership of OpenAI has been corrected in response to the error pointed out by Paulwakefield1 below.
John Stokdyk sadly passed away in June 2023. He had been with the site since 1999, rising from news editor to editor in chief, global editor and head of insight. As a roving editor, he investigated the profession's use of technology around the world. He devoted his spare time to technology history and an oddball collection of stringed...