
Can ChatGPT write content for your practice?


With artificial intelligence developing at what feels like breakneck speed, will it ever be able to produce credible written material for your firm?

31st May 2023

Artificial intelligence (AI) has dominated recent conversations about tech, with OpenAI’s ChatGPT taking the spotlight since its launch in November 2022.

The launch of GPT-4 in March brought yet another stride forward, demonstrating greater capability than its predecessors across a range of tasks, from passing legal exams to interpreting visual inputs.

It still has its limits – and so far, it can’t actually do your clients’ taxes – but it’s not hard to imagine a range of other tasks AI could perform to relieve the workload for busy accounting firms.

One widely discussed possibility is using it to write content for accountants’ websites, as well as marketing materials or basic communications.

As an editor at a digital marketing agency, you might expect me to have some reservations about that idea. But the reality is, generative AI is a powerful tool for content production, and its potential to assist and automate parts of the writing process can’t be ignored.

Writing high-quality, useful content takes time that not everyone has to spare. It’s certainly appealing to think that with a quick question and the click of a button, you could generate an article, website page, email or social media post in a matter of seconds and at little to no cost.

The question is, can AI do the job for you just as well as a human can?

Understanding ChatGPT

Before trying to answer that question, it’s important to think about how the technology behind tools like ChatGPT works, and what it’s actually doing when it writes a piece of content for you.

I’m no software engineer, and I won’t pretend to be an expert on the technical details of generative AI, but here is a top-level explanation. GPT-4 and the rest of the GPT series are large language models (LLMs): systems trained on vast volumes of text to predict the most probable next words in a sequence. You could think of one as a far more powerful version of your phone’s predictive text function, trained on information from across the internet.
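
To make the predictive-text analogy concrete, here is a toy sketch in Python of the core idea: count which word tends to follow which in some sample text, then predict the most frequent follower. The sample sentences are invented for illustration; real LLMs learn billions of parameters rather than simple counts, but the underlying task – predict the next word – is the same.

```python
# A toy illustration of next-word prediction, the idea behind LLMs.
# Real models like GPT-4 learn billions of parameters from huge text
# corpora; this sketch just counts word pairs in two invented sentences.
from collections import Counter, defaultdict

sample_text = (
    "the client filed the return on time "
    "the client paid the invoice on time"
)

# Count which word follows which (a simple "bigram" model).
follows = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed next word, if any."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # 'client' - it followed 'the' most often
print(predict_next("on"))   # 'time'
```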

Simpler language models have been around for some time, with early examples emerging in the 1950s, but a few things make these more recent developments different. A major difference is their size.

Size matters

Each iteration of GPT, for example, has been bigger and more powerful than the last. GPT-1 had 0.12bn parameters and GPT-2 1.5bn; GPT-3 leapt up to 175bn parameters, while GPT-4 is rumoured to have a trillion. According to a paper by OpenAI, GPT-3 was trained on around 45TB of text data before filtering, drawn from various online sources including books, articles and websites.

Another important shift in the AI landscape has been the move towards more generalist tools, rather than those designed to complete a specific task. 

Is ChatGPT speaking your language?

For a number of years, language models were usually built and trained for specialist purposes such as speech recognition, image recognition or natural language processing, but were limited in their other uses. Now, tools like ChatGPT aim to perform a wide variety of tasks and respond to prompts on a range of topics.

Put together, this means that when we’re using ChatGPT, we’re giving a prompt to a generalist language-generation machine, trained on vast amounts of data to produce the most likely series of words in response.
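
For anyone curious what “giving a prompt” looks like outside the chat window, here is a minimal sketch using OpenAI’s Python library as it stood at the time of writing (the pre-1.0 interface); the API key, model name and prompts below are placeholders, not recommendations.

```python
# A minimal sketch of prompting ChatGPT programmatically, using the
# openai Python library's pre-1.0 interface (current when this article
# was written). The key, model name and prompts are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder - never hard-code real keys

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative; "gpt-4" if you have access
    messages=[
        {"role": "system",
         "content": "You write clear, friendly content for an accountancy firm."},
        {"role": "user",
         "content": "Draft a short client email about the new tax year."},
    ],
    temperature=0.7,  # higher values produce more varied wording
)

# The model returns its most probable continuation of the conversation.
print(response.choices[0].message.content)
```

Whatever comes back still needs human review: as the limitations below show, the output reads fluently whether or not it is accurate.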

As it turns out, machines that can effectively process the syntax and meaning of language are able to display an impressive range of other abilities – more than researchers in the field were previously able to predict.

But they are still limited. When ChatGPT responds to your prompt, it’s not thinking, reflecting or analysing a question for itself, or applying experience in the way a human does: it’s using statistical models and the data it was trained on to respond in a way that’s likely to be relevant.

Limitations of ChatGPT

Because of the way ChatGPT works, there are a number of drawbacks to consider. These include the following.

  • Accuracy issues: Even though ChatGPT is trained on a large amount of data, it’s not guaranteed to be accurate. To make things harder, it’s good at sounding convincing even when it’s wrong, and it doesn’t include citations for its sources. Another point worth noting is that it lacks data on events after September 2021, so it might not give you an accurate response on current or recent events.
  • Generic answers: As we’ve discussed, ChatGPT doesn’t apply actual knowledge or life experience like a human does. This can result in outputs that feel like stock answers, lacking the analysis and reflection of good opinion-based content – or the emotional engagement of expertly crafted text.
  • Lack of tonal nuance: Similarly, AI can’t apply human levels of judgment to tone, which means it might miss those small nuances that make your voice distinctive. You can prompt it to some degree, but it won’t always hit the mark. This could be particularly risky when dealing with sensitive or emotive topics.
  • Plagiarism risks: ChatGPT’s responses are ostensibly unique, but because it draws on a pool of data for its training, there’s a chance it might produce wording that’s very similar to the source material. This is worth looking out for with niche topics in particular, where there’s limited information for it to draw on.
  • Bias: Another flaw in generative AI comes from something humans themselves are susceptible to – bias. Studies have shown ChatGPT to reflect harmful stereotypes relating to gender, disability and religion, for example, while other research has suggested discriminatory biases are “inherent” in the model.

As generative AI technology develops, we can expect tech companies to address many of these drawbacks. But for the time being, at least, it’s important to be cautious when using tools like ChatGPT to create any content you plan to share online.

At PracticeWeb, we’ve been testing out the capabilities of ChatGPT for content writing over the past few months, and we’ve found it’s most useful as a tool to support our work and augment the skills of our team. 

It can assist in planning by drawing up suggested structures for content, or simply act as a device to spark inspiration and combat writer’s block. But it’s never a total replacement for a writer, or a shortcut to creating quality content. 

Get in touch to talk about content for your practice.

Replies (6)

By Hugo Fair
03rd Jun 2023 19:17

"With artificial intelligence developing at what feels like breakneck speed, will it ever be able to produce credible written material for your firm?"

Credible = Yes (it can do that - most of the time - now).

But ... reliable? / accurate? / inoffensive? = the jury is out (and it's looking less promising every day).

How about arguably the most important aspect of any form of marketing communications (or MarComms as it used to be known in my day)?
Whatever the style and culture of the organisation (and its products/service) being promoted, the key requirement in any professional environment will be to position those values in a way that engages with the minds (and hopes & other emotions) of your target audience.
Whether that's positive hopes & potential outcomes or negative snipes at the competition or just sowing the seeds of FUD ... it's all about understanding the humans at either end of the potential transaction.

Not all marketing achieves this optimally (or sometimes at all).
But you can guarantee with current levels of technology development that whatever the I in AI is perceived to mean by those pushing it, it does NOT include even a smattering of understanding.

Thanks (4)
Replying to Hugo Fair:
By ourpetsheadsarefallingoff
06th Jun 2023 10:05

Hugo, what do you think about the potential of an accountancy-tailored LLM to replace accounts senior roles in the coming years?

It seems plausible to me that an LLM could be trained on accountancy data to review a client's Xero output, query anything that looks unusual (based on comparatives and sector norms) in plain English with the client, and produce a set of "more correct" draft accounts for partner review. This would replace a large chunk of senior work.

Thanks (0)
Replying to ourpetsheadsarefallingoff:
By Hugo Fair
06th Jun 2023 14:04

Well ... the rate of change (and unexpected side roads) within technology makes any attempt at forecasting more than fraught with the usual uncertainties.

But ...
* Many groups are working on more 'focussed' LLMs, work which has already identified two (fairly obvious) issues:
a) the need to keep all the input factual (in this case statute and case law), as the inherent lack of 'understanding' (still a human attribute) makes the results of 'interpreted' opinions highly unreliable;
b) the need to keep the source material (and subsequent 'reading' of it) up-to-date with great frequency (in this case not just changes in law but in Xero's data schema or I/O formats).

* Those are just the 2 most immediately identified 'shortfalls' that development as you envisage would need to accommodate/address ... but no doubt others will emerge (as will potential 'solutions' that have a tendency to bring with them other unexpected issues)!

Basically, you're quite right that there's a possibility of what you describe coming to pass ... but it would be foolish to bet everything on the promise without some significant evidence of progress, as HMRC is prone to doing.

Thanks (2)
Replying to Hugo Fair:
By Hugo Fair
06th Jun 2023 14:12

BTW your 'proposal' contained two parts:

1. "query anything that looks unusual (based on comparatives and sector norms) in plain English with the client" and
2. "produce a set of "more correct" draft accounts for partner review".

1 - isn't beyond the capabilities of current developers (without any involvement of AI) ... it's just hard work for them, and they don't believe it would increase sales;
2 - is a leap too far (for me, although possible with AI) ... as I trust the judgement of a human (who I can quiz if necessary), but not that of a closed-box machine.

Thanks (2)
By Robbine
05th Jun 2023 13:12

All that is OK, but the conversational use of language is quite different with this AI; in other contexts it is good.

Thanks (0)
By Robbine
05th Jun 2023 13:13

Thank you for the information, I will get all my answers.

Thanks (0)