
Rethinking regulation and taxation in the age of AI


Few technologies promise to be as disruptive as artificial intelligence. Here Bill Mew considers the wider societal implications that might include the need to completely rethink regulation, taxation and a great deal more besides.

14th Jun 2023

Everyone is talking about artificial intelligence (AI). Much of the debate has centred on how AI might lead to the end of the world – or at least might lead to the loss of many jobs in a host of areas, including white-collar professions like bookkeeping and accounting. 

The immediate reality is somewhat different. There is no real threat of extinction from the current generation of AI tools as they lack agency. On top of this, most jobs are still relatively secure as the current generation of Large Language Models (LLMs) are still fairly inaccurate and, while they are good at providing very plausible answers, these cannot yet be relied on for accuracy. However, like any other new technology, AI will continue to improve until it starts to have a real impact, but how great will this be? 

Employment changes

A report from Goldman Sachs estimates around 300m jobs could be affected by generative AI, meaning 18% of work globally could be automated, with advanced economies more heavily impacted than emerging markets.

Researchers from the University of Pennsylvania and OpenAI found that around 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while 19% of workers may see at least 50% of their tasks impacted. The projected effects span all wage levels, with higher-income jobs facing greater exposure: some educated white-collar workers earning up to $80,000 a year are among those most likely to be affected by workforce automation.

While embracing new technologies is essential, it does not come without impact. On the job front, there is a pattern that has been repeated with all technology revolutions. There are three cohorts: 

  1. those who get the technology and thrive with its use
  2. those who could get the hang of the technology with some help, education or support
  3. those who will never get it and will be left behind.

While enabling the first group and educating the second is important, the biggest challenge in all previous technology shifts has been how to compensate the last group – the ones who will be disenfranchised.

In previous technology shifts it has been workers in agriculture or manufacturing that have been most heavily impacted, but this time round many higher-paid white-collar roles will be affected instead. Highly skilled roles that require science and critical thinking should be safe, but those requiring basic accounting, programming or writing skills will be easily automated.

There will also be knock-on effects. As employment opportunities for paralegals or bookkeepers disappear, the progression path to more skilled legal and accountancy jobs will change dramatically. And while performance management in white-collar roles may previously have put the bottom 10% of performers at risk, automation could put all but the top 10% at risk.

If you are not in that top 10% and your scientific or critical thinking skills aren’t significantly greater than all your peers, then your own role might well be at risk.

If that’s not scary enough then there’s an even bigger challenge with this technology shift — that of regulation and taxation.

Creating regulations

We have seen with GDPR that it is not all that hard to create regulations for intangible aspects of data use or technology, but it can be almost impossible to enforce these regulations (as we have seen with trying to hold Big Tech firms to account for various privacy abuses).

The challenge with creating and then enforcing regulation on AI will be far greater. Even the leading experts in AI admit that they cannot always tell how answers are generated or predict what the results will be. If you are going to use AI for business-critical systems then how do you think you’ll be able to ensure that it complies with any regulations or even tell if it is actually doing so? 

It is not just being able to eliminate biases or enforce regulations – you will also struggle to be able to tell if your system has been compromised or if it is being manipulated in some way. And it is not just inefficiencies and mistakes that you need to worry about, but also external threats from cyber criminals as well as internal ones from rogue employees.

And if individual businesses are not in complete control then how do you expect under-resourced regulators to regulate or control the market as a whole?

Erosion of taxation

The AI revolution will see jobs that pay wages and taxes replaced by AIs that pay neither. 

E-commerce led to an initial wave of automation and off-shoring, and cloud computing has had a similar impact, but AI will lead to a far greater erosion of jobs, wages and taxation. The immediate post-pandemic cycle saw a shift in the power balance from capital to labour as we saw an environment of almost full employment and rising wages in which employers had to tempt people to fill job vacancies. In a few years’ time, AI will shift the balance significantly in the opposite direction.

In previous technology shifts, we have always counted on being able to support those who have been disenfranchised by any revolution. Wages and taxes from any new jobs created have been used to fund programmes to support those left behind. This time round, however, only a limited number of new jobs will be created, while a far higher level of value creation will be automated and taken offshore.

We have already struggled to get the technology giants to pay tax – Amazon’s main UK division paid no corporation tax at all in 2022 or 2023. And a “Google tax” introduced by the coalition government to crack down on multinationals shifting profits overseas has been a total failure. Officials predicted that the tax would raise up to £400m a year, but recent figures show that revenues have slumped to zero, and it is not expected to raise any money in the years ahead either.

Often in the past, there have been predictions that jobs would be hollowed out and with them the traditional tax base of many Western economies. When much of our manufacturing was hollowed out in this way, white-collar jobs in service industries filled the void. This time it is those white-collar roles that are under threat and there will be little to replace them.

Heavily indebted governments will be under greater pressure than ever to rethink the way that they tax value creation and to shift the tax burden further from labour to capital. Any initiatives, such as wealth taxes, will need to be far better thought out than the botched “Google tax” was.

While Big Tech will be swept up in the biggest shake-up of cross-border tax regulation in years (the proposed Beps 2.0 reforms from the Organisation for Economic Co-operation and Development, or OECD), it is not yet clear whether this will be any more effective than Beps 1.0 in ensuring that these firms pay their fair share of tax. The OECD has estimated that profit-shifting practices already cost about £190bn a year worldwide in lost tax revenue.

How to survive

In offices across the country, just like yours, professionals will be asking themselves “How can I protect my job from the impact of AI?” The answer for most will be that those who are quickest to master any new technology and exploit it to best advantage will be the most likely to survive.

And in the capitals of all Western economies, policy wonks and politicians will be racing to regulate AI, while probably failing to consider how any regulation can be effectively enforced.

Meanwhile in HMRC and the Treasury, if “taxAItion” isn’t already a big topic of debate, then it soon will be.

Replies (12)


By JustAnotherUser
14th Jun 2023 15:50

We definitely need to regulate the social side of AI; we're not far off a teenager being able to use cloud computing to generate a realistic video, with realistic audio, of a celebrity or politician doing and saying anything they want them to... (dare I say we are there now)

If we look at the influence in recent years from Cambridge Analytica and Brexit, Trump, Boris etc, this was all done before the current AI era.

We need to regulate this, maybe in the way product placement ads have to tell you, or create legislation on how AI can be used to copy someone's likeness. I don't have all the answers of course.

Thanks (0)
By Tornado
14th Jun 2023 16:15

Successive Governments fail to accept that there should be a change in the operation of taxation and social security. It is plain for all to see that both are taxes, administered by the same authority, and it would make a great deal of sense to amalgamate the two into a simpler combined system.

I believe that no Government has even suggested this as it would mean a massive number of redundancies in the Civil Service, with one complete department effectively no longer required. Political suicide for any party that did this.

Put this situation to AI and I think it would come to the same conclusion but I would like to see what it would say about the massive social disruption of people being made redundant without a hope of getting another job and having no income to live on let alone spend. Also no income to tax.

A basic law of economics (I seem to remember) is that it is not the amount of money in an economy that is important, but the velocity of that money: how quickly it circulates from person to person and business to business, each time creating something that was not there before. People saving money brings on stagnation, but spending money creates great activity (HS2) and keeps the economy going.
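The relationship the commenter seems to recall is the textbook quantity theory of money, MV = PQ. A minimal sketch, using invented illustrative figures rather than real UK data:

```python
# Quantity theory of money: M * V = P * Q
# M = money supply, V = velocity of money, P = price level, Q = real output.
# Rearranging: velocity is nominal GDP (P * Q) divided by the money supply.

def velocity_of_money(nominal_gdp: float, money_supply: float) -> float:
    """Average number of times a unit of money changes hands per period."""
    return nominal_gdp / money_supply

# Illustrative (made-up) figures: £2.2tn nominal GDP, £1.0tn money supply
print(velocity_of_money(2.2e12, 1.0e12))  # 2.2
```

A higher velocity means each pound supports more transactions per year, which is the commenter's point about circulation mattering more than the stock of money itself.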

Will AI be able to balance the need for people to work or to 'produce' and earn to spend against the potential savings of not using people to do things? This, in my opinion, is why the logical solution is not always the best. Or will AI not be able to recognise these essential human features of an economy and plan accordingly?

Unless AI is programmed with all the emotions and faults of humans, then inevitably it will fail to understand us and our non-logical way of thinking and will either eliminate us from its thought processes, or will be eliminated by us when we pull the plug (that would take it by surprise).

What is so bad about the way we think as humans that AI thinks it can do better?

Thanks (1)
Replying to Tornado:
By JustAnotherUser
15th Jun 2023 10:28

"What is so bad about the way we think as humans that AI thinks it can do better?" (I asked it and it replied...)

As an AI, I don't possess personal opinions or beliefs, but I can provide some insights on the topic.

When AI proponents discuss the potential benefits of artificial intelligence, they often highlight certain areas where AI systems may outperform human thinking. Here are a few aspects where AI may have advantages:

Processing power: AI systems can quickly analyze vast amounts of data and perform complex calculations with precision and speed, often surpassing human capabilities. This allows them to identify patterns, trends, and correlations that might be difficult for humans to detect.

Lack of bias: Human thinking is prone to various cognitive biases, such as confirmation bias, anchoring bias, or availability bias, which can influence decision-making and judgment. AI, when properly designed and trained, can be programmed to minimize or eliminate biases, leading to more objective and consistent outcomes.

Consistency and reliability: Human thinking can be influenced by emotions, fatigue, distractions, or other external factors that can impact decision-making. AI systems, on the other hand, can operate consistently and reliably without being affected by such factors, leading to potentially more accurate and dependable results.

Speed of learning and adaptation: AI algorithms can learn from vast amounts of data and adapt their models quickly. They can identify patterns and make predictions based on a large sample size, which can lead to better decision-making in certain domains.

However, it's essential to recognize that AI systems also have limitations and potential pitfalls. They lack certain human qualities, such as creativity, empathy, and common sense reasoning, which can be crucial in many complex situations. Additionally, ethical concerns, interpretability issues, and the potential for unintended consequences are areas that need careful consideration when developing and deploying AI systems.

It's important to note that the goal of AI is not necessarily to replace human thinking entirely but to augment it by leveraging the strengths of both humans and machines. By combining human expertise, intuition, and moral judgment with AI's computational power and analytical capabilities, we can potentially achieve more effective and well-informed decision-making across various fields.

Thanks (0)
Replying to JustAnotherUser:
By Tornado
15th Jun 2023 10:44

Basically what I said.

The danger here, though, is us thinking that this has been written by a human or is some sort of definitive answer, when it is actually just a lot of information harvested from the internet and put into some sort of logical order.

It would be really interesting to see what happens if this answer was given to a completely different AI program to review and comment on. It might give a different answer or, indeed, offer an opinion on the accuracy of the original answer.

Thanks (1)
By Ivor Windybottom
15th Jun 2023 10:36

It seems that transaction taxes like VAT will become increasingly important to help fund the displaced knowledge workers.

The benefit of taxes like VAT is that they cannot be dodged as easily as offshored corporate profits, since the consumer pays. It may even require direct payment to HM Govt, as has been considered, but it will be a game of catch-up.

Government always moves too slowly - for example, as electric cars become dominant there is a significant loss of fuel duty to pay for the roads, etc. yet no model for collecting an effective "road tax" is in place yet.

Thanks (0)
Replying to Ivor Windybottom:
By Tornado
15th Jun 2023 10:58

"Government always moves too slowly - for example, as electric cars become dominant there is a significant loss of fuel duty to pay for the roads, etc. yet no model for collecting an effective "road tax" is in place yet."

I think you will find that charging points, both private and commercial, will already have facilities to measure the amounts of electricity being used as fuel, which will allow the electricity supplier and/or Government to add fuel taxes in the future. Fuel for electric motoring may be cheap at the moment but I think it will be an important source of taxes in the near future and the Government are well on top of this.

After all, prior to the energy crisis, who would have thought that a smart meter could be converted to a prepayment meter almost instantly remotely by the Electricity Supplier and they probably have a number of other hidden 'features' as well.

You do not require AI to think up these ideas, but AI might well give you an honest and detailed assessment of what a smart meter can do if you ask it, or indeed, what else a car charging point can do that we do not yet know about. Being able to only provide a factual and detailed honest answer could be quite beneficial to us.

Thanks (0)
Replying to Tornado:
By kjevans
15th Jun 2023 14:31

Maybe, but as a developer mate of mine put it, what these LLMs do is not what we think it is. It is not, "Tell me the answer to this question", it is, "Show me what the answer to this question might potentially look like". So just as biased as a human, and capable of what a human would call "lying" and, as we have seen, plausible rather than accurate. Yep, it's the end of the world as we know it.

Thanks (1)
Replying to kjevans:
By johnjenkins
15th Jun 2023 14:47

So by your analogy the answers are irrelevant. Question: "Does Putin have the potential to start a nuclear war?" Answer: "Yes." Now that would be the end of the world as we know it, Jim.

Thanks (0)
By johnjenkins
15th Jun 2023 12:20

AI is the wrong title. It is neither artificial (it's real) nor intelligence (it can't think).
So how about MCM, mega calculating machine? This new title would bring what it is into line with how we should use it. I'm sure you can program it to paint exactly like Da Vinci, but it won't have that little bit of magic.

Thanks (0)
By moneymanager
15th Jun 2023 15:09

"On top of this, most jobs are still relatively secure as the current generation of Large Language Models (LLMs) are still fairly inaccurate and, while they are good at providing very plausible answers, these cannot yet be relied on for accuracy."

I know a university professor in this arena; he jets around stitching together teams of all manner of disciplines, including "behavioural engineers". I'm not sure whose behaviour would be engineered, but it sounds dystopian. In a recent conversation he said that many had raised the same concerns, or even just issues, that I did, and concluded by assuring me that "they are just tools". I asked, "Whose tools?"

Thanks (1)
By JD
15th Jun 2023 15:30

We are safe for the moment as parents. I tried it out on a simultaneous equation question for my daughter revising GCSE maths. It could not come up with the correct answer no matter how one asked or varied the question.

Of course this leads to my basic view of all these things: technology is good when it is used to support the professional rather than trying to replace them.
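Worth noting that this is exactly the kind of task conventional software solves deterministically. A minimal sketch using Cramer's rule (the equation pair below is invented for illustration):

```python
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1  # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("no unique solution")
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

# Example GCSE-style pair: 2x + 3y = 12 and x - y = 1
x, y = solve_2x2(2, 3, 12, 1, -1, 1)
print(x, y)  # 3 2
```

Unlike an LLM's plausible-but-unreliable output, the same input gives the same correct answer every time, which is the "support rather than replace" point in practice.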

Thanks (1)
Replying to JD:
By johnjenkins
16th Jun 2023 10:10

"Support rather than replace". That is the key. Let's hope the over zealous don't forget that.

Thanks (0)