
Tech predictions for 2024: All in on AI

By Bill Mew

Each year Bill Mew offers a few thoughts on the year ahead and predicts where he thinks technology will have an impact. He also seeks to expose over-hyped trends that will have far less impact than many expect.

2nd Jan 2024

It would appear to many that the future of technology can be summarised in two letters: AI.

Unfortunately, artificial intelligence (the AI in question) is seen in almost equal measure either as a panacea that will transform everything and make us all far more productive, or as a cataclysmic threat that will take all our jobs and then possibly kill us all. The gap between these interpretations is obviously wide, so let's look more closely at how, who, where and why AI will have an impact over the next year.

How?

We are going to see a wave of innovation over the next year. On the hardware side, ARM-based M1 and M2 chips recently enabled Apple computers to leapfrog Windows laptops in processing power and battery efficiency. In 2024, we will see a new wave of ARM-based chips (from AMD, Nvidia and Qualcomm) that will match Apple’s M3 and redress the balance between Mac and Windows. The latest generation of chips will include not only advanced CPUs and GPUs (central and graphics processing units) but also embedded NPUs (neural processing units). This means they will be ready to harness a new generation of applications on the software side, all of which will have AI integrated in some form.

Who?

The whole OpenAI soap opera that played out across the global news channels shocked and confused many of us. It saw Sam Altman, the charismatic CEO of the runaway leader in AI, inexplicably sacked by his board. At the height of this furore, as most pundits were predicting that Sam and as many as 500 OpenAI staff would be moving to Microsoft, I appeared on the BBC saying that “the Microsoft move is not a done deal” and “don’t be surprised if you see Sam back at OpenAI.”

The problem was that the board had an altruistic remit to develop AI for the good of humanity, while the CEO and investors were more focused on profit. The clash between altruism and commercialism was hardly a fair contest. As I had predicted, the board was replaced, Sam returned as CEO and Microsoft (the largest investor) now has a presence on the board to prevent it from ever happening again.

Nevertheless, the saga shook confidence in OpenAI, making it clear that AI is not a one-horse race and opening the field both to new rivals like Anthropic and to other tech giants like Salesforce with its Einstein AI.

Where?

While many of us will soon be using these big AI platforms, they are not our only option. You can run your own Large Language Models (LLMs) on a far smaller scale and still apply AI. This can be done in your own data centre or even on a single NPU-enabled laptop. Indeed, for particularly sensitive workloads, private AI will be essential. Large data sets that include private, confidential or secret information will need to be processed on your own hardware or on a sovereign cloud rather than on a public cloud (where Edward Snowden revealed that mass surveillance by the NSA is rife).

If Genomics England, the NHS or even HMRC wish to gain insights from the treasure trove of data they hold, then they’ll need to do so without compromising our private information. Unfortunately, despite the Prime Minister’s enthusiasm and the AI summit that he held at Bletchley Park, there are financial and cultural constraints, complex procurement processes, and a long list of stakeholders that are all hampering ambitions to exploit AI in the public sector. The reality is that the UK public sector is hardly ready to exploit AI, but 2024 will see far more rapid adoption in the private sector.

Why?

There will be two broad areas in which AI will be applied: general AI, which will focus on productivity, and transformational AI, which will focus on creativity.

Many mundane tasks will either be automated (in areas like data entry and scheduling) or augmented (in areas like budgeting, forecasting and tax planning). In many of these areas, jobs will be lost in significant numbers. This will change the profile of the accountancy profession, as there will be fewer of the junior roles in which accountants have traditionally gained experience.

We will also see jobs that pay wages and taxes replaced by computers that pay neither. Efforts to crack down on multinationals that shift profits overseas and avoid tax will need to be improved if the tax base is to be protected to support the cohort of people that will need to be either retrained or compensated as traditional job roles are lost.

These productivity gains will not provide any competitive advantage, though, as all competing firms will adopt the same AI; only those that fail to do so will be left behind. Instead, competitive advantage will come from transformational AI, where rather than improving the way we conduct traditional tasks and processes, AI will enable us to rethink and reform those processes entirely, opening up new forms of value creation.

In the last wave of technology, some organisations used the move to the cloud to refactor applications, rethink processes and enable greater integration and interoperability. They gained a competitive advantage over risk-averse rivals that remained on legacy systems or simply lifted and shifted workloads to the cloud without attempting any real digital transformation.

In the next wave, even greater investment and risk-taking will be required, for even greater potential long-term gain.

What else?

AI won’t be the only game in town, but it will become pervasive, just as the internet did before it. Just as almost all devices now have internet connectivity, future devices will all incorporate some form of intelligence, enabled by the NPUs that will become standard in all new chips.

The cryptocurrency bubble may have burst, with Sam Bankman-Fried heading to jail and Changpeng “CZ” Zhao, founder and CEO of crypto exchange Binance, pleading guilty to a money-laundering charge and stepping down. The metaverse balloon also looks to be deflating, with VR headsets only gaining traction in gaming and Meta cutting 11,000 jobs as it ditches the metaverse and pivots to AI. Other trends will come and go, but AI is here to stay and its impact will be widespread.

Is it out of control?

As for the AI apocalypse where the tools become sentient and a threat to humanity, I don’t believe we have much to fear from the machines themselves. They may well become super-intelligent, but we are a long way from seeing any form of sentience. The risk of more powerful disinformation campaigns, deadlier biological weapons, and more effective planning for social control will not come from the machines themselves, but from malicious humans that gain control of the machines. 

That said, we are seeing a move to a technopolar world, where governments have less control. It is arguable that the tech giants are already beyond the control of governments. Regulators have proven ineffective at enforcing GDPR on Big Tech, just as tax authorities have been ineffective at making them pay their fair share of taxation. Cybercriminals are also operating beyond the reach of the law, and soon we will also see the emergence of massive AI systems that are almost impossible to understand and very hard to regulate. 

In early 2021, long before ChatGPT hit the headlines, Sam Altman, the ousted and now reinstated CEO of OpenAI, self-published a manifesto of sorts, titled “Moore’s Law for Everything.” His new version of Moore’s Law saw the amount of intelligence in the universe doubling every 18 months. He has been proven wrong: AI models are advancing far more rapidly than that.

Replies (7)


By Justin Bryant
02nd Jan 2024 12:08

One thing I've learned is that AI does not tend to make spelling or grammar mistakes, so at least I know the above article was written by a human (unless a deliberate typo or two has been thrown in for that reason).

Thanks (1)
By FactChecker
02nd Jan 2024 14:18

"Large data sets that include private, confidential or secret information will need to be processed on your own hardware or on a sovereign cloud rather than on a public cloud (where Edward Snowden revealed that mass surveillance by the NSA is rife)."

Or in other words, if you're concerned about threats (from anyone who doesn't regard your best interests as paramount - whether competitor, personal or state enemy, or mere criminals) ... then focus not on AI but on what you're *already* doing.
Relying on 'the Cloud' for all your data storage and processing is akin to leaving your doors and files completely unlocked - the only difference being that, most of the time, you won't even know when you've been 'burgled' in the virtual world!

Thanks (3)
By FactChecker
02nd Jan 2024 14:29

Bill, your concluding section is chilling (and I believe accurate):

"The risk of more powerful disinformation campaigns, deadlier biological weapons, and more effective planning for social control will not come from the machines themselves, but from malicious humans that gain control of the machines.
That said, we are seeing a move to a technopolar world, where governments have less control.
Regulators .. (and) .. tax authorities have been ineffective .. (and) .. Cybercriminals are also operating beyond the reach of the law."

Do you have any thoughts that might hold out hope against "the emergence of massive AI systems that are almost impossible to understand and very hard to regulate"?

Given that "the tech giants are already beyond the control of governments", we appear to be facing not the perils of AI but the dystopia of trans-national dictatorships where control of our lives (indeed our right, or not, to live) resides in the hands of the unelected and faceless bods behind AI.
Or, to put it more bluntly, AI is merely the 'enabler' (aka weapon of choice) for those determined on personal control of others - akin to the use of nuclear weapons as a deterrent on a wider scale.

Thanks (2)
Replying to FactChecker:
By JustAnotherUser
02nd Jan 2024 15:34

Even more chilling with the number of elections going on globally in 2024..

https://www.reddit.com/r/MapPorn/comments/18v6ch6/2024_a_worldwide_elect... all being shaped and nudging billions of people in the direction needed by a few

Thanks (1)
Replying to FactChecker:
By Rob Swan
04th Jan 2024 11:16

'Western' governments have little control over AI, but 'other' governments and criminals seem to have plenty - just a thought.

Thanks (1)
By Rob Swan
04th Jan 2024 11:30

Thanks to Bill Mew, great article.
Two points in reply:
1. Bad actors are, I think, one of the biggest risks posed by AI, as Bill points out. So, in reality, that's a 'human' threat.
2. AI has been a bit of a 'new shiny thing' in '23 and I wouldn't be too surprised if, at some point, it either fails to deliver (against the over-hype) or something goes badly awry and AI suddenly becomes a huge 'hot potato'. (e.g. failing to filter out 100s of fraudulent invoices/payments, or something of that sort. Bad actors again!) Relying on AI for tax, as an example, seems way too high risk: the AI doesn't carry the can if it gets things wrong; you do. And tax is ever changing and increasingly complicated.
Just thoughts - from a flawed human ;)

Thanks (0)
By johnjenkins
09th Jan 2024 16:46

The problem will be too much reliance on AI when there is no alternative.
How many of us actually check each item as it goes through a supermarket checkout so that we know we have been charged the right price?
OK, if our bill is £100 more than we thought, we will pick it up, but if it's only a few quid?
I bet there are loads of cases where the barcode comes out at a different price to what it should.
Now translate that into AI software that has no alternative and put it amongst Joe Public with no computer training. 78% of cryptocurrency is a scam. Just think how AI could be manipulated.

Thanks (1)