HMRC, DWP and PO struggle with complexity and facts
Recent mistakes by HMRC, DWP and the Post Office have had significant consequences. Bill Mew investigates the reliability of systems of record – and how it could get worse.
A series of recent mistakes by major government departments has had significant consequences. HMRC has been pursuing taxpayers for penalty notices that they never received, DWP has a problem with duplicate records, and dozens of postmasters have even been jailed on the basis of faulty evidence.
Accountancy is based on the assumption that there is a single authoritative reference point from which a single version of the truth can easily be derived. On top of this, there should be a log of entries and changes to provide a level of accountability – a system of record.
While the term ‘system of record’ has been confused or abused to mean any legacy system, it actually refers to a system where original data is entered or recorded, and that provides an authoritative reference point for other systems.
Historically, written ledgers were systems of record. Once entries were made, they then became a matter of record. Restrictions were made as to who could make entries into such ledgers and those doing so would be held to account for mistaken data entries or calculations.
Systems of record should also not be confused with systems of engagement. These are the systems that interpret and represent the data as insights and analysis in dashboards, sales support systems, or management information systems.
On this basis, while ledgers and accounting records would be systems of record, the tax systems that use such data to derive tax records based on an interpretation of tax policy would be systems of engagement.
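The distinction can be sketched in a few lines of Python. This is a simplified illustration only – not any department's actual architecture – showing a ledger that accepts new entries but never edits, and a reporting function that merely derives figures from it:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class LedgerEntry:
    """An immutable entry in the system of record."""
    account: str
    amount_pence: int   # money as integer pence, avoiding float rounding
    entered_by: str
    entered_at: datetime

class Ledger:
    """System of record: entries can be added, never edited or deleted."""
    def __init__(self):
        self._entries: list[LedgerEntry] = []

    def post(self, account: str, amount_pence: int, entered_by: str) -> None:
        self._entries.append(LedgerEntry(
            account, amount_pence, entered_by,
            datetime.now(timezone.utc)))

    @property
    def entries(self) -> tuple[LedgerEntry, ...]:
        return tuple(self._entries)  # read-only view for other systems

def account_balance(ledger: Ledger, account: str) -> int:
    """System of engagement: derives a figure but holds no data of its own."""
    return sum(e.amount_pence for e in ledger.entries if e.account == account)

ledger = Ledger()
ledger.post("A-001", 50_000, entered_by="clerk_01")
ledger.post("A-001", -12_500, entered_by="clerk_02")
print(account_balance(ledger, "A-001"))  # 37500
```

The point of the separation is that the balance can always be recomputed from the record; if the two ever disagree, the ledger wins.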
Complexity and reality
All of this is great in theory, but in practice, things are never quite as simple. Some significant recent incidents with key government systems have shown how problems with complexity can have terrible real-world consequences:
1) Processing errors and HMRC’s misdirected penalty notices
Increasingly we rely on automated systems that are anything but infallible. Programming errors that result in miscalculations are individually rare, but at the scale these systems operate, mistakes still occur regularly. DWP’s universal benefits system is particularly renowned for such errors.
Errors can also occur when physical tasks are automated, such as with print runs that produce unintelligible documents or where automated envelope stuffing machines insert several letters into a single envelope.
HMRC has come under fire for numerous data breaches via penalty notices issued over the years. These errors call into question HMRC’s assumption that once a penalty notice has been ‘issued’, it must have been received by the intended recipient.
Evidence of such errors has undermined HMRC’s standard assertion in tribunals that the fact that their system posted the notice, and it was not returned, means that it must be presumed to have been validly served on the taxpayer.
2) Logs, accountability and the Post Office Horizon system
Access to systems of record needs to be restricted. Accurate logs need to be kept of all entries and changes, along with who made them – providing confidence in the accuracy of record-keeping as well as accountability.
In a startling miscarriage of justice, 39 Post Office convictions were recently quashed after evidence provided by Fujitsu about the infallibility of its Horizon IT platform was called into question.
A number of Post Office staff had been convicted and sent to jail on the basis of logs provided by the Fujitsu Horizon system as evidence of wrongdoing. The system was meant to be infallible and all entries were meant to be logged and traceable until it was found that the logs could be bypassed.
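One common safeguard against exactly this failure – illustrative only, and not a description of how Horizon actually worked – is a hash-chained audit log, where each entry commits to the one before it, so that any bypassed, amended or deleted entry breaks every later link:

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash of this record combined with the previous entry's hash."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, record: dict) -> None:
    """Append a record, chained to the hash of the entry before it."""
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"record": record, "hash": chain_hash(prev, record)})

def verify(log: list) -> bool:
    """Recompute every link; any edit to an earlier entry breaks the chain."""
    prev = "genesis"
    for entry in log:
        if entry["hash"] != chain_hash(prev, entry["record"]):
            return False
        prev = entry["hash"]
    return True

log: list = []
append(log, {"user": "counter_03", "action": "sale", "amount": 1250})
append(log, {"user": "counter_03", "action": "refund", "amount": -1250})
assert verify(log)

# Tampering with an earlier record is now detectable:
log[0]["record"]["amount"] = 9999
assert not verify(log)
```

A log that can be written to, or bypassed, without updating the chain offers no such guarantee – which is why evidence from such a system cannot simply be presumed reliable.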
3) Multiple versions of the truth and DWP’s data warehouse issues
Managing data warehouses and establishing a single version of the truth is a common issue with many large organisations and government departments. Distributed computer systems with multiple data entry points often have multiple data pools, leading to duplication or proliferation of records.
This problem can occur where, for example, multiple government departments use your national insurance number as a unique identifier, but don’t have integrated systems. If you update your address on one system then it is not automatically updated on the others and the government then has you registered at two different addresses and doesn’t know which record is correct.
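The reconciliation problem can be seen in a toy example – the records below are entirely hypothetical, and it is assumed only that the national insurance number is the shared key. The same person appears in two un-integrated systems with conflicting addresses, and neither system can say which is current:

```python
# Hypothetical, un-integrated departmental records keyed by NI number.
dept_a = {"QQ123456C": {"name": "A. Taxpayer", "address": "1 Old Road"}}
dept_b = {"QQ123456C": {"name": "A. Taxpayer", "address": "9 New Street"}}

def find_conflicts(*systems: dict) -> dict:
    """Return NI numbers whose address fields disagree across systems."""
    conflicts = {}
    all_keys = set().union(*(s.keys() for s in systems))
    for ni in sorted(all_keys):
        records = [s[ni] for s in systems if ni in s]
        if len({r["address"] for r in records}) > 1:
            conflicts[ni] = [r["address"] for r in records]
    return conflicts

print(find_conflicts(dept_a, dept_b))
# {'QQ123456C': ['1 Old Road', '9 New Street']}
```

Detecting the conflict is the easy part; deciding which record is the truth requires information that neither system holds, which is why un-integrated databases drift apart.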
It also happens within departments, where database integration has been inadequate. Aside from its well-documented difficulties with the universal benefits system, the Department for Work and Pensions has been trying for years to move off Oracle Enterprise Data Warehouse in pursuit of a single version of the truth.
We have a big problem
All accountancy relies on a valid set of books from which accounts are derived. Three conditions are required:
They need to be valid/truthful,
any changes need to be logged accurately and the identity of the person making changes needs to be known for accountability, and
there needs to be a single version of the truth.
The recent examples above show that there are massive problems in all these areas – and that they are not only widespread, but can also have serious consequences.
And it's only going to get worse
Artificial Intelligence (AI) has been heralded by some as a means of automating error-checking as well as spotting anomalies that could uncover potential fraud. In reality, AI is a double-edged sword. Automated systems can be used to spot and correct human error, but as it starts to be used to make changes to systems of record, how can AI be held accountable?
How do you know if rather than correcting human error, an AI system is making mistakes of its own? And when it makes mistakes, who is accountable? The vendor that developed the system, the one that implemented it and set it up, or the team that has maintained it ever since?
How do you know if the mistake was caused by an isolated but unfortunate error, or a fault that could reoccur, or even worse by malicious actions by either your own staff or by outsiders?
Hackers are already seeking to gain access to AI systems in order to game or manipulate them. The big problem with such automated systems is knowing if or when they’ve been hacked in this way.
Realising that your AI system has been hacked might not be enough. If there have been numerous automated changes across numerous interrelated records then unpicking it all and establishing an accurate audit trail could be impossible – especially if hackers cover their tracks by deleting or amending logs.
Systems of record are absolutely essential and AI may well be an ‘intelligent’ enhancement, but neither of them is infallible. 100% reliability for automation is a bit like 100% protection for cybersecurity – both are myths. You have been warned.
Founder and CEO of CrisisTeam.co.uk (SiliconANGLE global Startup of the Week – May 2019), an elite team of experts in incident response, cyber law, reputation management and social influence that help clients minimize the impact of cyber incidents. Previous cloud strategist at UKCloud (the...