What happens to KYC when AI can spoof biometrics?

Will a new generation of generative AI tools make the security of biometrics obsolete and sweep away existing ID verification technology? Or are there ways to ensure the person you’re talking to is the real deal?

27th Jun 2023

From criminal fingerprints in the 1880s to the launch of Face ID on iPhones in 2017, the use and acceptance of biometrics for authentication has become standard practice. What was once a feature of action films and Bond adventures has worked its way into real-world applications for the everyday user.

Biometrics are also used in Know Your Customer (KYC) checks via Electronic Identification Verification (EIV) tools. Accounting firms are required to meet KYC requirements as part of their anti-money laundering (AML) obligations, with a KYC check needing to be carried out for any new client or if business relationships change. This obviously adds to the workload of busy firms, and for convenience, cost and security, biometrics and EIV represent an attractive way to facilitate this process.

But there’s a new prop that a bad actor can use to spoof biometrics: generative AI. The internet is filling up with deepfakes and spoofs of celebrities, mostly used, for now, in a humorous context. Yet the same technology represents a growing threat to accounting firms’ ability to verify their customers accurately and confidently.

As biometrics continue to rise as a verification technology, what if AI becomes powerful enough to render biometrics nearly useless? And what steps can firms take to address these threats with their KYC processes?

The biometric takeover?

Generative AI still has some limitations, for now. Fingerprints and retina scans remain hard to fake: the samples are highly individual, and there are too few datasets for models to learn from. But, as the internet is showing, AI can already deliver synthetic voices, faces and even proof of liveness.

As the EIV market rides a significant growth trajectory, there’s a wealth of data on both highly sensitive and everyday biometric authentication. It is increasingly feasible that bad actors could steal this data and use generative AI to create ‘synthetic biometrics’, opening the door to adopting fake identities and forging fake documentation.

The pass or fail threshold in a biometric check boils down to how closely the presented sample matches the enrolled original. It is a similarity score, not an exact comparison, so there is ample room for error. Voice ID, for example, has become a routine tool used by banks, and even HMRC uses voice recognition technology. But as the music world is currently demonstrating, with the likes of The Beatles and Drake, AI can replicate voices increasingly accurately. In fact, just this year a tech journalist used a synthetic voice to break into a bank account.
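The mechanics here matter: because matching is a score against a tunable threshold rather than an exact test, a synthetic sample only has to be ‘close enough’. A minimal sketch in Python illustrates the idea (the embedding vectors, function names and the 0.8 threshold are illustrative assumptions, not any vendor’s actual system):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled, presented, threshold=0.8):
    """Accept if the presented sample is 'close enough' to the enrolled one.

    The threshold trades false accepts against false rejects: lower it and
    a good-enough synthetic voice or face slips through; raise it and
    legitimate users start getting locked out.
    """
    return cosine_similarity(enrolled, presented) >= threshold

# A synthetic sample need not match exactly, only clear the threshold.
enrolled = [0.9, 0.1, 0.4]
synthetic = [0.85, 0.15, 0.45]
print(verify(enrolled, synthetic))  # True: close enough passes
```

This is why ‘room for error’ is structural: any threshold loose enough to tolerate a scratchy phone line or poor lighting is also loose enough to tolerate a sufficiently good fake.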

If this is already the case now, who knows what tools bad actors could be using in a few years’ time? While legislation and regulatory bodies try to keep up, the onus has been placed on firms and human minds to mitigate the generative AI threat.

How could synthetic biometrics outfox KYC?

Placing EIV checks in this framing, how do current KYC processes in accounting firms open themselves up to synthetic biometrics? Firms are often required to collect relevant contact information and photo ID documents, using face-matching technology to confirm that a person matches their ID. For generative AI, the tolerance built into that comparison leaves plenty of opportunity to pass the test.

With enhanced or more complex cases and entities, accountants are required to collect not only all of this standard identity documentation but also information on a client’s sources of funds and wealth, such as bank statements, bank accounts or company records. For generative AI, producing a fake bank statement is no bother at all. Combined with a deepfake face or voice, that means bad actors could effortlessly outfox standard EIV.

Does this generative AI future make the security of biometrics obsolete? Or are there ways to ensure the person you’re talking to is the real deal?

Taking on a zero-trust approach

Even if we don’t reach a fully fake world in the foreseeable future, where no one can trust anything, it’s likely we will live in a hybrid world where lesser forms of deepfakes are used by bad actors to bypass the EIV process. For the accounting sector, that means professionals have to remain on high alert and not take initial identity documents at face value.

Firms could adopt processes that mean clients have to respond with a video call and show an item, such as a company letterhead, or write down a phrase on a piece of paper, to prove their 'liveness'. This could form part of a three-factor authentication process that includes liveness, voice and face recognition. For bad actors, one is easy to spoof. Two? It becomes harder. And three? Currently, virtually impossible.
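The logic of that three-factor approach is simply that every independent check must pass, so spoofing one channel is never enough. A hedged sketch, where the factor names and the individual verifiers’ boolean results are hypothetical placeholders rather than a real EIV vendor’s API:

```python
def multi_factor_verify(checks):
    """Require every independent factor to pass; spoofing one is not enough.

    `checks` maps factor names to booleans returned by separate verifiers
    (hypothetical results for illustration, not a real EIV product's API).
    """
    required = {"liveness", "voice", "face"}
    missing = required - checks.keys()
    if missing:
        raise ValueError(f"missing factors: {sorted(missing)}")
    return all(checks[factor] for factor in required)

# A deepfaked face alone fails, because the other factors don't line up.
print(multi_factor_verify({"liveness": False, "voice": False, "face": True}))  # False
print(multi_factor_verify({"liveness": True, "voice": True, "face": True}))    # True
```

The design point is the AND, not the individual checks: each extra independent factor multiplies the attacker’s workload rather than adding to it.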

While businesses can adopt more stringent and robust measures, compliance also requires a wider public and cultural shift. Just as biometrics have become an accepted form of security, people need to become increasingly aware that what they see and hear may not be real. This can be conveyed to clients through guidance and advice: clients should never respond to an anonymous request for their identity, for example, and should check that emails come from verified sources.

It’s all about building a zero-trust environment that becomes common ground for both companies and their clientele.

People are the biggest form of defence

As the AI race marches on, biometrics and EIV as we know them could come under new forms of (fake) attack by bad actors. For accountants, every transaction, new customer and business deal has to be approached with a zero-trust, re-identify mindset. KYC and Customer Due Diligence processes have to evolve towards real-time operation, with verifications valid only at specific points in time.

Technology is brilliant at eradicating mundane and time-consuming tasks, such as document collection, report collation and workflow optimisation. But it has to come back to using human intelligence and communication to perform identity checks and ensure compliance is carried out robustly. People are becoming harder to verify. Yet it is actually people themselves who offer the biggest form of defence to ‘out spoof’ the bad actors faking it with generative AI.

Replies (4)


By Justin Bryant
27th Jun 2023 16:05

Since over 99% of KYC/AML (AKA pointless box ticking) is already pretty ineffective and pointless (apart from helping to avoid AML penalties for non-compliance etc.), not much is the short, simple answer here.

Thanks (2)
Replying to Justin Bryant:
By Hugo Fair
27th Jun 2023 21:28

Not much indeed ... or absolutely nothing.

You hear about all the fines (and worse) for apparent non-compliance with one or more of the procedures ... but I can't recall a case of penalties where the procedures were compliant but failed to identify an impersonator.
Given the volume of Identity Theft, the banks could be out of business otherwise?

Thanks (1)
By moneymanager
29th Jun 2023 09:44

"Will a new generation of generative AI tools make the security of biometrics obsolete and sweep away existing ID verification technology?"

" Or are there ways to ensure the person you’re talking to is the real deal?"

When I met my now wife she was bemused at things like the anti-money laundering regulations I had to go through, and told me of her personal experience: she had gone to her bank to open a second account and was asked to provide the now-essential ID. "But you've known me for over twenty years".

The most useful form of technology is the Mark I eyeball, we seem to be behaving like a dog chasing its own tail.

Thanks (0)
By Duggimon
29th Jun 2023 10:11

I'm not sure how generative AI can infiltrate biometrics to the point that the person standing in my office holding their passport might not be real.

Thankfully, having never adopted biometrics, its approaching obsolescence doesn't pose a threat.

Thanks (0)