THE NEXT FIVE - EPISODE 40
The Trust Gap: Fraud vs Reality in the Age of AI
With machines now able to mimic your identity and warp reality, how can we rebuild trust in our digital systems?
The Next Five is the FT's partner-supported podcast, exploring the future of industries through expert insights and thought-provoking discussions with host Tom Parker. Each episode brings together leading voices to analyse the trends, innovations, challenges and opportunities shaping the next five years in business, geopolitics, technology, health and lifestyle.
Featured in this episode:
Tom Parker
Executive Producer & Presenter
Hubert Behaghel
CPTO, Veriff
Gareth Murray
Financial Crime Senior Director, Monzo
Simon Miller
Director of Policy, CIFAS
According to the World Economic Forum's Global Cybersecurity Outlook 2026 report, we've hit a historic tipping point. For the first time, CEOs now rank cyber-enabled fraud as their number one digital concern, officially overtaking ransomware.
The threat is no longer about a hacker locking your files, it's about a machine mimicking your identity. But why does this matter? Well, because when trust collapses, the economy stops. If you can't verify who you're paying, you stop paying. If you can't verify who you're hiring, you stop growing. We are no longer just defending data. We are defending the very possibility of digital trust. Today we're discussing the rise of digital ghosts, the death of the voice call and the defensive technologies emerging to save our digital identities, economies and society itself. Joining Tom Parker to examine the future of fraud in our digital world are three experts: Hubert Behaghel, CPTO at Veriff, an identity verification software company; Gareth Murray, Financial Crime Senior Director at Monzo; and Simon Miller, Director of Policy at CIFAS.
Sources: FT Resources, WEF, KBV Research, Mordor Intelligence, Juniper Research.
This content is paid for by Veriff and is produced in partnership with the Financial Times' Commercial Department. The views and claims expressed are those of the guests alone and have not been independently verified by The Financial Times.
Transcript
The Trust Gap: Fraud vs Reality in the Age of AI
[Music Up]
SOUNDBITES
We lose 219 billion pounds a year in fraud. 14 billion pounds a year lost to scams alone.
It's a constant arms race between the bank and the fraudster.
When one data leak happens on the Internet, how many times is it sold on the dark web?
I think that's the headline, isn't it? That AI has made scamming and scammers emotionally intelligent and able to respond to things with extraordinary speed.
[MUSIC FADE OUT]
Tom Parker (00:03.425)
It's 2026. You get a call from your daughter. She's panicked. She's at the airport. Her passport is missing and she needs $500 for a temporary travel visa right now. You see her face on the screen. You hear the shakiness in her voice. You even see the flickering fluorescent lights of the terminal behind her. In 2023, you would have sent the money. In 2026, you hesitate.
Because you know that for $50 on the dark web, a deepfake as a service bot can scrape her social media, clone her voice, and simulate the airport background in real time.
Now imagine a finance worker in Hong Kong was invited to a video conference with his CFO and several colleagues. He saw their faces. He heard their voices. They told him they needed an urgent confidential transaction of $25 million. So he did it. What he didn't know, however,
was that every single person on that screen, the CFO, the legal team, the interns, were all digital ghosts, deepfakes. And the money, well, that's gone.
{PAUSE} This is the trust gap. It's the moment where the cost of being wrong is so high that we stop believing our own eyes. According to the World Economic Forum's Global Cybersecurity Outlook 2026 report, we've hit an historic tipping point. For the first time, CEOs now rank cyber-enabled fraud as their number one digital concern, officially overtaking ransomware. The threat is no longer about a hacker locking your files, it's about a machine mimicking your identity. But why does this matter? Well, because when trust collapses, the economy stops. If you can't verify who you're paying, you stop paying.
If you can't verify who you're hiring, you stop growing. We are no longer just defending data. We are defending the very possibility of digital trust.
Well, on that somber intro, welcome to the next five podcast. I'm Tom Parker. And today we're discussing the rise of digital ghosts, the death of the voice call and the defensive technologies emerging to save our digital identities, economies and society itself.
Tom Parker (02:28.885)
Joining me to examine the future of fraud in our digital world are three experts, or could I even call them defenders of trust. First, we have Hubert Behaghel, CPTO at Veriff, an identity verification software company.
Hubert (02:45.926)
Thank you very much for having me and I'm really excited to see how we're going to solve that together.
Tom Parker (02:51.669)
Pleasure. Well, next we have Gareth Murray, Financial Crime Senior Director at Monzo.
Gareth Murray (03:00.76)
Thank you, Tom. Great to be here.
Tom Parker (03:02.729)
And finally, Simon Miller, Director of Policy at CIFAS.
Simon Miller (03:07.67)
Hi Tom, delighted to be here.
Tom Parker (03:10.613)
Well, it's great to have you all here. Hubert, let's start with you and with the who. We've come a long way from stolen IDs and credit cards. With the access that criminals have to tech, I'm now hearing phrases like Frankenstein identities. These are synthetic personas with real credit scores and employment records. Can you give us some context as to where we are now when it comes to identity theft and how technologies like yours distinguish a
personless perpetrator from a real human when their digital footprint looks perfectly authentic.
Hubert (03:47.772)
Sure, thank you Tom. So the thing is, we all know that the Gen AI revolution we're all going through is creating new abilities to create fakes, deepfakes in particular. And it's true that we see the fraud community definitely embarking on harnessing this new technical power. I think we've seen an 85% increase year over year in the use of this type of vector to attack our customers.
It was 300% the year before. So I would say the fraud community is still adopting, but probably not much faster than the rest of society is adopting this kind of technology. Now, how do you distinguish the personless perpetrator from the real human being? On one side, of course, there is a very interesting AI fight happening, where you do this kind of pixel-level forensics and really try to understand what's true or false.
But the truth is the real battle, and the real way to fight, is actually elsewhere. It's in the ability to take a multi-layer approach to identifying what's happening with an interaction. Because as much as we see some successes from machines and AI in fooling the human eye, and more and more the computer vision too, once you have this kind of fake content you still need to be able to inject it into the flow, right? For instance, if you're onboarding on Monzo, it's not enough to just have the media and the ability to generate video live. And that's where two very strong areas of expertise we have at Veriff come in. One is network intelligence, you know, where does the signal come from, and device intelligence as well.
And then another thing, because there is a lot I could say here, is that the human-in-the-loop is still very important as well. Even though they may not always be involved live, what we tend to do is have operators surveilling the whole operational plane across all our customers, detecting patterns that may not be caught by the AI or by the human fallback we have sometimes, and still being able to create rules, or even infiltrate fraud rings, and anticipate the next move.
Hubert (06:08.08)
These are very important aspects that allow us to go much beyond what the technology can do in terms of faking a video.
Tom Parker (00:02.097)
Gareth, this puts banks in a tough spot. We've seen deepfake-related fraud attempts in the financial sector skyrocket by over 2,000% since 2022, and they now represent 6.5% of fraud in the sector. When a low-level scammer can rent an injection bot on a monthly subscription to bypass a camera during a check, how do financial institutions prove a human is actually there?
Gareth Murray (00:40.909)
That's a great question. I think for us, the key to this is the layering of defenses together. And the real key is the more protection layers you have, the more secure your regime is, even if one of those layers itself is weaker at any point in time. So if we think about the ID video, for example, that could present a vulnerability, but when set against the number of other controls and data points we monitor for that customer,
overall it creates a stronger regime that helps protect against those cases coming through. I think one of the key tasks across the industry is understanding the vulnerabilities within that sign-up journey: looking at previous cases that came through, knowing where those vulnerabilities exist and where you can plug the gaps. And it's a constant arms race between the bank and the fraudster. Where can you stay that one step ahead in terms of
defeating the threat before it becomes an issue on your platform.
There are a number of tried and tested methods that continue to add value across this journey. One example I would give is credit bureau data. It's something that's existed for a long time across the financial services sector, but it still offers a great sense of whether this person is real.
Gareth Murray (02:55.881)
If someone has an established credit bureau file with long relationships at other institutions, you can have greater confidence that person is real. You then combine that with connecting: is that person the person you're seeing in the sign-up flow? And across a number of those data points, how do we join those connections together? So I think it's really about exploiting new technology, tried and trusted methods, and as much data as possible to make a high-confidence decision,
and then bringing that all into the same journey to make sure you're given the confidence that you are dealing with who you think you're dealing with.
Tom Parker (03:29.295)
Well, Simon, if these digital ghosts do bypass the first door, they often move on to the next. CIFAS promotes a communal defense where institutions share anonymized fraud signals. But in a world where AI-driven threats are the most consequential force in our economy,
Tom Parker (03:56.805)
how do we speed up this sharing to stop a scammer who is moving at the speed of an algorithm?
Simon Miller (04:01.561)
Yeah, it's a great question. So you're absolutely right. At CIFAS, we've been sharing data between our 800 members for the past almost 40 years. And that data sharing is a critical part of the fraud prevention defenses of banks, of telecoms companies, of tech companies, and a whole host of others. And cumulatively, through the use of those services, those institutions prevent about 2.4 billion pounds a year in fraud losses. But I think there's a real danger that we can kid ourselves that we can stay
ahead of that movement. It's worth providing some context here. Fraud and scams are now everywhere. They've been supercharged by a whole host of technological developments and enhancements, and AI is only driving that faster. So if we're talking about the UK, according to the UK government and the recently published fraud strategy, 14 billion pounds a year is lost to scams alone.
By some estimates, we lose 219 billion pounds a year in fraud. These are huge numbers. But to go back to the heart of the question, the only viable defense is that sharing of data, but it's the sharing of data to keep fraudsters, be they virtual or real, off platform services in the first instance. So we know
that through institutions like the Global Signal Exchange and others, through the sharing of critical data and signals, particularly around suspicious content, we can keep our platforms and our services that much safer. And prevention in this space needs to be where our efforts are, rather than trying to block the fraudsters at that final moment.
Hubert Behaghel (05:53.646)
Yeah, actually, I would like to reinforce what Simon was saying, because it is true that we are in a war with a population, the fraudsters, who are highly innovative and always ready to jump on the latest technology. But what makes them really powerful is that they share. They share the MO, they share the data. When one data leak happens on the Internet, how many times is it sold on the dark web? So there is this compounding effect on the fraudster side, and we definitely need to tackle it on
the defender side, the good side, by also compounding the data. And it is true that this speed of sharing is going to become a more and more important aspect. At Veriff, we are also building this concept of cross-linking, which allows people in the same industry to share the fraud that Veriff sees on one side and preempt it immediately on the other side. Exactly like Simon said, it's not enough to be very good at detection alone.
We need to move onto the offensive, be proactive and anticipate the next fraud that's going to happen through this sharing.
Tom Parker (06:57.701)
Well, I mean, you've championed behavioral biometrics, identifying how a user interacts rather than what data they provide. So this is the MO versus the data in that sharing community. And there's clearly growth in this area, with the market projected to hit nearly $4 billion this year and $11 billion in the next five years. But Hubert, what specific non-human patterns is your AI looking for to catch a bot pretending to be a person?
Hubert Behaghel (07:26.018)
Okay, so first we're going to go quickly over the computer vision side. And here, as I hinted before, all this pixel-level forensics works because the media generated is not really perfect. That's already one layer of non-human signal. Then we are going to look at the integrity of the device. So for instance, we can cross-validate a lot of things. If you send me a video from that type of device, is this device known for creating this format, this kind of scaling, this kind of definition?
But then again, another thing I can do is, some devices have sensors, of course, right? And I can verify that the tilting of the device and the video movement are actually correlated. You can see how there is this ability. And there is really a compounding effect of all these signals. So you can start with a lot of weak signals and still create a very strong conviction that we are dealing with a fake.

And then there is this body of data I was describing before. Once you have all these signals individually and compound them into a fraud scoring, you can overlay it with what Veriff knows in terms of MO, we integrate with CIFAS as well, and then we have all these little signals and their signatures, and we can decide whether they are correlated with past activities that are known to be fraudulent.

And then ultimately, we have a long tail. And this is where the human element at Veriff is useful. I think you understand how machine learning works. We need a certain level of data for the machine to be good at catching situations and learning, right? But if you also have the right anomaly detection and human beings behind the screen, then you can overlay that too. And here it looks more like the Matrix. There is nothing human, there is just a lot of different signals, but then the ability to quickly investigate, take action and block this kind of situation.
Tom Parker (09:25.137)
I want to lean on the human part here because, Gareth, coming to you, your voice is now becoming a weapon. All of our voices are. If your voice is online, it can be weaponized. It used to take minutes to clone a voice. Now it takes three seconds, which is a scary statistic. Has the secure voice call officially died as a verification tool for banks? And if so, what replaces it for the customer who just wants to hear a human voice?
Gareth Murray (09:52.459)
Again, I would go back to the point that the layers of defenses we can provide still make a voice call a secure method of communication with the customer. And I think layering it in with how the app works can give the most protection.
So you may see that some banks, when you call their call center, ask you to log into your banking app on your device at the point you're speaking to them. We may do that as well. And that gives us confidence that you have access to a device we recognize. What we would also ask for in some situations is an ID from a customer, asking them to present a form of ID live in the flow as we see it. So I would say the voice call still has a place within financial services.

I think the interesting area, and the opposite side of this, is criminals using more AI now to scam customers. One of the areas that's been seen a lot is impersonating the bank and speaking to customers to get them to move money. Something we see a lot across the industry is the safe account scam, where the bank rings you and says your bank account's been compromised, please move your money somewhere else. Again, what we've been able to do through the app and through that channel is a service we call Monzo Calling, and it tells you in the app whether Monzo is actually calling you. That's again connecting the device with the app live, so a customer can say, actually this isn't Monzo calling me because my app says you're not calling me right now. We have over a thousand cases a month where customers report fraud to us
where they've used that tool and they've said, a fraudster has called me pretending to be from Monzo. The app told me it wasn't Monzo. Can you help me? And since we launched that tool, which was the first in the industry, we've had over 18,000 of those cases where we've been able to prevent fraud happening because of that call status feature in our app.
Tom Parker (13:04.133)
Wow, I mean, that's a great stat for the defense team. But Simon, if I can, I still want to talk about the nefarious side as well, because with AI, scammers are gaining a sort of terrifying psychological edge. They're using AI to monitor social media for life triggers, like losing your job, and then they launch perfectly timed help bots. How can policy protect the most vulnerable from scammers that are now programmed to be, if I can say, emotionally intelligent?
Simon Miller (13:38.977)
I think that's the headline, isn't it? That AI has made scamming and scammers emotionally intelligent and able to respond to things with extraordinary speed. So, scams and frauds work because they are topical. They play on our insecurities. They play on our vulnerabilities. And historically, that's been at the meta level. And we see that playing out currently with the abuse of people seeking to flee conflict in the Middle East, whether it's UK
citizens trying to get back from Dubai or others. They are being defrauded and scammed on a routine basis. But this ability to comb social media for your own content and then play it back at you turns the meta absolutely personal. And the thought that whatever you put into the public domain can be turned against you and...
become a point of vulnerability that is then abused for the financial gain of others is, as you say, a genuinely terrifying thing. But how do you legislate for that? We know that half of these technologies aren't commonly available. They are used by criminals and fraudsters. They are bought as fraud-as-a-service from the dark web and other sources. So the question is, how do we equip
ourselves as individuals, consumers and citizens with the tools that we need to appropriately address content that we weren't expecting?
And that goes to education. We need much, much greater education. And not education that tells you to be careful of frauds. We know that has limited impact, if any impact, but we need to be taught how to become content literate, how to become content skeptical, so that when things do arrive, we're equipped with a toolkit that enables us not to identify the deepfake, because that's pretty hard going and beyond the ability of most of us, but be able to...
Simon Miller (15:23.037)
ask some core questions. Is this actually for me? Was I expecting it? Should I approach this content with due caution? And I think if we can, from a very early age, instil these really quite basic lessons into our people, and get this type of
I guess, education into the core curriculum, we may be able to start making a difference here. And if you look at the data that we've recently published, it's really clear that it's the young and it's those who are financially challenged who are most often abused by this type of fraud or this type of scam. So equipping those people with the tools is absolutely critical. Otherwise, we end up in this terrible situation where it's not just sad that people are being...
defrauded and scammed. It's those who are most vulnerable who are at the sharpest end of it. And that cannot be right.
Tom Parker (16:14.363)
Yeah, absolutely. Well, Hubert, as we delegate more and more of our personal and working lives to AI agents, are we not heading towards machine-on-machine mayhem? If a fraudster's AI agent negotiates with a victim's AI assistant to authorize a payment, where does the liability lie? Who is responsible when it's a personless perpetrator and a personless victim?
Hubert Behaghel (16:40.546)
Yes, that's definitely a big question right now. And I would say it's not super difficult to answer, because the truth is liability lies with human beings, right? There is no doubt. We know that's the only way. And therefore, in the end, the solution is not to add more machines to the problem to solve the machine-on-machine mayhem, as you say. It's really about having the ability to bring the human back into the picture. And here,
I think a very important aspect that is going to be prevalent in most of our digital lives is escalation, right? It's like, okay, there is this machine-machine interaction, but now there is either a level of risk that's been identified, or the transaction itself is risky by nature. You want to bring back the principal, the human being, and you're going to do an authentication check, a step-up authentication. And here, interestingly enough, the biggest part of the
cat and mouse battle happening with the fraudsters these days is on this concept of the liveness check. The first thing you want to do is make sure that when you bring a human being into the loop, it actually is a real human being, right? And so for instance, what we have learned at Veriff is that this human challenge needs to be contextual. Sometimes, when we support the gig economy for instance, we're supposed to know where they are geographically. We're supposed to understand a bit the background that should be there in the picture. So this ability to refine based on the context is important, to be able to say, yes, this is a human being, and the context in which he or she is operating is actually relevant. That's a big axis for our research, and that's really how we fight these bots.
Tom Parker (19:05.297)
Simon, I want to come back to you. There's another term out there that I keep on hearing, which is the liar's dividend. It's where real criminals dismiss genuine evidence as AI-generated. How does our legal and social policy evolve to protect the truth when 60% of people think that they can spot a fake, but in reality, only around 0.1% of them can?
Simon Miller (19:33.333)
It's extraordinary, isn't it? It just reflects the fact that those who think they are most likely to be able to spot a scam are also those who are most likely to be scammed. What an extraordinary challenge that's been posed by AI, where the illegitimate is utterly indistinguishable from the legitimate and in fact can be cast as the legitimate. So we need a legal framework that evolves and we need a legal framework with pace. There is, I think,
good reason to believe that the Online Safety Act in the UK enables regulators to move with some level of pace to start addressing this. And other examples globally as well. So South Korea is bringing forward
a really quite sophisticated legal framework around AI content, including watermarking. But the challenge is never just what this means in a legal context, or what this means for banks or other organizations; it's what it means for the individual, and what it means when we as consumers and citizens are confronted with this content. Which is why, as per my previous point, education is so absolutely critical. We need to get people to be more skeptical in relation to the content that they are
presented with, particularly when that content is unrequested. But it also points to the need for safe harbors, and also trusted custodians of verified intelligence and data. Organizations like CIFAS have a real role to play in this, as does Veriff, in that we can provide anchors of truth where everything else can be spoofed or denied.
And I think that's something that we really do need to be concerned about. It's not just a theoretical threat. So our data release looking at fraud reported by our members to us in the year 2025 showed a massive and steep rise in high-sophistication impersonation and deepfake-enabled fraud. So without those trusted anchors, without that greater skepticism,
Simon Miller (21:30.187)
We open ourselves up to real challenge. But we do know that policymakers are engaging with this debate, things are moving forward, there are some good examples of that, and there is a basis for the law to move forward quickly if it is required to do so.
Tom Parker (21:47.823)
Given all of this, Gareth, is the bigger threat to financial institutions specifically the actual financial loss to online payment fraud, which is expected to exceed $343 billion globally by 2027? Or is it the psychological trust gap that I started the show with, which might lead customers to retreat from digital banking altogether?
Gareth Murray (22:14.157)
So I think digital banking isn't going anywhere, but it's right that we react to these risks and take the appropriate action. I think within this, keeping the customer at the very center of thinking about how to solve the problem is really important. We commissioned some research that looked at the emotional impact of scams on customers, rather than just the financial impact.
83% of those customers we spoke to said the emotional impact was more significant for them than the financial. They suffered feelings of violation and embarrassment. Some of them didn't want to share it with their family because they felt so affected by it emotionally. So we're really thinking about how we can bring together teams from design, from user research, as well as fraud experts and machine learning and AI experts, to think about how we can protect those customers
from falling victim to fraud, and not just from suffering the financial impact and having that impact passed onto the bank, but also protecting them from the emotional impact, which is the more important thing there to stop that trust being eroded.
Where we think about this as well, in terms of building trust, is where can we put more of the control into the customer's hands? Where can we give them the tools, to Simon's point earlier, to be able to protect themselves? Fraudsters will inevitably exploit the person and try to find a way to scam the person rather than just the system. And so how do we protect the person and give them the right tools to do that?
There's a couple of things we did at Monzo where we added some features called added security, which are fully opt-in for customers. And they can choose to kind of give themselves more security in the app if they want to. Such as one called trusted locations, which a customer can select in the app.
Gareth Murray (24:11.425)
So that for them to be able to send a payment, they have to be physically located in one of a number of locations. That could be their home, their work, or a family location.
There's also another tool called trusted contacts. Sometimes customers want a second opinion on a payment before they send it, just to make sure, is this right? And to Simon's point, that questioning: is this something that's unusual? Is this something I should be concerned about? Actually taking that moment to pause. So I think for us, the trust gap can be bridged if you put the customer at the heart of it and protect them emotionally and financially. And we're looking at where we can have their back through those journeys.
But it's really about giving that sense of trust to the customer. Where can they help themselves? Where can they add that security in their daily lives? So it means that digital banking is accessible and safe for them, continuously.
Tom Parker (25:51.407)
Hubert, I want to look at how we rebuild trust in the system. So looking five years out, will traditional concepts of know your customer be replaced entirely by a continuous biometric-based digital identity?
Hubert Behaghel (26:15.34)
I think it's fair to look at it like this, though I have to say there are going to be so many different paths this can take, so I would stay open-minded. Where I think I'm with you is that biometrics has to play a big role. Partly because it's this one thing that's extremely low friction. Like we were hearing before, Gareth saying we want to meet our customers where they are most comfortable. And it's true, this concept of friction is omnipresent when you try to deal with trust and safety. With biometrics, you take a selfie and suddenly there is so much information that has gone through, and yet it didn't cost you too much. But it's also highly rewarding in terms of intelligence and insight. On digital identity, it is fair. Where we are going to see different paths is that governments are more and more going to give us a digital identity, and that's going to be useful.
But I also don't know exactly if that's the type of identity that's going to be sufficient with regards to asserting trust. Going back again to some of the things I said before, at Veriff we look at trust alongside three dimensions. Are you who you say you are, so pure identity? Can you be trusted, which is like this extra element of context that for this transaction, for instance, for a loan or for opening a bank account, you don't just need to know
that you are who you say you are. Maybe we want to understand source of income, maybe we want to understand residence. And that doesn't come through from a digital identity that would be issued by a government, at least as we understand it today. And then the last element is, are you a human being? And we covered a bit of that part before. And then you said continuous, and I think it's true. Here, we can see different scenarios playing out as well. On one side, it could be that most of the time you're actually going to go with a pre-verified KYC; you do it once a year, say. The other aspect is what I said before,
Hubert Behaghel (28:35.768)
you can't do anything that's predictable. So even though you have this verifiable credential and you have this yearly refresh, you need to introduce this serendipity. This ability to escalate and run these authentication challenges continuously is very important, because they are not just here to check, they are here to keep up with who you're becoming, you know? And so there is this continuous reinforcement.
This is really an important part of staying with strong identity and a level of trust that's really scalable.
Tom Parker (29:13.617)
Okay, so it might not be a biometric-based digital identity future, but Simon, we've talked about humans in the loop. For high-risk actions, is the answer actually a return to something low-tech, like a family code word or a physical hardware key that can't be spoofed by software?
Simon Miller (29:37.173)
It's a really interesting question and somewhat paradoxical, isn't it? Because at one and the same time, it's us as humans that makes us vulnerable to the scammers and the fraudsters, and yet it's our quintessential humanity, our inquisitiveness, that will keep us safe. So we have a...
a trusted word that we use, well I say we use, I don't think I've ever used it and I realise as I say this we've not discussed how we would deploy it so there's some real learning there for me. But we do have a family safe word and we know at CIFAS there are lots of people who do have family safe words and it does help keep people safe. I think one of the other really important human interactions or introductions of humans into the loop in transactional cases is to get a second opinion.
If you're confronted with something that you weren't expecting, or that you have any level of suspicion about, ask someone you trust for a second opinion. Because frauds and scams work on the basis of urgency, they work on the basis of pressure, and simply taking that step helps break that spell. But I think there is a fundamental point here: where it is high value, where something is significant, actually slowing things down, potentially adding degrees of friction,
may be the solution. That's not necessarily saying let's return to a low-tech future, or let's head towards a low-tech future, but saying we can deploy technology in different ways and introduce a level of human agency at that point, and that might help keep us safe.
At CIFAS we're currently in the process of bringing forward an app that, should anyone make a high-value purchase, or should credit ever be taken out in their name, will give them the opportunity to say, no, it's not me. That transaction will flash up as a push notification and you can say, it's simply not me making that, and close it down. And you see, that's a key means of
Simon Miller (31:23.737)
denying the fraudsters the means to steal your identity and take out credit or other applications in your name. So I think the future is going to be a mixture of high-tech and low-tech. Obviously, almost instantaneous transactions are here to stay. They benefit all of us all the time, but we've got to have faith in those systems, and we've got to have faith in the checks within them. So...
I mean, I have huge faith in my bank, but every time they tell me that the payment I'm about to make to my window cleaner is absolutely a scam, it rather undermines my faith in that system. It means I'm less likely to take those personal notifications seriously. And maybe a different approach focusing only on whether it's high value or relying on a degree of human engagement might be the way forward.
Tom Parker (32:13.181)
Well Gareth, I'm coming to you next after Simon's frustration with bank checks, but I wanted to pick up on the point he made about a mixture of low-tech and high-tech, and I want to go back to the high-tech part of it, because there is software out there that detects digital artifacts like micro-glitches in skin texture or blood flow, which is just mind-boggling to me. Are these technical forensic tools the ultimate shield? From a financial point of view, are we going to be going that way, or are you going to be sorting out Simon's issue of push notifications for window cleaner payments?
Gareth Murray (32:44.077)
Yeah, it's a great question. I would summarize it by saying these new technologies have a place, but they're not forever. I mentioned before the arms race between the banks and the fraudsters, and that arms race has existed for as long as there have been banks, whether it's the original check fraud or, more recently, deepfakes and AI voice synthesis. It's a constant battle to keep pace with the technology that criminals are using and to
make sure our technology stays one step ahead. So yes, at the moment, things like being able to detect blood flow and subsurface micro-glitches within the skin are pretty powerful tools and, I think, harder to spoof, but they will be defeated eventually by criminals. So it's about where we can keep that one step ahead and keep finding new ways to do this. As I mentioned, within the layers of defenses we have, I think there's a place for those tools.
But I think we also have to think about how we engage with the hardware suppliers alongside this as well. Our ability to deploy more advanced technology and more advanced detection software is also based on what's possible on the device. Right now, LIDAR and infrared and being able to detect blood flow are possible on some devices that customers use, but not all. And those devices can't do other things. They can't do X-ray. They can't do...
other capabilities we might want in future that give us more security and more options about where we deploy that security in different customer journeys. So an ongoing conversation I think is important: banks, Veriff and CIFAS working together to think about what technology will add the most value in future, and how we engage with those hardware suppliers to get it onto the device, so we have the possibility to use it going forward.
Hubert Behaghel (34:37.238)
I want to emphasize that it's true the techniques we know work today are most likely not going to work tomorrow. The point Gareth is making that resonates with me a lot is that it's actually about your ability to surround yourself with the people who are going to know how to keep innovating, and even, I like your view actually, Gareth, where we can bring the hardware guy and the software guy or the machine learning guy together and see how much we can tighten up the flow.
And so, yeah, I think it's really important to understand that this market is going to have to identify who are the technology builders and the true innovators, versus, you know, the people who are more into the aggregator or reseller role, because this is going to move at such a pace that, if you don't own the technology, if you don't own the ability to innovate, it's going to be very hard to keep up.
Tom Parker (35:29.149)
You just talked about innovation here, and what works today might not work tomorrow. It is such a fast-paced digital
battleground. I want to look at the future now. What are some of the most exciting defences in our trust-based arsenal that you can see coming in the next five years that will protect us? Gareth, let's start with you, Simon after that, and Hubert, you can back us up.
Gareth Murray (36:14.219)
Briefly, there are three areas that I'm really excited about. The first is, and Hubert mentioned this, the concept of a centralized digital identity has a place within our framework moving forward, agnostic of who provides that digital identity. We've seen examples in the Nordics where applying a digital identity across society, which can also be used by financial services, has helped reduce fraud significantly, particularly impersonation fraud.
So I think there are areas where it can be employed within the UK to really add value, as long as it comes with the right kind of security, the right authentication and the right framework around it, and provides a convenient, secure capability for customers to share their digital identity online and choose where to share it. The second area I find really interesting is our use of AI as an industry. In the past year,
we've reduced the amount of fraud at Monzo by 2.9 times through the employment of AI tools, whether that's machine learning or LLM tools across our network. So we're already seeing the capabilities that applying AI gives us. And with the speed of advancement in those tools, we see far more capability to keep deploying them and keep protecting our customers more and more. So I'm really excited to see how that's developing. And the third area that is emerging across the industry
is the offensive capabilities using AI to go after the scammers themselves. So there's a number of suppliers out there that operate what are now called honey bots. And they will use AI to engage in conversations with scammers on different platforms, whether it's social media or chat functions in different apps, to be able to convince the scammer that they're talking to a real person, but they're actually talking to an AI bot that's been created.
They will then convince the scammer to provide account details, and we can use those account details to make sure our customers don't make payments to those accounts. Or if they're our own account details, we can obviously take action against those accounts. So we can really go on the front foot, using AI to be offensive rather than defensive, and making sure that arms race is tipping in our favor going forward.
Simon Miller (38:29.763)
So I'm going to echo a whole host of things that Gareth just said. The idea of scambaiting and using that data and insight and intelligence against the scammers and the fraudsters is really exciting and hugely valuable. And I love the idea of my...
AI agent doing battle against a scammer so I don't actually have to do anything myself. I think that feels absolutely like the dream and where we should be heading. But one of the big innovations that we need to see, if we're really serious about reducing the harm and damage caused by fraud and scams, is the instantaneous sharing of data between key organizations, to get fraudulent content off platforms and stop it from hitting people's inboxes in the first place. That's got to be key. And the third thing we really need to push forward: we use our phones
as a gateway to our digital lives. But we by and large keep them open. Our phones are jam packed with all sorts of tech and AI-based applications that really can keep us safe. So rather than just use them as a gateway, we also need to use them as a gate to keep us safe as well. So real progress in those spaces would be, I think, a real milestone if achieved.
Hubert Behaghel (39:38.222)
I'm going to take one specific example, which I think allows me to make a point. We have this technology that's being championed by Adobe, but really is being federated by a lot of the players in the industry, called C2PA. Effectively, what this allows us to do is to say that for any picture in the future, we'll be able to understand the entire history of that picture, which device took it, was it opened in Photoshop, et cetera, and that will be recorded in the blockchain. And that will give us the entire history,
impossible to tamper with, of any document whose authenticity we want to validate. Today it's too early, there are so few devices that have actually adopted this to create this end-to-end chain. But it shows two things. On one side, we've just started architecting trust at scale. And this is exactly what Gareth was saying before: we bring the hardware guy, we bring the software, we bring the protocol.
But it also goes exactly to what CIFAS is doing. It's not just about sharing data, right? There is a data plane, but there is a whole house around it. And this architecture of trust that's coming for the internet is so full of possibilities. The second thing is this compound effect: when you start tightening the house and you have the data working for you, then you really start having a big machine that allows us to be much more offensive with the fraudsters instead of just being reactive.
Tom Parker (41:05.477)
As we look at the next five years, what is the one roadblock, regulatory, technical or social perhaps, that could stall our ability to close this trust gap? Simon, let's go to you.
Simon Miller (41:31.437)
The thing we really have to tackle, if we're serious about getting on top of the fraud problem, is our ability to share data with each other cross-jurisdictionally. We need a shared, single legal framework for sharing data for the purposes of preventing fraud. We don't have that at the moment. Sharing data is a nightmare, although some organizations do manage it, and some jurisdictions are better than others. But...
We need to act internationally. Fraud is an international business. Currently, that is a blocker.
Tom Parker (42:04.347)
Great, Hubert.
Hubert Behaghel (42:06.968)
I would say there is a cold start problem when it comes to digital identity. We see today what I would call a balkanization of identities, where every government is producing tokens, but so are more and more online services. For this to work, you need, on one side, enough adoption so that service providers trust these identities and want to onboard them. But on the other side,
you also need enough of these services to accept them for one of these tokens to become the one way of capturing our digital identities. So this is where I find it's going to be fascinating to see the market in the future. The chips are going to fall where they fall, but there is this cold start problem, and we don't actually know what's going to happen.
Tom Parker (42:52.742)
Gareth.
Gareth Murray (42:55.039)
I think as an industry, our biggest roadblock is something we mentioned before, which is the tension between stronger financial crime controls and increasing customer friction. Customers are using digital banking for the convenience. With scams becoming more sophisticated, like the AI-based scams we talked about earlier, banks need to employ more data-driven or biometric verification. What we want to avoid is that being seen as surveillance by customers; we want to make sure it's seen as seamless,
transparent and proportionate in the scenarios it's working in. If it continues to be those things, then we should be able to deploy more of this technology and keep closing that trust gap. If it goes away from that and starts to become intrusive or starts to look like surveillance, that could widen that gap.
Tom Parker (43:38.449)
Let's look at 2030. Will the average listener to this podcast feel more secure then, or will they have retreated from digital interaction entirely?
Gareth Murray (43:54.637)
So I think by 2030, I wouldn't expect the average person to have retreated from digital life, but their sense of security within their digital life may become lower. So does that increase their anxiety when using digital platforms, or does it increase their awareness? I think we've touched on it a little here, but it's really about how we can get cross-industry collaboration to tackle more of these things, whether that's across the social media platforms, the telecommunications platforms and the banks,
and then the suppliers that we work with that offer these capabilities such as Veriff and CIFAS to really bring this together. How we can do that, how we can really improve the security perception amongst customers within their digital life and give them the tools to be able to apply more security when they need to, I think will be key to bridging that trust gap as we go forward.
Tom Parker (44:46.907)
Simon.
Simon Miller (44:48.121)
I'm gonna build on what Gareth just said. So I think if organizations, sectors and nations keep on trying to defend against fraud in isolation, listeners are gonna feel much less secure because this pervasive sense that fraud is everywhere will only increase. If we can get on top of that problem, if we can get those flows of intelligence moving between sectors and across borders, I think we're gonna feel a lot more secure.
Tom Parker (45:15.12)
Hubert.
Hubert Behaghel (45:16.942)
I've decided to take a resolutely optimistic view on that. I think that by 2030, the average citizen will have a very comfortable digital life, because the internet will have actually baked the concept of identity and trust into its infrastructure at the right level. And then there will be this trust partner, this one entity that you share a lot of information with so that you don't have to share any more information with the rest of the world, respecting your privacy, but also acting as this
neutral, safe voucher for you being a good person, valid for these different use cases in your digital life. And that's going to make your life much easier than today.
Tom Parker (45:56.453)
Well, Hubert, that optimism has sort of leaked into my final question, which is: what would be your hope of where we'll be in five years? I mean, is that your hope, or do you want to add some more to that before I go to the other guys?
Hubert Behaghel (46:08.066)
No, I think that's it actually. We can keep it at that. Thank you.
Tom Parker (46:11.249)
Perfect. Gareth, what's your hope of where we'll be in five years?
Gareth Murray (46:15.745)
Yeah, my hope is that we've really come together as an industry to defeat the threat from fraud. I think collective defense is the best option we have, and it also provides a more consistent experience for our customers. This requires us all to come together without a competitive mindset: we're all facing the same threats and the same challenges, so sharing our approaches, sharing the technology, sharing what works well and what doesn't work well between us will actually help bridge
that gap and help build more of a collective defense. Within that as well, live data sharing at the point of transaction between all banks, so we've got greater context on what customers are doing, and the customer has more information on what they're doing and can make better decisions. The willingness to go outside your remit, take some risks and have the technology capability to do that is, I think, a challenge in the UK, and the regulatory and legal frameworks need to support that more going forward. But I think it's a big opportunity for us.
I mean, this was touched on before, but criminals operate like a well-oiled network. They share information between themselves. In the UK, the anti-financial crime network is huge. We spend 38 billion pounds a year across the industry on stopping fraud. Our network is huge, but it's not integrated. How do we bring that into a really integrated network, so we share information, we share learnings, and we really help each other get better at this to provide that collective defense?
Tom Parker (47:39.791)
Yeah, Simon, lastly from you, I mean, I feel like you and Gareth are on the same page with this integration and collaboration and collectiveness. But what's your hope for the next five years?
Simon Miller (47:48.501)
So what Gareth has just said absolutely has to be the future. We are failing ourselves, our consumers and our nations if that is not the case. So my real hope, and I'm quite optimistic about this, is that across key nations, even globally, we will have political agreement that tackling and preventing fraud, rather than reacting to it, is a priority for our policymakers, and that then feeds down into our respective industries.
Well, it's now time to thank some of these architects of honesty that have been with us today. So, Hubert, Gareth, Simon, it's been a real pleasure to have you on the show.
Thanks for sharing all of your expertise and insight.
Gareth Murray (49:19.543)
Thank you very much.
Simon Miller (49:20.217)
Thanks so much.
Hubert Behaghel
Thank you for having us, Tom.
[MUSIC UP]
Tom Parker (48:16.953)
Well, as we close this chapter, it's clear that the battle against fraud is shifting from protecting data to protecting reality. And it's easy to feel overwhelmed by billions in losses and industrial scale attacks. But there is something deeply human about the solutions we've discussed, from the total reinvention of how we prove who we are digitally to the family code word in our pockets. If I can take anything away from this fascinating
slightly scary but important episode, it would be that the next five years aren't about stopping the fakes, they're about building a new infrastructure for the truth, and building that infrastructure collectively. We are all trust-bound. And if we get this right, our digital identity becomes our greatest strength.
[MUSIC OUT]