THE NEXT FIVE - EPISODE 30
Cyber Risk And Security In An AI World: What's In Store?
Cybersecurity, data and AI: the backbone of business
The Next Five is the FT’s partner-supported podcast, exploring the future of industries through expert insights and thought-provoking discussions with host, Tom Parker. Each episode brings together leading voices to analyse the trends, innovations, challenges and opportunities shaping the next five years in business, geo politics, technology, health and lifestyle.
Featured in this episode:
Tom Parker
Executive Producer & Presenter
Charlie Giancarlo
CEO of Pure Storage
Nicole Carignan
SVP of Security and AI Strategy at Darktrace
Anthony Ferrante
Global Head of Cybersecurity at FTI Consulting
In today's digital world, artificial intelligence, data storage and cybersecurity are a critical triumvirate, intersecting to form a dynamic ecosystem that underpins modern technological infrastructure.
They are strategic pillars that drive innovation, operational efficiency and risk management. Thus their interaction and integration are key to building resilient and secure digital systems capable of supporting the demands of our digitally dependent future. In this episode Charlie Giancarlo, CEO of Pure Storage, discusses how important it is for an organisation to know where its data is, and how to correctly, safely and securely store it ready for our AI future. Nicole Carignan, SVP of Security and AI Strategy at Darktrace, and Anthony Ferrante, Global Head of Cybersecurity at FTI Consulting, further explain why data is the backbone of AI, the importance of securing your data, and the vulnerabilities organisations face in a modern digital world.
Sources: FT Resources, WEF, PWC, Allianz, National Cyber Security Centre, McKinsey, UK Gov
Transcript
Cyber Risk And Security In An AI World: What's In Store?
Charlie: Data within large organizations, enterprises, has really fallen behind in terms of technology. It puts you both at great risk, but it also means that a lot of that data is inaccessible for the AI plans that you have in place.
Nicole: One third of the URLs that were produced by a large language model were actually not real URLs, host names or domains; those could be used to manipulate phishing attacks, malicious redirection, et cetera.
Anthony: The United States Department of Defense is attacked well over a million times a day. Imagine if that much data could be processed in a single day. It can allow net defenders to look at that data and say, okay, we were attacked a million times yesterday, but only four of them are actually considered significant risk.
TOM: I'm Tom Parker, and welcome to The Next Five podcast, brought to you by the FT partner studio. In this series, we ask industry experts how their world will change in the next five years, and the impact it will have on our day to day. In this episode we look at AI, cyber security and data, as well as the opportunities and challenges that businesses face while navigating this fast-evolving digital world.
TOM: In today's digital world, artificial intelligence, data storage and cybersecurity are a critical triumvirate, intersecting to form a dynamic ecosystem that underpins modern technological infrastructure. They are strategic pillars that drive innovation, operational efficiency and risk management. Thus their interaction and integration are key to building resilient and secure digital systems capable of supporting the demands of our digitally dependent future.
So let’s start with one of our pillars, and perhaps the most important, the role that data plays and how we’re using it.
Charlie: We can absolutely see the way that data has exploded over the last several years.
TOM: This is Charlie Giancarlo, CEO of Pure Storage.
Charlie: Perhaps some people in the audience may remember having a disk drive in their system that was measured in megabytes, and today typically in gigabytes. Well, we now measure such things, and every word I'm about to say is a thousand times bigger than the word before, in terabytes, petabytes and now even exabytes. Okay, so that's a thousand times a thousand times a thousand. Data storage within large enterprises has multiplied by at least a thousand in the last 10 years, and it's probably going to multiply by another thousand even over the next five or six years. So it's just remarkable how fast data has multiplied. It's almost become a bit of a meme in and of itself that data is starting to become the lifeblood of many organizations. Obviously it's the lifeblood of organizations like Meta and Google, but increasingly of banks and of consumer companies that need to understand their customers better, market to them better, and also satisfy their needs proactively rather than reactively. And even B2B companies want to understand our customers better, we want to understand our sales forces better. And so being able to analyze our data, have accurate data and utilize it in the right way is essential.
TOM: For large enterprises, adopting AI at the accelerated pace required to remain competitive is imperative, as is solving the unpredictability of AI's evolution. Thus, how data is collected, used and stored needs to be transparent and simplified. The data demands of modern AI are too much for disparate legacy systems to handle. Disorganised and fragmented data is plaguing businesses, and that's not just in the private sector. According to the UK's State of Digital Government Review published in January 2025, 70 percent of the 120 public organisations surveyed had a poorly co-ordinated, interoperable data landscape.
Charlie: Data within large organizations, enterprises, has really fallen behind in terms of technology compared with where we have gone as consumers. So what do I mean by that? Well, as consumers, we have largely gone to the cloud now to store our data. And it's made it a lot easier. We can get access to it anywhere in the world from any of our devices, whether it's an iPhone or a mobile phone of some type or a laptop or a desktop. We can even go to someone else's environment and use their computer to get access to our data with the right passwords. So that's a very modern way of doing it. In the case of enterprises, though, each one of their systems, whether it's their credit card processing systems or their email systems, has its own store of data. And that store of data is not accessible to other systems inside their environment.
So inside an enterprise, they tend to have very fragmented storage. It's stored all over the place. When they want to analyze it, they generally have to copy it to a new system that is tied directly to the analytics engine or the AI engine. And so they're copying it yet again. Whenever you copy it, it then becomes old. So what's really happening now in our belief is AI is driving the need to be able to analyze data in real time, to be able to analyze the data where it sits.
not having to move the data to where the AI engine is, but rather moving the AI analytics to where the data is. So it's a big change in enterprise architecture for how they handle their data.
TOM: AI lives on data. The accuracy and reliability of AI models rely on the quality and integrity of data. What's clear is that as businesses and individuals rely more and more on data, and with the advent of AI, cyber threats are a top risk that requires constant consideration by boards, employers and employees.
Charlie: So data now is not only important for the purposes of being able to provide value to an enterprise, but it has also become a huge liability in terms of its susceptibility to cybersecurity events. A lot of ransomware now is not just possibly encrypting the data, but also taking the data, stealing the data, and then holding it up for ransom.
This data could contain names of customers, details about those customers, and of course, if that gets out into the public, not only is it dangerous for the customers, but of course, it may create liability for the organization that lost that data in the first place. A big challenge for most organizations, and in fact, a question I typically ask them, is do you know where all your data is? And often, they do not know where all their data is. Data inside the enterprise is often copied, replicated for different purposes. And then if the individual that copied that data either moves to a different department, leaves the company, that copy could be forgotten about. And it's often those copies that are forgotten about have been sitting there for a year or two, never been updated with respect to new security policies or so forth. That's the copy that gets exfiltrated and then is held up for ransom. So data is both an asset, but it also can be a liability because of the possibility that it is made susceptible to a cyber event.
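Charlie's point about forgotten copies can be made concrete. A minimal sketch of a first-pass audit, assuming a simple filesystem layout and a hypothetical one-year staleness threshold, might look like this:

```python
import os
import time

STALE_AFTER_DAYS = 365  # assumption: copies untouched for a year warrant review

def find_stale_files(root):
    """List files under `root` not modified in STALE_AFTER_DAYS days.

    A crude first pass at the 'do you know where all your data is?'
    question: forgotten copies, never updated with new security
    policies, are often the ones that get exfiltrated.
    """
    cutoff = time.time() - STALE_AFTER_DAYS * 86400
    stale = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    stale.append(path)
            except OSError:
                pass  # file vanished or unreadable; skip it
    return stale
```

A real data-governance programme would of course track provenance and ownership, not just timestamps, but even a scan like this surfaces the forgotten replicas Charlie describes.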
TOM: Indeed, according to the UK National Cyber Security Centre, when protecting your data it’s important to know what data you have, where it’s stored and what is the most sensitive.
Nicole Carignan: Well, all AI systems run on data. And so that is the key critical foundation.
TOM: This is Nicole Carignan, SVP of Security and AI Strategy at Darktrace.
Nicole: And when we talk about securing the use of AI systems, it really comes down to securing the use of data. And so storage is that critical component, because that's where the data resides: the key foundation of security, but also data integrity, the quality of the data that's being used to train these systems, and the data supply chain risk for those AI systems.
Nicole: How much of that data needs to be properly secured and stored? Not everything is critical. And when we talk about the exploding, exponential data problem that we've seen over the last two years with generative AI, where we're generating more data than we ever have in the history of man, most of that is inaccurate. Most of that hasn't been grounded or sourced. And so it's not even really necessary to keep, store and secure it over long lengths of time, because it hasn't been properly classified as critical to the organization. But you're also talking about, especially as organizations are innovating really quickly, them generating a lot of data without realizing that there are still sensitivities in that generated data, even if it's inaccurate.
So we're talking about a major investment in data integrity in order just to achieve safety and security.
Even in talking with my family members and friends who don't work in the AI or security space, I have to continuously remind them, especially in the case of generative AI, that it is generating new synthetic content based on the data it was trained on, not what you're asking it. And so a lot of the inaccuracies, the hallucinations, come from the data corpuses that it was initially trained on, as well as the biases in its training, which are going to be very different from the biases of the users consuming it. There was a study that found one third of the URLs produced by a large language model were actually not real URLs, host names or domains; those could be used to manipulate phishing attacks, malicious redirection, et cetera. And that level of inaccuracy, specifically in the case of generative AI, is actually a low level of accuracy by data science standards for most machine learning models. 66% accuracy is not sufficient. We really try to strive for 80, 85, 90 at any given time. So really focus on how you ground that output to ensure its reliability and accuracy, but that ultimately comes down to trust. A new technology is not going to be adopted if it cannot be reliably accurate. And we can't achieve autonomy, especially in security, and that's what we need, to achieve autonomy, if it's not reliably accurate and humans don't actually have the interpretability or understanding as to how it came to its conclusion. So it's reliant on the data that it was trained on, but it's also reliant on the interpretability of the data that it outputs: how did the model come to this conclusion? What are the data sources that it pulled from? Can a human validate or verify that output? And how easily can that be turned into an autonomous action?
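The hallucinated-URL risk Nicole describes lends itself to a cheap automated guard. A minimal sketch, assuming a hypothetical allowlist of domains an organisation actually controls (the names below are illustrative, not real policy):

```python
from urllib.parse import urlparse

# Hypothetical allowlist; in practice this would come from DNS records
# or an asset inventory, not a hard-coded set.
KNOWN_GOOD_DOMAINS = {"example.com", "docs.example.com"}

def flag_suspect_urls(candidates):
    """Return URLs whose host is malformed or absent from the allowlist.

    A fabricated hostname emitted by a language model could later be
    registered by an attacker for phishing or malicious redirection,
    so anything unverified is treated as untrusted.
    """
    suspect = []
    for url in candidates:
        parsed = urlparse(url)
        host = parsed.hostname or ""
        if parsed.scheme not in ("http", "https") or not host:
            suspect.append(url)  # structurally invalid
        elif host not in KNOWN_GOOD_DOMAINS:
            suspect.append(url)  # unverified host: do not trust
    return suspect

urls = [
    "https://docs.example.com/guide",
    "https://exampel-support.com/login",  # plausible-looking but fabricated
    "not-a-url",
]
print(flag_suspect_urls(urls))
# → ['https://exampel-support.com/login', 'not-a-url']
```

This only checks structure and membership; grounding the model's output against live DNS or a threat-intelligence feed would be the production-grade version of the same idea.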
TOM: WEF's Global Risks Report 2025 puts cyber espionage and warfare as the fifth-biggest near-term risk worldwide. In the UK, however, according to the Allianz Risk Barometer Report 2025, cyber incidents, including cybercrime and data breaches, are the number one risk for UK businesses. Even though there are still inaccuracy issues with gen AI, it could also serve as a defensive ally against cyber threats.
Anthony: You're as strong as your weakest link.
TOM: This is Anthony Ferrante, Global Head of Cybersecurity at FTI Consulting.
Anthony: And I talk to organizations every single day about this, in my previous world as a US government employee and now in private practice. Everybody thinks of cybersecurity as something that, you know, my chief information officer needs to deal with, my chief security officer should deal with, not me. And it's wrong. Cybersecurity is a team sport. Back to where I started, you're as strong as your weakest link. So everybody's got to be smart on this. Where we run into difficulties is the reality that adversaries are getting smarter and smarter every single day. And so unless you live and work in this space every single day, it's hard.
It's hard to keep up with the risks. And that's where I can see artificial intelligence helping organizations. Again, depending on how it's deployed or leveraged at an organization, it can be used to help bake in those layered defenses at a company whereby possibly artificial intelligence is used to cut out the noise, cut out the white noise of an adversary.
Right? Most organizations are attacked hundreds, if not thousands, if not millions of times a day. I forget the exact statistic, but the United States Department of Defense is attacked well over a million times a day. Imagine if that much data could be processed in a single day. It can allow net defenders to look at that data and say, okay, we were attacked a million times yesterday, but only four of them are actually considered significant risk because they're using a technique or a tool that we've only seen from this adversary, this nation state adversary. That is powerful. I mean, it could revolutionize investigations as we see them today. It could allow us to build better defense layers where certain aspects could be automated by artificial intelligence, and then the most significant top 10% is evaluated by a human.
TOM: In this world of data and machines, humans do still play a critical role. Some reports put human error as being responsible for up to 95% of cyber incidents. However, integration of AI into sensitive systems also introduces new vulnerabilities, such as adversarial attacks and privacy concerns. One term to note before we hear from Nicole is CVE. CVE stands for Common Vulnerabilities and Exposures, a system used to identify and catalogue security vulnerabilities.
Nicole: I think that 95% stat of human error causing the vulnerability is actually going to decrease over time. In our threat report that we published last year, we saw that 40% of the campaigns investigated started with a vulnerability in some sort of application. That could be a zero-day or an n-day vulnerability depending on when it was attempted to be exploited. And we're starting to see that increase. I think there were 40,000 CVEs disclosed in 2024, which was 16% of all the CVEs that have ever been disclosed. And when I say a CVE, that is a common vulnerability in an application that could be exploited. So that is now a technological vulnerability, and we're innovating very quickly and creating new AI systems, new applications. All of those have potential vulnerabilities that could be exploited. Now we have threat actors who have access to the same technology we're innovating with, generative AI, and they're using large language model based agentic systems to identify the vulnerabilities in the first place, to attempt to exploit them before they have an opportunity to be disclosed and patched. We see that in HackerOne's latest leaderboard, where an agentic system is actually the number one vulnerability identifier in their entire ecosystem. And we saw over the course of multiple academic research papers in the last year that this is how threat actors are potentially identifying their initial ingress. So threat actors are going to go for the easiest vector in. For a long time that was humans. And don't get me wrong, humans are still going to be an easy vector in. But if they have tools available to identify easier ingress points, where they don't have to do a massive amount of surveillance and research on how to social engineer a specific individual, they're going to use a technological vulnerability in order to get in.
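Since CVE identifiers follow a fixed, published pattern (CVE-year-sequence, with a sequence of at least four digits), pulling them out of advisories or logs is a one-line regular expression. A minimal sketch:

```python
import re

# CVE identifiers have the form CVE-<4-digit year>-<sequence of 4+ digits>,
# e.g. CVE-2021-44228 (Log4Shell).
CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,}")

def extract_cves(text):
    """Pull well-formed CVE identifiers out of free text."""
    return CVE_PATTERN.findall(text)

advisory = "Patched CVE-2021-44228 and CVE-2024-3094; 'CVE-21-1' is malformed."
print(extract_cves(advisory))
# → ['CVE-2021-44228', 'CVE-2024-3094']
```

Matching the format is only the first step; whether a given identifier actually exists has to be checked against the CVE registry itself.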
Nicole: There are so many more technological advancements that we could achieve in order to improve the robustness of our defense capabilities. And we have to, because we don't know what tomorrow's attack looks like. That's become more and more evident over the last two years. So we have to have adaptive systems that are continuously learning, that do advanced behavior analytics, pairing that with anomaly detection as well as risk behavior profiling, and pulling all that together with event correlation across the different domains in your digital ecosystem so that you have the full scope of understanding of potential incidents, and you can prioritize where the humans in the SOC focus their time first. Instead of having to worry about hundreds of thousands or millions of alerts, you're now focusing humans on a couple dozen alerts that are the most critical to the protection of your organization. That is a game changer.
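The triage idea Nicole and Anthony both describe, collapsing a flood of alerts down to the handful worth human attention, can be sketched with simple statistical outlier detection. The alert IDs and scores below are made up for illustration; a real SOC pipeline would derive the risk score from behavioural analytics and event correlation, not a single number:

```python
import statistics

# Toy alerts: (alert_id, risk_score). Scores are assumed to come from
# upstream behaviour analytics; these values are purely illustrative.
alerts = [("a1", 0.20), ("a2", 0.30), ("a3", 0.25), ("a4", 0.90),
          ("a5", 0.22), ("a6", 0.95), ("a7", 0.28)]

def triage(alerts, z_threshold=1.0):
    """Surface only alerts whose risk score is anomalously high.

    Computes a z-score for each alert against the population of scores
    and keeps those more than `z_threshold` standard deviations above
    the mean, so analysts see outliers rather than every alert.
    """
    scores = [s for _, s in alerts]
    mean = statistics.mean(scores)
    stdev = statistics.pstdev(scores)
    return [aid for aid, s in alerts if (s - mean) / stdev > z_threshold]

print(triage(alerts))
# → ['a4', 'a6']
```

Production systems layer far richer signals (entity behaviour baselines, correlation across domains) on top, but the shape is the same: score everything, escalate only the statistical exceptions.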
TOM: A game changer indeed. In fact, given the speed of evolution in data, AI and cyber security, we can expect much change over the next five years.
Charlie: Well, it'd be hard to be in the technology business and not realize that everything will probably change in the next five years. That is certainly true. When I speak to senior levels within the IT organizations of our customer base, I usually start by talking to them about things that will surprise them, such as: do you know where all your data is? And they generally have to admit that they don't. Do you realize that much of that data has been managed manually, and, given people movement and the fact that there aren't records kept, a lot of it has been forgotten about? And if that is the case, it puts you at great risk, but it also means that a lot of that data is inaccessible for the AI plans that you have in place. These are important conversations that give senior managers a platform upon which they can start building plans for a much more automated environment in which to manage their data. If I look at the next five years, the next five years is really going to be about building data systems, building data plans and strategies that allow data to be a much bigger component of competitive advantage and of cyber resilience inside organizations. Having to think about data and data provenance, how you treat your data, how you track it, how you curate it, how you govern it, and how you profit from it is going to be the critical element. I see the combination of data processing, data storage for AI, and cybersecurity as being absolutely and very tightly linked. And it's all based on the ability of organizations to govern and manage their data. The better you're able to govern it and manage it, preferably under software control, the better you're able to track it, the better you're able to protect it with common policies, and the better you're able to access it for things such as AI or any other purpose inside the organization.
So having both a data storage architecture and a data governance architecture that are well thought out, unified and operated under software control, the better you'll be able to both profit and protect your company with respect to the data.
Anthony: Without a doubt, the next five years is gonna be focused on the implementation of artificial intelligence and the various parameters of those implementations, whether it's government regulation, data classification, data security or data access. Once artificial intelligence tools are created and in place, it is going to be about the usage of those tools, how they're used, whether they're abused, and the effects of that. From my perspective as a security professional and an investigator, I think we're going to start to see more and more of these issues on the horizon. We're already starting to see them today with respect to the implementation of artificial intelligence instances, and I think we're just going to continue to see more and more, not only in the frequency of these issues, but also in the scope and scale of the impacted individuals. That's a real risk that we all need to be eyes wide open to. For cybersecurity professionals and adversaries, it's a cat and mouse game. It always has been and always will be. And believe it or not, some of the most skilled, ethical, white hat cybersecurity practitioners have learned everything they know from black hat cybersecurity professionals. And that's okay. The reality is that people are going to push the envelope and test infrastructure. Again, anything that can be engineered can be reverse engineered. People are going to do that. I just think it's human nature. People are curious. And so they're going to continue to push in that space. The ethical cybersecurity professionals are going to continue to look for that type of activity, identify it, and work to combat it. It is the world we live in today.
Nicole: I mean, the next five years is going to be a really fun five years. So let's first start with the startup space. I always love the startup space when it comes to a technological revolution like the one we're in with AI right now. So you're going to see, and we've already started to see, a lot of AI asset discovery: identifying that shadow AI within an organization. But I think that's going to morph into AI security posture management, similar to transitions that we've seen from cloud previously, as well as OT. And then we're really trying to wrap our heads around the data problem, right? And so data governance and data classification is king right now, but that's really gonna morph past data security posture management into data detection and response. You're gonna wanna understand the lineage of that data, how it's morphing over time and where it's transitioning to, to really get your arms around data loss prevention. And so I think data security is going to have a big upheaval, which is good. I think, too, we're going to start to see a lot more focus on good data science principles. So you're going to see a lot of technology come out on autonomous data integrity solutions that evaluate the data and perform that data classification and governance function, as well as autonomous testing, evaluation, validation and verification solutions. As we start to adopt more and more AI models, you're going to have infinite outcomes that need to be tested, essentially. And so you're going to look at more of an autonomous testing solution in order to achieve that. But again, that's more future state. I don't think generative AI is going to dominate as the machine learning technique of innovation. I think you're going to see a lot more experimentation in AI, which I think is really cool. And cyber threat intelligence is going to have a complete evolution.
For the past 20 years, it's been really reliant on sharing historical attack data, signatures, hashes, IOCs of attacks so that organizations could defend against those. And that is inherently historic and static. We're no longer gonna be able to depend on that. So I think CTI, or cyber threat intelligence, is going to morph into more behavioral-based cyber threat intelligence that's more adaptive and all-encompassing of new TTPs, or malicious tactics, that are being used to attack an infrastructure organization or AI systems themselves. And then lastly, I think we're gonna be moving more toward autonomy. Agentic systems is the buzzword of the day, but agentic systems are not new. I think autonomy will be achieved through agentic systems, but it's not going to be dependent on one machine learning technology. So you're not going to have just large language model based agentic systems. You're going to have a myriad of different types of machine learning techniques powering these agentic systems, working in collaboration and cooperation to achieve autonomous function accurately and with reliability. So that's going to change the way humans do their jobs, because they're going to be uplifted and enabled by AI. But it's also going to create a whole bunch more job functions around that, because you're still going to have to have the human around the AI systems. So it's no longer human in the loop or human out of the loop. It's going to be a human around the loop.
TOM: Whether it's human or artificial, it all comes down to data. Organisations need visibility of it, to secure it, and to provide clean, reliable data to get the best out of AI. It is the holy grail in our digital age; that is why nefarious adversaries seek it so, and why we must diligently store and protect it. Fundamentally, enterprises need digital infrastructure that treats data as a strategic asset. According to McKinsey, value is found in how well companies combine and integrate data and technologies. Get it right, and in a timely fashion, and your organisation can eke out a competitive advantage. Get it wrong and your organisation is at risk in an increasingly threatening cyber war. Integrating AI into cybersecurity can be a strategic advantage, but it still poses some threat. If anyone is left a little on edge, take solace in the words of Ollie Whitehouse, CTO of the UK National Cyber Security Centre, when discussing AI: "We need to be mindful of it, we need to understand the risks it poses, but I believe the cyber-defence benefit will often outstrip the cyber-offensive gain that our adversaries get." Given what we've learnt today, the risk-reward balance of AI must always be monitored, with data safety, security and integrity at the core.