Transcript
The Future of AI in Cybersecurity
Soundbites:
TOM: With the advent of AI, the cybersecurity landscape is changing.
GUEST Philippe 08:30
06:57 I can create a duplicate of your fingerprints just based on a picture.
TOM: Used properly, AI can help secure our cyber world better than before.
GUEST Vicente 29:44 I hope they will be helping us to protect ourselves way better against anything known and unknown.
TOM: If we build trust into AI systems...
GUEST Isabel 14:25 we need to work on this side of the security of AI.
and for that to build the pillars of having trust on AI from the technical point of view, also from psychological point of view
TOM
I'm Tom Parker, and welcome to the Next Five podcast, brought to you by the FT Partner Studio. In this series, we ask industry experts how their world will change in the next five years, and the impact it will have on our day to day. We're continuing our special five-part miniseries where we take a deep dive into the world of AI. Each episode focuses on an industry sector where AI is having, and is set to have, a big impact.
This fourth episode in the series is all about AI and cybersecurity, where we delve into the opportunities and challenges that AI brings to the cybersecurity landscape.
MUSIC FADE AND BREATHE
TOM: Cybersecurity is nothing new, but the playing field is constantly changing. Most of our lives are now online, whether personal or professional, and online safety is paramount. With technological advances like AI, cybersecurity needs to keep pace.
And the pace is alarmingly quick. There are over 2,200 cyberattacks every day globally; that's nearly one every 39 seconds, or more than 800,000 a year.
The cost of cyber breaches is huge. According to IBM, the average cost of a single cyber breach in 2023 was 4.45 million dollars. By 2025, the worldwide total cost of cyber breaches is expected to reach 10.5 trillion dollars annually, up 15% year on year. But the cost isn't just monetary; there are wider human and societal costs. No sector or individual is left unaffected. Attacks are levied against public utilities, private sector companies, governments and people around the world. With our future becoming more reliant on tech and ever more interconnected online, how do we keep humanity safe in an AI future where hackers can reap the benefits of AI's advanced capabilities? And on the other side of the coin, can artificial intelligence be a powerful ally against attacks and help secure our cyber world?
Vicente 02:08 So the more we explore, the more capabilities we find.
TOM: This is Vicente Diaz, Security Engineer at Google
But from what we have discovered so far, where we see that AI is extremely efficient
is in helping analyse malware, in helping us understand attacks better and in providing more context.
Vicente 04:35 I think that most of our organisations are simply buried in alerts and it's very difficult to distinguish which ones you should put more attention to and what kind of tools you can use to quickly go through them. In this case, for instance, when we are using this for malware analysis, it's saving hours of analyst time just by being able to describe what the malware is doing, going through obfuscation layers.
and providing this to the analyst.
TOM: Individual users are the access points for most cyberattacks, with 95% of cyber breaches traced to human error. But there are benefits that machine learning AI models can bring to the game where human error prevails.
So AI is like all the different security systems we already develop. With Gmail, for instance, we are filtering 99-point-something per cent of all the phishing that you are getting, right? So we will obviously learn more methods with AI to make it even better. But when it comes to the message itself, let's say there is nothing malicious, it's just an email coming from someone and telling you something, then it's up to the human to interpret this, right? Can AI help here? For sure, we can detect patterns; for sure, we can see there are these campaigns going on, and there will be different ways of detecting something suspicious. But not everything can be monitored, not every single channel through which two human beings are communicating, electronically or not. AI is not going to solve this per se, because the problem we are talking about is a human one.
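As an illustration of the pattern-detection side Vicente mentions, a rule-based scorer can flag the crudest phishing tells. This is a toy sketch, not Gmail's actual filter, and the patterns and weights below are invented for the example:

```python
import re

# Hypothetical, simplified phishing heuristics. Real filters rely on
# trained models over far richer signals than a handful of regexes.
SUSPICIOUS_PATTERNS = [
    (r"verify your account", 2),
    (r"urgent(ly)?", 1),
    (r"click (here|this link)", 2),
    (r"password", 1),
    (r"https?://\d{1,3}(\.\d{1,3}){3}", 3),  # link pointing at a raw IP address
]

def phishing_score(message: str) -> int:
    """Sum the weights of every suspicious pattern found in the message."""
    text = message.lower()
    return sum(w for pat, w in SUSPICIOUS_PATTERNS if re.search(pat, text))

def looks_like_phishing(message: str, threshold: int = 3) -> bool:
    return phishing_score(message) >= threshold
```

"URGENT: click here to verify your account" scores 5 and is flagged, while a perfectly polite "Can you send me the Q3 report?" scores 0 even if it comes from an impostor, which is exactly the human-judgement gap described above.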
As has been discussed throughout our AI series, the relationship between human and machine is critical. AI can't solve all our problems, nor will it replace the need for humans. It is a tool that can help us if used correctly. The problem is that AI can be used to mount the same attacks with ease, and our online presence and personas are all that attackers need.
TOM: This is Philippe Humeau, co-founder and CEO of CrowdSec.
Say they will scrape all the profiles of all the people at the Financial Times and try to generate specific emails, specific voice bits, specific videos, specific pictures that will, you know, make it credible that it's your boss talking to you, right, and that they want something specific from you. Before, you used to do it by hand and it could be a lengthy process, meaning you were capped by the number of fake emails or fake pictures, fake whatever, you could produce.
Now, an AI can do this at scale in milliseconds. So if I want to reproduce your voice, since there are plenty of samples of your voice online, I can make a very credible Tom Parker tomorrow speak to someone else in the company. And nobody will figure out that it's actually not you. And I can even make a movie of you. Now remember also that when you're shooting a picture in high res in any place, if I zoom in on your palm, I can see your fingerprints. I can create a duplicate of your fingerprints just based on a picture. So remember that any digital content can be used and amplified using AI to actually create fake personas, fake content, that would trick your colleagues into thinking it's you. Right. So that's the first part of it. The second thing is I know for a fact, and this is what worries me the most actually on the cybersecurity stage, that there are offensive AIs trained on known vulnerability databases and on white papers that are research-grade materials, and all of this is now combined in offensive AIs. And those things can unleash hell in seconds against a target, right? So I know it's still in the training phase in most places, in most countries. It's usually limited to states and governments so far, but I'm worried about the day when private cybercriminal groups that are very well-funded will be able to do the same thing and have these tools at their disposal whenever they want. So I think this is the most scary part of it.
Luckily, CrowdSec is using AI to help security professionals see where hackers are attacking from and mount a defence.
Philippe Humeau
So defensively speaking, there is really a great upside to AI because as well as it can automate attacks, it can automate defense, specifically for detection.
Philippe Humeau
08:21 And so we use it to actually identify IP addresses that are working in a coordinated fashion altogether. So if IP A is scanning you, IP B is trying to breach into your web server and IP C is planting malware somewhere,
This could look like three different events coming from three different IP addresses at three different times, right? So nothing to really correlate. But at the scale of a collaborative network spanning the globe, we would see that IP A, B, and C are working together.
very regularly. And we could tell you A, B, and C are under the custody of the same cybercriminal group. So if A is knocking at your door, you can preemptively block B and C, because they will come next for sure in three, two, one, now. And this is the beauty of AI. It's capable of identifying patterns where we don't see any.
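The coordination Philippe describes can be sketched in a few lines of Python. This is an illustrative toy, not CrowdSec's actual model, and the IPs and events are invented: pool attack reports from many sites, then flag IP pairs that keep hitting the same targets together.

```python
from collections import defaultdict
from itertools import combinations

# Toy event feed: (target, attacking_ip) reports pooled from many sites.
# In a real collaborative network these would be millions of signals.
events = [
    ("site1.example", "203.0.113.5"),   # A scans site1
    ("site1.example", "203.0.113.9"),   # B probes site1's web server
    ("site1.example", "203.0.113.17"),  # C plants malware on site1
    ("site2.example", "203.0.113.5"),
    ("site2.example", "203.0.113.9"),
    ("site2.example", "203.0.113.17"),
    ("site3.example", "198.51.100.7"),  # unrelated IP, seen once
]

def coordinated_pairs(events, min_shared_targets=2):
    """Return IP pairs seen attacking the same target at least
    `min_shared_targets` times: a crude proxy for coordination."""
    targets_by_ip = defaultdict(set)
    for target, ip in events:
        targets_by_ip[ip].add(target)
    pairs = {}
    for a, b in combinations(sorted(targets_by_ip), 2):
        shared = targets_by_ip[a] & targets_by_ip[b]
        if len(shared) >= min_shared_targets:
            pairs[(a, b)] = len(shared)
    return pairs

# The three 203.0.113.* addresses keep co-occurring, so if one knocks at
# your door, the other two can be preemptively blocked.
```

A single site sees three unrelated events; only the pooled view reveals the group, which is the point of the collaborative network.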
And I love it, because there are other things we can deduce from that. We can group them, make cohorts of them, and tell you, the Financial Times for example: you can defend against those IP addresses because they are content stealers. They will steal your content, translate it and post it elsewhere. And we know it because many people face the same IP addresses at the same time showing the same behaviour. And the behavioural part of it is what makes AI different from classical algorithms. AI understands behaviours, where classical algorithms are really painful to code to understand such kinds of things.
Knowing that you are being attacked is the first key part of any defence, and sharing data when attacks happen is a surefire way to help protect one another against current and future attacks. One important area this touches on is collaboration, and this is where CrowdSec comes in, creating a community defence using AI.
Philippe Humeau (CrowdSec) (34:08.654)
Per day, we receive something like 20 million signals. We could not possibly analyze them all by ourselves, right? So we have AI agents doing it for us. So collaboration is key here, because if we want to defend against an army, we need a bigger army.
If you look at the biggest names in the industry, ranging from Microsoft to Samsung to anyone you can name, like Twitter, Facebook, they all got compromised. Banks, institutions, governments, like all of them, invest hundreds of millions of dollars into it because this go-alone stance cannot possibly defend against an army. The only way you can win this game is to have a collaboration worldwide.
of means and people. And this is what we are organizing at CrowdSec. We are an open source tool, free for everyone to use, and you share the attacks you're blocking with the tool; in return, you get the IP addresses that have been attacking people all around the world.
11:28 So collaboration has been put on the forefront by the US government, by the UK government, by the French government, and by the EU. Everyone is now all over it. We should share together. We should help each other. Sharing is caring in cybersecurity.
TOM: One area of concern is the very new threat of AI attacking other AI programs by poisoning the data that feeds into machine learning models such as LLMs. As sharing is caring, Google and others in the tech community are creating best-practice research to keep ahead of the game.
Vicente (11:18.994)
Obviously, this was something that very, very quickly everybody started playing around with, trying to figure out how they could bypass all these defences. And actually, there's been quite some work from the community in this direction. And we were also doing our homework in parallel. So basically, you can think about an LLM in different stages, like when you first create the LLM and you start to train it.
What if you poison the data? Then, in the long term, the answers will be incorrect. And this could be a problem. It could be something that the LLM learns to do under very particular circumstances, and you also need to avoid that. Or it could be that it's giving out more information than it's supposed to. And then, in a more generic way, every piece of software is susceptible to being hacked. So how to navigate all this complexity? We released a framework describing all this process, all the different stages, all the potential attacks and the best practices. And actually, I think that other entities, like MITRE, are doing something similar, trying to describe the whole kill chain in terms of LLMs. There is quite some research on this, so everybody's aware at every single step of what could go wrong and what can be done. But it's something very new and it's an ongoing process in industry, and every day we see different applications where LLMs are being used. In terms of how we conceptualize securing LLMs, I think it's abstract enough to provide good advice no matter what happens in the next months, because there is no universal solution, in the same way that there is no universal solution against supply chain attacks, or a universal solution for software vulnerabilities. There are frameworks, there are good practices, there are lessons learned that are shared, there is training for humans and there are many software solutions helping in this direction. It's simply yet another thing we will need to keep on top of, understanding and refreshing materials as needed.
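The training-time poisoning Vicente warns about can be shown at toy scale. This is a minimal sketch under invented data, deliberately naive and unrelated to any production system: a keyword "classifier" learns from labelled examples, and flipping enough labels in the training feed flips the learned answer.

```python
from collections import Counter

def train(examples):
    """Learn, per token, whether it appears more often in 'spam' or 'ham'."""
    votes = {}
    for text, label in examples:
        for tok in set(text.lower().split()):
            votes.setdefault(tok, Counter())[label] += 1
    return {tok: c.most_common(1)[0][0] for tok, c in votes.items()}

def classify(model, text):
    """Majority vote over the learned per-token labels."""
    labels = Counter(model.get(tok, "ham") for tok in text.lower().split())
    return labels.most_common(1)[0][0]

clean = [
    ("free money now", "spam"),
    ("free prize inside", "spam"),
    ("free gift card", "spam"),
    ("team meeting notes", "ham"),
    ("project status update", "ham"),
]

# Attacker poisons the training feed: spammy texts re-submitted as 'ham'.
poison = [
    ("free money now", "ham"),
    ("free prize inside", "ham"),
    ("free gift card", "ham"),
    ("free offer today", "ham"),
]

clean_model = train(clean)             # 'free' seen 3x as spam
poisoned_model = train(clean + poison)  # 'free' now 3x spam vs 4x ham
```

Before poisoning, the model flags the token "free" as spam; after four forged "ham" reports it waves the same token through, the long-term incorrectness Vicente describes.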
TOM: Last year, in Europe alone, there was a shortage of half a million cybersecurity professionals. On top of that, only 25% of cybersecurity positions are currently held by women. Knowing the importance humans play in the equation, and that AI, for now, can't solve all our human problems, means addressing the skills shortage and training cybersecurity professionals in AI is paramount.
Isabel Praça
I think that education and training need to start from the early education levels.
TOM: This is Isabel Praça, Coordinator Professor at the Polytechnic of Porto's School of Engineering and member of the AI Expert Group at the European Union Agency for Cybersecurity.
I think that we need students and very young students to learn about maths, to learn about language. We need them to learn about programming, we need them to learn about cyber security, and we need them to learn about artificial intelligence.
What we are aiming for is the AI-native generation. So by teaching young students in these fields, I think we are contributing to the cybersecurity professionals of the future. For those that are already in place, there is continuous learning, and of course an update of the skills needs to be done, and we need to attract people to the domain.
Isabel Praça
So for me, it's a message that I believe really needs to be heard: in the future, we cannot keep cyber as a separate pillar from AI. We need to bring these two domains together. We need to have people with skills in both domains. And we need to start educating people on these two topics from really early on.
I can give you an example. In my school, the Polytechnic of Porto, we have a master's in the cybersecurity field. We also have a master's in AI and a master's in data science. Students from the master's in cyber can then select some subjects from the master's in AI and data science, and the other way around. And I think that we need people with cyber skills to understand what is important for the data scientist: which data is valuable, what kind of data needs to be shared and how important it is that it is shared. And from the AI domain, the same: what is the type of data, how will it come, what is the domain here? We have very good cyber engineers. We have very good AI and data engineers and scientists. But we need to have more profiles that cover these two worlds.
TOM: With the fast-paced evolution of AI and its nascency in the cybersecurity industry, knowing where we will be over the next five years is a tough ask.
Vicente
17:21 I am absolutely bad at making these predictions. I'm very excited about all the new capabilities. And I think not only of what we are doing in terms of detection, protection, making things more efficient, providing more context. All of this is something that is happening now, and it will keep developing.
And at the same time, we are going to be more effective when working with big numbers, when working with big attacks, when having to crunch, I don't know, millions of different records to understand how things are happening. It's a very powerful tool. Now, in this collective imagination of AI versus AI, attacker versus defender, and how they are able to, you know, get to the target or protect it. I'm excited about this. And I'm excited about the auto-protection, the detection capabilities, the auto-response following best practices, and getting a better understanding of how attacks can be slowed down, prevented, stopped and investigated inside a complex environment like a computer network while they're happening. All this is very exciting, but we'll see how this evolves.
I hope they will be helping us to protect ourselves way better against anything known and unknown.
There are some, let's say, elements that are very well understood. They've been studied for years; we have many attacks, examples, we have defences. But there are many others that I feel maybe we don't. And this is my hope: that with AI we will gain a new set of capabilities that will help us uncover them.
Isabel Praça (14:05.686)
Well, AI will become a community. I hope that cybersecurity can really bring solutions in the field that are based on AI. Everybody's working on AI, and it's like the one that runs fastest will be the one that wins and will take the most benefit from AI. What I think is that we also need to work on this side of the security of AI,
and for that, to build the pillars of having trust in AI: from the technical point of view, also from the psychological point of view, to play with it, to understand it, and also from the legal point of view, to prove how it works. And I hope it also happens that we can use AI in the way we really need it. If I may bring a strange example to the table: let's not kill a mouse with an elephant. I hope AI will be adopted even more, but I hope that people don't take just the most complex algorithm because it's the new release from a company, but that people look into the whole machine learning portfolio, for example,
and choose exactly the kind of technique that can be helpful. Not just to use the right technique for the right problem, but also to address sustainability issues and to reduce the impact on the environment. We don't need extra-powerful algorithms if we can solve the problem with more traditional machine learning techniques.
So that's also an important point and an important concern I have about AI in general, and of course about the cyber solutions that can use AI in a very intensive way, with a lot of different and heterogeneous data.
Philippe Humeau (CrowdSec) (46:17.038)
My predictions for the next five years are unreliable at best, because the landscape is evolving so quickly that it's a dangerous game.
Philippe Humeau (CrowdSec) (50:39.31)
You know, it's an incredible space, Tom, to be in right now, because I've been in cybersecurity for 25 years, right? And for the first 15 years, we'd been doing pretty much the same thing, playing a cat and mouse game, you know, the policeman and the gangster chasing each other, trying to poke our fingers in the holes of the boat. But nowadays it's changing entirely.
we can actually take the upper hand on this defensive landscape. We can help, we can collaborate, we can scale the defense like never before.
AI was the fastest-ever groundbreaking technology to be put into so many human hands in such a short period of time.
So we can expect that humans with creativity will create tremendous changes in society as we know it. AI is now proposing a transformation of the whole economy, and cybersecurity will be at the forefront of it, because we need this to happen. So what I think will happen is that we will face a tremendous wave of automated attacks that will be increasingly efficient, well-crafted, smarter and faster. On the other side of the fence, we will have a ton of people creating very smart programs to defend ourselves: those virtual canaries, those detection engines that will see what the problem is in the logs. And the humans, again, will be collaborating to solve a problem at scale. So it will be, first of all, a problem, and then the solution to this problem.
23:34
Also, we'll be able to identify vulnerabilities faster than before: errors in the code. And code will be cleaner than before, because humans make mistakes when they code, whereas machines will tend to detect them faster than humans would. Instead of seeing them after the fact, you will be able to see preemptively: oh, are you sure about this line? It looks like it could be a problem if this or that happened.
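The "are you sure about this line?" idea can be glimpsed even with simple rules. A minimal sketch follows; real AI reviewers learn far subtler patterns, and the three checks below are just invented stand-ins:

```python
import re

# Hypothetical, minimal pre-merge checks: stand-ins for the kinds of
# risky patterns an automated reviewer might flag before the bug ships.
CHECKS = [
    (re.compile(r"==\s*None"), "use 'is None' instead of '== None'"),
    (re.compile(r"except\s*:"), "bare 'except:' swallows every error"),
    (re.compile(r"eval\("), "'eval' on untrusted input is dangerous"),
]

def review(source: str):
    """Return (line_number, warning) pairs for suspicious lines."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in CHECKS:
            if pattern.search(line):
                findings.append((lineno, warning))
    return findings

snippet = "try:\n    value = eval(user_input)\nexcept:\n    value = None\n"
```

Running `review(snippet)` flags lines 2 and 3 before the code is ever executed, which is the preemptive posture Philippe describes, just without the learning.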
So the risk is high, but the tools are crazily efficient, and collaboration has never been as big as it is now. And if we confront this AI landscape, this collaboration landscape, this digital criticality in our lives, then we get a very high-stakes game, you know, a game of poker on the world scale. And I want mankind to win this. I want mankind to win this round of poker, because it's so important for us, for everyone. So we need to protect this crazy creation that mankind made, this internet, this sharing of knowledge. And yes, there are bad people out there, but we outnumber them 20,000 to one. So let's just collaborate together.
TOM OUTRO:
We know that there are benefits. As Vicente described, AI can improve the speed and accuracy of detection and give humans the opportunity to spot and prevent threats without needing highly specialised knowledge or experience. But AI can't solve all our problems alone. We need to make cybersecurity more accessible to more people and train more cybersecurity professionals to fill the skills gap. And when that training occurs, according to Isabel, AI needs to be a core part of the curriculum, just as cybersecurity should be for AI education. The stakes couldn't be higher, and everyone needs to understand the rules of the game. Philippe's final point is clear: through collaboration we can stop more threats, build trust in the technology and make a safer, more secure ecosystem for all of us. But while AI can be a force for good in this cat and mouse battle for a safe cyber future, the technology can still be used by bad actors to break security protocols, scam individuals, create deepfakes and poison the data sets of AI systems, undermining our trust in AI as a tool for good. Luckily, as we've learnt today, there is still hope that defence will win against attack, and that AI will help more than hinder: a friend over foe, artificial or not.