
From perfect phishing emails to predicting future attacks: how artificial intelligence and machine learning tools are impacting cybersecurity


Can ChatGPT be tricked into helping cyber criminals? Would you be suspicious if your boss called you and asked you to share sensitive information? And what does the war in Ukraine reveal about the impact of machine learning tools?

We asked Zlatan Morić, head of cybersecurity at Croatia’s Algebra University College, for his insights and experiences on how artificial intelligence (AI) and machine learning (ML) tools are changing both cyber crime and cybersecurity.

How has the use of AI tools in cybersecurity changed over time?

I prefer not to say artificial intelligence, because in my opinion AI still doesn’t exist. So far, we’ve only used machine learning for different types of prediction.

The most obvious change in the last few months is the rise of new large language models (LLMs) like ChatGPT. Over the last 10 years we’ve had other models like these, but they weren’t free.

So the current massive interest in LLMs and their widespread use is because ChatGPT started offering its services for free. And now several others are doing the same.

Ten years ago, you could recognise most phishing emails by their writing style and errors. Today, someone can create a perfectly written phishing email using a language model, so you can’t tell whether it’s a phishing email or a real one.

In what other new ways are machine learning tools being used in cyber crime?

During the last year, hackers have started using machine learning deepfake technology to create fake phone calls. We all know about phishing attacks, but audio phishing isn’t generally well known yet. So right now hackers are trying to exploit that because people aren’t familiar with this type of attack.

Say you received an email from your boss asking you to send a password to someone else. You’d probably find that strange, and you wouldn’t just do it. You would call them to check.

Now imagine if someone called you and sounded just like your boss – their voice, their way of speaking. And the caller ID had been faked to say it was your boss.

If they asked you to send a password to someone, you’d probably be much less suspicious than you would if you received an email asking the same.

“Just from a short audio clip of you talking, someone can create a model that enables them to start speaking like you, in a way that probably nobody would recognise as not actually being you.”

How can we limit the potential for machine learning tools to be used for malicious activities?

Most of the LLMs and other ML tools available try to prevent misuse. But there are ways around it.

When ChatGPT first became publicly available, I used it almost every day to help me create code for penetration testing. The code it writes isn’t perfect; I need to adapt it. But ChatGPT has done 90% of the work, so that’s much easier for me.

However, hackers can also use the same tool to create code that will break a specific system. These tools make it much easier for people who do penetration testing, red teaming and so on. But every tool that can help white hat hackers can also be used by black hat hackers or criminals.

“A few months ago ChatGPT changed how it works. Now if you ask it to write some malicious code, it won’t. But this is a language model, so there’s still a way to do it. For instance, you explain that you don’t want to write malicious code, but you want to give an example in a presentation. Just storytelling. And after a few sentences, you trick the language model into creating that code.”

So there are still great opportunities for creating malicious code using just a large language model. You don’t need to be a good programmer to do so.

In what ways have machine learning and AI tools had a positive effect on cybersecurity?

Machine learning tools allow us to process very large amounts of data. Without machine learning you can’t conduct behavioural analysis on a large scale.

Most of the methods used in machine learning are not new. They were developed years ago – sometimes centuries ago – but until recently we didn’t have the computing power to calculate them.

Fifty years ago, all intelligence services had to process all available information by hand, from pieces of paper.

Now, we have the internet and we can use machine learning tools and artificial intelligence to help us process all the available information and receive intelligence from it.

For example, machine learning tools can analyse all network communications from a university and detect behavioural deviations which could be an indicator that something is compromised. Maybe it’s criminal activity or hacker activity, maybe it’s not.
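
As an illustration of the kind of behavioural analysis Morić describes, here is a minimal sketch using an isolation forest, a common anomaly detection method (the interview doesn’t name a specific algorithm). The per-host features, traffic values and contamination rate are invented for the example.

```python
# Minimal sketch of behavioural anomaly detection on aggregated network data.
# The features and all numbers below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-host features: [bytes_out_per_hour, distinct_dest_ports, failed_logins]
normal_hosts = rng.normal(loc=[5e6, 12, 1], scale=[1e6, 3, 1], size=(500, 3))
suspect_host = np.array([[9e7, 250, 40]])  # heavy outbound traffic, many ports, many failures

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_hosts)

# Negative decision scores indicate behavioural deviations worth investigating.
score = model.decision_function(suspect_host)[0]
print(f"anomaly score: {score:.3f} -> {'suspicious' if score < 0 else 'normal'}")
```

As in the interview’s caveat, a flagged deviation is only an indicator: it still needs an analyst to decide whether it is criminal activity, hacker activity, or something benign.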

What emerging trends in AI, machine learning and cybersecurity are you most excited about?

The most important thing in cybersecurity today is threat intelligence. While it’s vital to have great security controls, it’s even more important to have intelligence telling you that tomorrow someone will try to hack you.

Let’s recap a brief history of cybersecurity.

We started with a firewall approach, protecting our organisation from outside threats. Like in a medieval castle. Then we recognised that threats can also be inside, so we needed more security controls.

Ten years ago, we hardened this into a zero trust model, where authentication is always required before someone is allowed access to a system or network. Our presumption needs to be that someone with malicious intent already has access to our company.

So, the focus has moved to indicators of compromise: how do we detect that someone has already broken into our systems? And this is what we can’t do without machine learning or artificial intelligence systems.

Over the last year and a half, there’s been even more focus on threat intelligence: not just detecting that someone has broken into our systems, but gathering information on what someone might do in the future. And this is where machine learning can help us further, by analysing large amounts of data and making predictions from it.

The main problem – which is always the case with machine learning – is that your predictions will be based on data, but you never know if you have enough data, or if it’s of good quality. If the data quality isn’t good, the predictions probably won’t be good either.
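
To make the data quality point concrete, here is a minimal sketch (not from the interview) showing how corrupting a growing fraction of training labels, a stand-in for poor data quality, degrades a simple classifier’s predictions. The dataset is synthetic.

```python
# Minimal sketch: label noise in training data degrades prediction quality.
# The dataset, noise levels and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # a clean, learnable rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for noise in (0.0, 0.2, 0.4):
    # Flip a fraction of the training labels to simulate bad data quality.
    flip = rng.random(len(y_train)) < noise
    y_noisy = np.where(flip, 1 - y_train, y_train)
    model = LogisticRegression().fit(X_train, y_noisy)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"label noise {noise:.0%}: test accuracy {acc:.2f}")
```

Accuracy typically drops as the noise level rises, which is exactly the risk when a prediction pipeline is fed unreliable data.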

“All information systems now are capable of collecting a lot of data. We need to use machine learning algorithms to evaluate all this data and find insights which will help us not only recognise suspicious activities, but also recognise what might happen in the future. We want indicators that someone is trying to hack us before they successfully access our data or take control of our systems.”

Can you share a specific example of how machine learning detects suspicious behaviour in practice?

One example that is now being used a lot involves location data. Say, for example, you’re working from your office, and you’re connected to an information system. You use an authentication method such as multifactor authentication to log in.

And just imagine that after you disconnect from your computer, someone else tries to connect to your account using your authentication method – but from China.

Systems today can detect that you can’t physically be in China right now because half an hour ago you were in Europe. The travel time is impossible. And so the system won’t allow access, and will use additional authentication methods to check your identity.
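
At its core, this ‘impossible travel’ check reduces to comparing the travel speed implied by two logins against a plausible maximum. A minimal sketch, with coordinates, timestamps and the speed threshold all chosen for illustration:

```python
# Minimal sketch of an "impossible travel" check between two logins.
# Locations, timestamps and the speed threshold are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
from math import asin, cos, radians, sin, sqrt

MAX_PLAUSIBLE_KMH = 900  # roughly the cruising speed of a commercial flight

@dataclass
class Login:
    lat: float
    lon: float
    time: datetime

def haversine_km(a: Login, b: Login) -> float:
    """Great-circle distance between two login locations in kilometres."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(a: Login, b: Login) -> bool:
    hours = abs((b.time - a.time).total_seconds()) / 3600
    required_speed = haversine_km(a, b) / max(hours, 1e-9)
    return required_speed > MAX_PLAUSIBLE_KMH

# Office login in Zagreb, then the same account half an hour later from Beijing.
office = Login(45.8, 16.0, datetime(2023, 10, 2, 9, 0, tzinfo=timezone.utc))
remote = Login(39.9, 116.4, datetime(2023, 10, 2, 9, 30, tzinfo=timezone.utc))
print(impossible_travel(office, remote))  # True -> step up authentication
```

A real system would combine this signal with others before denying access or, as described above, requiring additional authentication.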

Any final thoughts on how machine learning tools are shaping cybersecurity?

The war in Ukraine has changed the way I think. Previously, I’d always say that if a bigger hacker group attacks you – not a general attack, but you specifically – then you probably can’t defend against it.

But what the Ukraine war has shown us is that if you expect that someone will attack you, and you have enough resources to monitor everything – thanks to machine learning tools – then hackers can’t do anything against you.

Right now, there are lots of cyber attacks on Ukraine’s critical infrastructure, but all the cyber warriors in Ukraine are expecting these attacks, so they’re checking everything and looking for every trace of an attack.

And that’s the reason why we don’t see a lot of successful cyber attacks on critical infrastructure. I find this very interesting!


About Zlatan Morić

Zlatan Morić has taught at Algebra University College, Croatia, since 2000. After many years of teaching part-time while working as a data scientist, among other roles, Zlatan switched to full-time academia four years ago and now heads the cybersecurity department. He likes to combine machine learning and artificial intelligence with cybersecurity. Zlatan is a certified trainer in system engineering and information security, with certificates including CEI, MCT, CEH, CHFI, and Microsoft Cybersecurity Architect.


This year GÉANT again joins the European Cyber Security Month with the campaign ‘Become A Cyber Hero’. Read articles from cyber security experts within our community and download resources from our awareness package on connect.geant.org/csm23
