In Focus Security

AI-driven threats in cybersecurity

Words: Louise Altvater, Cybersecurity Analyst CSIRT, FCCN, Portugal

Artificial Intelligence (AI) has been a hot topic across all areas of technology lately, recognised for its ability to automate repetitive tasks, analyse massive datasets at speed, assist in decision-making, improve customer service, and even generate creative content. In cybersecurity, however, the trend has a dual nature: while AI can be a powerful ally for defenders, it can also empower malicious actors to become more effective. The misuse of AI in cybercrime is increasing the pace, scale, and sophistication of attacks, and traditional security measures often struggle to detect or counter these threats in real time.

One of the most troubling uses is in social engineering, where psychological manipulation is used to deceive individuals into revealing sensitive information or performing harmful actions. Traditionally, social engineering relied heavily on human effort: fraudsters crafting phishing emails by hand, scammers making persuasive phone calls, or manually editing fake documents. Now, AI automates and refines these processes. Malicious actors can generate hyper-personalised phishing messages that mimic the writing style of a victim’s boss, friend, or colleague. Large language models (LLMs) can instantly adapt content using data scraped from social media, making messages appear authentic and relevant. There are even malicious LLMs designed for this purpose, but legitimate AI models, such as ChatGPT and DeepSeek, can also be manipulated through carefully crafted prompts to produce outputs that could be misused for phishing or other harmful purposes.
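To see why such tailored lures are hard to catch, consider a deliberately naive keyword-based phishing heuristic of the kind older mail filters relied on. This is an illustrative sketch, not a real filter, and the example messages are invented: a generic scam trips several cues, while an AI-personalised message written in a colleague's voice trips none.

```python
# Illustrative sketch only: a crude keyword heuristic that
# AI-personalised phishing messages easily evade.
URGENCY_TERMS = {"urgent", "immediately", "verify your account", "password"}

def naive_phishing_score(message: str) -> int:
    """Count crude urgency/credential cues present in a message."""
    text = message.lower()
    return sum(term in text for term in URGENCY_TERMS)

generic = "URGENT: verify your account password immediately!"
tailored = ("Hi Ana, as discussed in Monday's budget review, could you "
            "approve the attached invoice before our 3pm call?")

print(naive_phishing_score(generic))   # several cues fire
print(naive_phishing_score(tailored))  # zero cues: the lure slips through
```

The tailored message contains no classic red-flag vocabulary at all, which is precisely what makes LLM-generated lures effective against rule-based defences.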

Deepfake technology pushes this deception further. AI can now produce synthetic videos and audio recordings that replicate a person’s face, voice, and mannerisms with near-perfect accuracy. In one notable case, a finance worker at a multinational company in Hong Kong was tricked into transferring over US$25 million after attending a video conference where every “colleague” was an AI-generated deepfake of a real individual. The scam worked not because of a technical vulnerability, but because the attackers could simulate human presence so convincingly that it bypassed normal suspicion.

Beyond deceptive voices and videos, AI can also create entirely original images that cannot be traced through traditional verification methods such as reverse image searches. Tools like TinEye or Google Lens are effective at detecting whether a visual has been altered or copied from elsewhere on the internet, but AI-generated content often has no prior source. These synthetic visuals are created from scratch, making it impossible to determine their origin or confirm their authenticity through conventional means. This capability is increasingly exploited for fraud, enabling criminals to fabricate convincing evidence, stage fictitious events, or create fake identities that appear genuine even when they have no real-world existence.

Social engineering, however, is only one part of AI’s malicious potential. Attackers are pairing it with automated reconnaissance, scanning vast networks to identify vulnerable systems in minutes, which drastically reduces the time defenders have to apply security patches. Once vulnerabilities are found, AI can assist in crafting polymorphic malware: malicious software that continually changes its code to avoid detection by traditional security tools, such as antivirus or anti-malware solutions.
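The evasion mechanism is simple to demonstrate. Traditional signature-based tools often match a file's cryptographic hash against a database of known-bad samples; a minimal sketch (with hypothetical payload bytes) shows that even a one-byte mutation produces a completely different fingerprint, so the known signature no longer matches.

```python
# Sketch of why hash-based signatures fail against polymorphism:
# two functionally equivalent (hypothetical) payloads that differ
# by a single byte yield entirely different SHA-256 fingerprints.
import hashlib

variant_a = b"payload-core" + b"\x00"  # the sample in the signature database
variant_b = b"payload-core" + b"\x01"  # same behaviour, one mutated byte

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

print(sig_a == sig_b)  # False: the known-bad signature no longer matches
```

This is why defenders increasingly complement signatures with behavioural analysis, which observes what the code does rather than what its bytes look like.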

Another concerning trend is how AI has lowered the entry barrier for cybercrime. Instead of requiring deep technical skills, attackers can now take advantage of black-market AI tools such as FraudGPT and WolfGPT, sold on dark web forums, which serve as all-in-one malicious assistants. These tools can generate convincing phishing emails, write malicious code, and even provide step-by-step guidance on how to exploit vulnerabilities.

Countering AI-driven cyber threats

Defending against AI-powered attacks requires more than just reactive measures: it demands a proactive and strategic approach. Organisations should start by setting clear objectives for how AI will be used in their security operations, ensuring it supports specific and measurable defensive goals. AI should be integrated with existing security tools, enhancing rather than replacing established protective measures – though some cybersecurity professionals argue it should also be treated as a potential intruder and monitored accordingly, given its capacity to be manipulated or exploited.

Transparency is critical: prioritise interpretable AI systems so that analysts can understand how decisions are made, making the AI’s behaviour more predictable and its performance easier to review. Above all, keep humans in control: AI should assist cybersecurity teams, not make unsupervised decisions. Finally, regularly update and monitor AI systems so they adapt to emerging threats, just as attackers are constantly evolving their methods.
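The human-in-control principle can be made concrete as a routing rule in a security workflow. The sketch below is illustrative and all names are hypothetical: an AI verdict never triggers a blocking action on its own; even a high-confidence "malicious" verdict only proposes a block, which an analyst must approve.

```python
# Illustrative human-in-the-loop gate (all names hypothetical):
# the AI's verdict routes an incident to a queue, but never acts alone.
from dataclasses import dataclass

@dataclass
class Verdict:
    incident_id: str
    label: str         # e.g. "malicious" or "benign"
    confidence: float  # model's self-reported score, 0.0-1.0

def route(verdict: Verdict, threshold: float = 0.9) -> str:
    """Return the queue for an incident; blocking is only ever proposed."""
    if verdict.label == "malicious" and verdict.confidence >= threshold:
        return "propose-block: awaiting analyst approval"
    return "analyst-review"

print(route(Verdict("INC-042", "malicious", 0.97)))  # proposed, not executed
print(route(Verdict("INC-043", "malicious", 0.55)))  # low confidence: review
```

Keeping the final decision with an analyst also preserves an audit trail, which supports the interpretability and performance-review goals described above.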

About the author

Louise Altvater is a Security Analyst at CSIRT FCT, the team responsible for incident response within FCCN, the digital services unit of the Foundation for Science and Technology (FCT), which aims to contribute to the development of Science, Technology and Knowledge in Portugal. For over three years, she has been part of the incident response team, contributing to the detection, analysis, and coordination of responses to cybersecurity incidents. Among other activities, her work includes conducting web application security audits, identifying vulnerabilities, and supporting partners in strengthening their overall security posture. With an academic background in Psychology, she brings a human-centric perspective to cybersecurity, focusing on user behaviour and the human factors behind digital threats. One of her interests within the field is threat intelligence and its role in anticipating and understanding emerging cyber risks.


GÉANT Cybersecurity Campaign 2025

Join GÉANT and our community of European NRENs for this year’s edition of the cybersecurity campaign: “Be mindful. Stay safe.” Download campaign resources, watch the videos, sign up for webinars and much more on our campaign website: security.geant.org/cybersecurity-campaign-2025/
