A student gets an email request to update payment details. It looks legitimate: the right logo, professional tone, a link to an official-looking portal. They click.
Elsewhere on campus, a researcher pastes a few lines of confidential project data into ChatGPT, hoping to speed up a funding proposal.
Meanwhile, an administrator picks up a Teams call from someone who sounds exactly like their director, asking for an urgent transfer.
No alarms go off. Nothing seems out of place. But each of these ordinary moments, each a rushed, well-intentioned action, could be the start of an AI-powered scam.
Artificial intelligence is changing the way we work, study and collaborate. It’s also changing the way we’re deceived. And the weakness it exploits most effectively isn’t technical at all — it’s trust.
“AI has become a weapon for criminals because it’s so good at manipulating the human element,” says Dr Maria Bada, a cyberpsychology researcher and senior lecturer at Queen Mary University of London. “Now even experts might not be able to identify a deepfake video or phishing email.”
Trust in the machine
We humans are wired to trust the tools that make our lives easier. The more helpful, polished and responsive technology becomes, the less we pause to question it — especially when it saves time in a busy day or simplifies a repetitive task.
AI is now woven into everyday life: drafting emails, summarising reports, scheduling meetings. Soon it will be monitoring our health, driving our cars, and managing our homes.
“Within the next 1–2 years, AI will become part of our everyday lives and habits; meaning many more entry points for potential risk. We’ll be so used to using AI that we’ll trust it completely — and blind trust of technology is a problem.”
Originally a therapist, Maria now studies how technology and human behaviour intertwine — and how that relationship can be turned against us.
Convenience, she says, is seductive. When large language models (LLMs) and chatbots can instantly draft emails or summarise research, it quickly becomes habit to turn to them for help. “People are trusting ChatGPT, sharing work-related data for quick analysis, without being aware that ChatGPT then owns the data,” she says.
What begins as a practical shortcut can slip into something more personal. Maria cites therapy-style chatbots as a striking example of how easily the line between machine and confidant can blur. “Some people are sharing their intimate thoughts and feelings with chatbots. But they need to ask themselves: what happens with that data? Who could exploit it?”
“Sharing sensitive data unthinkingly is where I see the main potential harm for individuals emerging in future. Especially as chatbots are designed to seem empathetic.”
Maria sees the psychological pull of AI first-hand in her teaching. “A number of my students told me they feel they can be themselves when chatting with LLMs. They share thoughts they wouldn’t tell anyone in real life.”
That sense of intimacy is exactly what makes AI-powered deception so potent. When a system seems to ‘understand’ us, our guard drops.
“Trust is the magic word with AI. We feel like there’s no one there, it’s just a machine, so we can say and share whatever we want and nobody will take advantage of it. But it’s important to remember that behind AI, there’s always a human — at least for now.”
How AI amplifies cyber threats
The same qualities that make AI so useful — speed, fluency, and familiarity — also make it a powerful tool for manipulation. From a cyberpsychology perspective, Maria says, AI is reshaping behaviour on both sides of the screen.
Cybercriminals have always relied on human emotions and triggers — curiosity, fear, authority, urgency — to make us click or comply. What’s new is how precisely and quickly those emotions can now be detected and exploited.
“Attackers design messages to trigger fear, urgency or empathy. Now, using AI, that manipulation can happen in real time.”
“People who follow authority are more prone to becoming victims; for example, when a deepfake uses the CEO’s voice. Loneliness and lack of social skills are being exploited. People feel comfortable online, so they trust technology more.”
AI amplifies these long-standing psychological tactics. Instead of mass-produced scams riddled with typos, attackers can now generate persuasive messages tailored to each target — adjusting tone, language, and even emotional cues in real time.
The result is hyper-personalised persuasion at scale: scams that sound like your boss, look like your university, or echo your own writing style.
And because AI tools are easy to use, scamming people has never been more accessible. “AI has definitely lowered the barrier for entry to cyber crime,” Maria notes. “It makes it easier to adapt code and to recruit hackers.”
From awareness to reflection: building critical thinkers
Traditional awareness campaigns teach us to recognise threats: to spot the dodgy link or telltale typo. But as AI erases those clues, Maria believes the focus must shift from recognition to reflection: understanding why we react the way we do before we click.
“Take a breath, take a step back. Critical thinking is now the key when using technology.”
That pause is powerful. When something feels urgent, emotional, or unusually easy, Maria suggests asking yourself: “Is this real? Would my CEO tell me to do this? Would my mum or child ask me to do that?”
This kind of mindfulness isn’t about slowing down, but about learning to recognise the emotional triggers that make us act automatically.
“We need to build the human firewall around AI — strong, aware individuals who recognise how emotions and habits can be manipulated. Preparedness means knowing what triggers me.”
For universities, Maria recommends weaving AI awareness into everyday learning and training, not bolting it on as an afterthought. “Integrate AI-threat awareness into mandatory staff training, and build AI literacy into student induction. We need more knowledge, not just of risks but of how we and others behave online. And greater awareness of what data we share, where, and how.”
Each small act of awareness — when you pause before clicking, or make a quick call to double-check before replying — helps build resilience. Over time, those moments become habit, strengthening not just systems but people.
Too smart to be tricked? Think again
Even so, awareness alone isn’t protection. Once people feel informed, they can easily slip into assuming they’re safe. And that confidence can be just as risky as ignorance, as it can make people resistant to new training or slow to adapt to new threats.
“IT employees are often the ones that do the worst in cybersecurity awareness programmes. Overconfidence is a liability: it lowers vigilance. And meanwhile AI is bypassing all the cues you’re checking in emails.”
When even experts can be deceived, confidence needs to be replaced with curiosity. Staying alert doesn’t mean distrusting every message; it means accepting that no one, however experienced, is immune to being fooled.
The university ecosystem: a shared responsibility
Higher education and research institutes are complex organisms: part workplace, part community, part digital enterprise. That makes them fertile ground for innovation, but also for exploitation: “Universities have always been the low-hanging fruit for attackers, because they don’t have the same security resources as large organisations,” Maria notes.
Within those ecosystems, different groups face very different risks. So a one-size-fits-all approach to security awareness simply doesn’t work.
“Students receive admin emails about fees, so can be easily manipulated to transfer money to a different account. Researchers handle sensitive data, and admin staff manage financial transactions. So what’s needed is a personalised approach for each of these groups.”
And coordinating that approach isn’t easy. Even within a single institution, multiple sub-cultures and systems exist. “Think of Oxford or Cambridge: they are comprised of many colleges, each with their own policies,” Maria says. “Imagine how difficult it is to coordinate around cybersecurity and awareness raising.”
That complexity makes collaboration across the research and education community essential. By sharing tools, examples, and lessons learned, universities and NRENs can help each other strengthen the human layer of security that connects all their systems together.
Think before you act: the antidote to blind trust
Maria isn’t pessimistic about the rise of AI. The problem, she says, isn’t the technology itself but how we respond to it.
“Technology keeps changing fast. But if we combine artificial intelligence with psychological resilience and critical thinking, we can defend against the emotional elements attackers leverage.”
True cybersecurity awareness, she argues, means action as well as understanding. Mindfulness — noticing our impulses before we act — is the practical antidote to blind trust in AI.
“Hackers have the upper hand only because we allow them to. If we think before we act, a large number of attacks can be prevented.”
Trust, of course, remains essential. In research and education — and in society more broadly — collaboration depends on it. The challenge is to make that trust conscious and deliberate: to stay aware even when technology feels effortless. By thinking before we click, share or comply, we keep that trust both human and secure.
Want to hear more? Join Maria on 23 October for her webinar on how AI is reshaping the threat landscape and our own behaviour, and how we can strengthen our human defences.
Maria Bada is a Senior Lecturer at Queen Mary University of London. Maria is a behavioural scientist, and her work focuses on the human aspects of cybersecurity and human-computer interaction. Her research looks at the effectiveness of cybersecurity awareness campaigns and the development of effective prevention activities to enhance the resilience of SMEs against cybercrime.
She is investigating the different types of online harms affecting various vulnerable groups and their interdependencies, as well as the social and psychological impact of cyber-attacks, including widespread anxiety and the disruption they cause to people's daily lives. She is also exploring the cybercrime ecosystem, studying the profiles, pathways and risk perceptions of cybercriminals and how they form their groups.
She has collaborated with government, law enforcement and private sector organisations to assess national level cybersecurity capacity and develop interventions to enhance resilience. She is a member of the National Risk Assessment (NRA) Behavioural Science Expert Group in the UK, working on the social and psychological impact of cyber-attacks on members of the public.
GÉANT Cybersecurity Campaign 2025
Join GÉANT and our community of European NRENs for this year’s edition of the cybersecurity campaign: “Be mindful. Stay safe.” Download campaign resources, watch the videos, sign up for webinars and much more on our campaign website: security.geant.org/cybersecurity-campaign-2025/
Davina Luyten is communications officer at Belnet. She has a background in translation, journalism and multilingual corporate communication. At Belnet, she focuses on external communication, public relations, crisis communication and security awareness. She has participated in the GÉANT project since 2020, where her involvement includes the annual cybersecurity awareness campaign.