At this year’s Security Days, held on 8–10 April, participants were treated to a powerful keynote by Dr Karen Renaud from the University of Strathclyde. Her talk, titled “To trust or to distrust… THAT is the question,” challenged conventional wisdom on trust in cybersecurity, prompting the audience to re-evaluate where and how trust is placed in both humans and technology.
I had a chat with Dr Renaud, during which she elaborated on the themes of her presentation and shared insights from her research into human-centric cybersecurity and the psychology of trust.
Rethinking our trust in technology
“Today, we are often told to trust technology over people,” says Renaud. “But I believe that mindset needs to be challenged. Technology is built by humans, and like us, it is fallible.” A striking example she referenced was the UK Post Office Horizon IT scandal, where software errors led to the wrongful prosecution of hundreds of sub-postmasters. “In this case, the justice system trusted computer outputs over human testimony. That unquestioning faith in technology proved devastating,” she explains. For Renaud, this blind reliance reveals a deeper issue: “We’ve become so used to technology working – like a chair or table – that we forget it can be manipulated, hacked, or simply go wrong.” As a former software engineer, she knows technology’s limitations intimately. “The law in the UK assumes that computer-based evidence is accurate. But that assumption must be challenged. System outputs should be verified, not deemed infallible.”
Trust, but not without caution
In cybersecurity, the mantra “trust, but verify” is often cited. Yet Renaud argues that while this principle can be useful, it’s not universally applicable. “Trust is appropriate when working with trained professionals who follow protocols. But even then, it must be accompanied by oversight.” Incidents like WannaCry, caused not by a sophisticated hack but by a failure to apply updates, serve as reminders that human error is often at the root of security breaches. “People make mistakes. That’s inevitable. The key is designing systems that help mitigate the risks when those mistakes happen.” One such mistake, she recalls, involved a German civil servant in Hong Kong who forgot to activate his VPN, resulting in a data leak. “The response shouldn’t be blame – it should be understanding. Was he exhausted from travel? Distracted? If we react with empathy, we can learn more and prevent future incidents.” Employees, in turn, respond to that empathy and understanding. The same goes for reporting cybercrime. “People are often too embarrassed to come forward. This leaves law enforcement with an incomplete picture. We need to make it safe to admit mistakes.”
Knowing when to trust and how to stay vigilant
When it comes to deciding whether to trust a system or a person, context matters. Renaud urges caution especially when financial or high-stakes decisions are involved. “If someone asks you to act urgently – to click a link or transfer money – pause and verify. Use a different channel, ideally on a separate device.” She cites a Hong Kong case where a $25 million fraud was carried out using deepfakes posing as executives of an organisation. “The only real person in that transaction was a junior clerk. These are sophisticated operations, and verification is critical.” She also warns of the dangers of voice cloning, now possible with just a few seconds of audio – four, to be precise. “Families should consider using a code word – something only they would know – to avoid falling victim.” At the core of it all, she says, is the human factor. “Cybersecurity must include people, not just machines. One CISO encouraged staff to report suspicious emails and celebrated their vigilance with a ‘Catch of the Month’ award. That sort of positive reinforcement changes culture.”
Psychology, culture, and trust
So, what makes people trust – or mistrust? According to Renaud, trust is shaped by four key factors: personal experiences, perceived competence, intentions, and observed behaviour. In organisations, these dynamics play out daily. “If someone falls for a phishing scam and is publicly shamed, that doesn’t just affect them – it affects everyone around them,” she explains. “Fear destroys trust. And without trust, organisations cannot thrive.” A supportive response to mistakes is essential. “When employees feel safe admitting errors, organisations become more resilient. It’s not just about prevention, but also about how we respond.” Renaud believes the messaging around cybersecurity also needs to change. “Fear-based tactics don’t work. They trigger anxiety and disengagement. We should focus on empowerment, helping people feel capable of tackling the threats they face.” She supports the gamification of security training as a means of creating engagement. “People like to succeed. Turning security into something fun and rewarding makes it more effective.”
Cybersecurity for everyone
Looking to the future, Dr Renaud’s work is increasingly focused on equity in cybersecurity. Collaborating with the University of Bristol, she is applying the Capability Approach, inspired by Nobel Laureate Amartya Sen, to assess whether security systems are truly inclusive. “Many digital processes assume a certain level of vision, language ability, or cognitive agility. A CAPTCHA might be easy for most users, but what about someone with dyslexia or visual impairments? A 30-second limit for entering the 2FA code on PayPal could be too short for an older person.” She’s particularly concerned about ‘responsibilisation’ – the growing trend of shifting cybersecurity responsibility onto individuals. “We’re asking people to secure themselves without giving them the tools or skills. That’s neither fair nor effective.”
Supporting older generations, who may be especially vulnerable, is a priority. “Think of the volume of cyberattacks in places like Florida, with its large retired population. It’s not about throwing money at the problem, but about designing support systems that work.” Renaud is also continuing her research into insider threats – security risks that originate from within organisations. “Unlike external attacks, insider threats come from trusted users. They’re harder to detect and can be more damaging. Understanding human behaviour is key.”
Final thoughts
At the end of our conversation, Renaud offers a succinct reflection:
“Cybersecurity isn’t just a technical problem, it’s a human one. Trust is vital, but it must be earned, tested, and supported by systems that acknowledge human fallibility. Only then can we create digital environments that are both secure and inclusive.”
About Karen Renaud

Dr Karen Renaud is a Reader at the University of Strathclyde and an award-winning scholar. Her research focuses on human-centred security, the psychology of trust, and inclusive design in digital systems, and it is driven by her vision of Human-as-Solution, rather than Human-as-Problem.