We had the pleasure of talking with Dr David Modic of the Faculty of Computer and Information Science at the University of Ljubljana, who will give the closing keynote “Rethinking human vulnerabilities in cybersecurity” at GÉANT Security Days 2025 (8-10 April, Prague). In this interview, we explore Dr Modic’s extensive research on the relationship between psychology and cybercrime.
Thank you for joining us, David. To start, could you share your thoughts on how psychological factors impact individuals’ adherence to security protocols? Additionally, what strategies do you think organisations can use to improve compliance?
There is a general consensus that we do not usually follow security protocols, and the reasons for that include: (a) the oversaturation of our space with security advice, where each stakeholder believes that their advice alone is incredibly important and will keep the individual secure – if only everyone else would get with the programme and stop bombarding people with security advice and protocols, people could finally pay attention to the really good advice. The issue, of course, is that everyone thinks their own protocols and advice are the important ones. (b) Most individuals (especially in companies) do not know what security policies are in place, so it does not matter how brilliant the advice is, or how psychologically well-adjusted and compliant individuals are, if they have no idea what is expected of them. (c) Some researchers will tell you that it is irrational to follow advice or security protocols anyway, as doing so wastes your time and energy while the end result remains the same – practically all the time, nothing problematic happens whether you follow the advice or not. And often, even if you follow the advice to the letter, you still get breached.
In this hides one of the answers to the question of what to do: (a) help people understand which security advice and protocols are sensible to follow and offer the most bang for the buck (from the perspective of investment and safety), and (b) prepare them for what to do (and how to manage their emotions) if they do eventually get victimised.
As cybercrime evolves, what skills, in your view, need to be developed in cyber warfare, and what are the most pressing challenges in defending against cyber attacks today?
I believe that we should not treat people as dimwits, but should trust them to be able to grasp why some actions are good for them. If they do not understand what we are teaching, then this might be our fault for not being clear enough, not theirs for not having a PhD in computer science. Thus, the challenge is to address the actual concerns of potential victims and help them to protect themselves and their workplaces in a manner that they can comprehend and follow.
I do not agree with the initial premise that cybercrime is evolving. The whole mechanics of cybercrime are built on principles that are hundreds of years old (remember, for example, the Spanish prisoner letters, the precursors of advance fee fraud, which were documented in the 16th century and, we believe, have been around for longer). There are precious few examples where IT itself has been the sole distinction that enabled a genuinely new type of crime. Some researchers would go as far as to say that no such example exists. Therefore, understanding mechanisms as old as dirt, contextualised in existing technology, is still the best way to go. Think about it for a second and try to come up with a type of cybercrime that could not exist without technology. Stealing bitcoin? Are you saying that people were not stealing money, land or valuables before digital currency? Pretending to be someone else to gain some sort of advantage? Grigory Potemkin called and would like to talk to you about a village … And so on. In this context, the internet, or technology more broadly, is used as a conduit, but the crime itself existed before it. One possible edge case is micro-crime – a concept postulated by David Wall, where a large number of very small crimes leads to a payoff comparable to one big crime: robbing a bank vs. selling something to 10 million people for 1 EUR over email and never delivering it.
Moving on to a very current topic. What ethical considerations should be prioritised when developing and deploying AI and machine learning technologies in security applications?
In a more abstract sense, I think it is almost impossible to force modern machine-learning chatbots to follow the ethics guidelines we already envision for them. We will probably need to adapt to them, and not vice versa.
In security, I know for a fact that AI is used to create certain attack vectors. On the other hand, I know that AI is used in some aspects of intrusion detection. A tongue-in-cheek claim would be that we humans are becoming totally superfluous in this process. Why not have a virtual environment in which one AI creates attacks against a certain target, with actual information on their security arrangements, and another AI tries to detect and counteract them? Then we could just let the companies know who “won” and who “lost”, and either have them not be attacked for some time (if they “won”) or have them pay a certain amount of money to a foundation for black hats down on their luck. Yes, we have reached that level of absurdity.
Again, I do not believe legislation will be incredibly helpful in this area; we will rather have to rethink how we interact and what real added value we humans bring to the table, in security and elsewhere. Hint: it will not be how quickly we can process large amounts of data to get to a mathematically correct solution. This is the bit that actually shakes the status quo and our estimation of our own worth. Suddenly, creative jobs, culture, and the human touch are becoming recognised as valuable contributions. I say “recognised”, because this was always the case, even though it is only slowly dawning on us. Human worth should not be calculated on how quickly someone can review 300 pages of text and write a bland review of it, because I guarantee you that ChatGPT can do it quicker. So, let’s not compete, but work alongside it – use AI for boring, repetitive tasks and add the human creative flair and our ability to think laterally.
Shifting the focus to teaching. What are the key skills and knowledge areas that future cybersecurity professionals should develop to address emerging security challenges?
When I talk to my colleagues in the security industry, they tell me that they desperately need malware analysts, and that they will hire as many as my university can produce. When I talk to local SOC managers, they tell me they need personnel to manage the consoles and be on call – again, as many as I can educate. All of them mean it, and all of them will pay these emerging professionals good money to retain them. However, we should be switching focus to training INFOSEC professionals who understand the basics of psychology and criminology. These individuals are desperately needed too, and while you can hire a malware analyst or a SOC analyst at a pinch, if only you are willing to pay enough, I dare you to find and hire an INFOSEC professional who can (a) understand how to nudge™ employees into following security advice, and (b) tell you why a particular phishing mail pushed just the right buttons to be wildly effective, and how to increase resistance to that specific psychological mechanism of compliance.
Your collaborations span multiple continents. How do cultural and regional differences impact the approach to cybersecurity, and what can be done to strengthen trust and foster more effective international cooperation in this field?
To give you an example, I wanted to transfer a particular psychometric tool (measuring compliance with requests) from the UK to Japan, with the help of a Japanese colleague. It was a wild ride, and I quickly realised how ill-equipped most of us are to even understand that some cultures differ in their very foundations. I mean, we all understand, on a rational level, that we are diverse, but I had no idea how much the very building blocks of society can differ. In the example above, we had to have a longish discussion with my colleague (who has a PhD in forensic science) about what “subtext” is. According to him, Japanese culture does not recognise this concept. You either say what you mean, or you say nothing. It is impolite to say things you do not mean and rely on the other party to figure out what you really wanted to convey. At the time, he was visiting me in Cambridge in the UK, where the English hardly ever say what they mean directly, but rely on set phrases and subtle established cues to let you know what they expect from you.
So yes, cultural differences impact all our relationships and actions. Why would this be any different in INFOSEC? I mean, even as a security professional, you are seen differently depending on the culture.
What to do about it? Years ago, when I was a psychotherapist in training, one of my co-trainees asked our mentor for advice on how to start a new job. Zoran, our teacher, said that the best possible thing to do at a new company is to spend at least the first two weeks being mostly quiet and approachable, but not too friendly. He said that all workplaces have cliques and camps that carry old resentments and perceived slights from other camps forward by adopting new hires into their cult. If you are not careful, you leave to chance which camp you will join, just because someone happened to be friendly to you first. The very best thing you can possibly do, he said, is get the lay of the land, see which camp aligns most closely with your ideals and attitudes, and join it by choice, as you will most definitely not remain neutral forever but will get sucked into office intrigues eventually. And you can only do this if you remain approachable to all, for a while. By being quiet. Improving relationships and collaboration would, for me, rely broadly on following Zoran’s advice: learn the lay of the land, then join the tribe that broadly aligns with your views. And then collaboration is easy.
Can you tell us a little bit about your keynote at Security Days 2025?
My talk will focus on the role of human attack vectors in cybersecurity, starting from the observation that, while these have been recognised in theory for decades, they remain poorly understood by many in the managerial and technical spheres. It also aims to highlight a cultural divide within the cybersecurity field, where technical expertise is valued above an understanding of the psychological aspects of security breaches, and to show how this gap in knowledge can lead to security failures.
David Modic’s keynote will take place on Thursday 10 April at 11:40 CET.
About Dr Modic
Dr David Modic is an assistant professor, Director of Studies, and PI on various defence projects. David teaches INFOSEC, and his main research interest is human attack vectors. He is an EU-registered expert and an EDF reviewer, specialising in information security, cyber warfare, the psychology of security, and the ethics of intelligent systems.
Dr Modic holds national and EU security clearance up to SECRET and is affiliated with Cambridge University, where he is a Senior Non-Residential Member of King’s College and a former research associate at the Computer Laboratory. At Cambridge, he also served as the CamCERT Social Engineering Special Advisor.
He consults on cybercrime and security for governments and organisations in Brazil, Estonia, Lithuania, Slovenia, and the UK, as well as for various businesses.