In 2020, the threat posed by phishing is beyond doubt. We could spend a great deal of time trying to size the risk for our organisations and for society; I am not sure I could even give an accurate estimate, as so much remains unknown. For many of us, however, being targeted by phishing attacks is now part of a typical day or week.
By Emmanuel Nicaise, cyber security consultant with Approach and researcher at the Université libre de Bruxelles (ULB)
The GÉANT network connects 50 million people across Europe: 50 million potential targets for a phishing email. We can use many technologies to prevent phishing, but most have already shown their limits. Even if they could block 95% of phishing emails, we would still need our users to detect and report the remaining 5%. Furthermore, even a system that stopped 99.99% of phishing emails would not be able to block the more targeted, and more dangerous, spear phishing without blocking many legitimate emails. Technology cannot solve it all. As CISOs¹ often like to say, humans must be the last firewall.
Phishing is a form of social engineering using email as a vector. Criminals mostly use phishing to achieve one of two goals: stealing information (credit card numbers, computer credentials) or installing malware (ransomware² or a trojan³) on the user’s computer. We also see many CEO fraud attempts using spear phishing, often combined with other types of social engineering such as vishing (using voice, over the phone) or smishing (using text messages).
Our trust in technology
Phishing attacks exploit human character traits to their advantage. The first one is often our trust. We tend to trust people like us, people who belong to our group. We belong to many groups at the same time: family, friends, colleagues, fellow students, fellow researchers, sports team players or school parents. The closer and more intimate people are, the more likely we are to trust them. So, when we get an SMS from our best friend and see her or his picture on our smartphone, we tend to take it seriously and assume it is trustworthy (except maybe if our friend is a practical joker). It also means we trust technology enough to believe that the message really comes from our friend’s phone. If our phone shows our friend’s picture and name, it must be her or him. And we are largely right to assume so, as GSM technology seems challenging to hack.
However, if we receive an email from the same friend, with the same content, we should not place the same trust in it. Why? Because email technology is not as trustworthy as GSM, yet. An attacker could spoof our friend’s email address or use Punycode⁴ to create a look-alike of it. Even without that, we probably do not know our friend’s email address by heart. Most of the time, the only information we look at is the sender’s full name, not the email address. To cyber security specialists, or anyone with some computer literacy, this may seem careless. But for the average user, even the “simple” concept of a domain name may already be a step too far. As there is no “driving licence” required to surf the Internet and get a mailbox, we cannot assume any basic IT knowledge on the user’s side. That is normal. So, we should provide IT services that are safe and trustworthy. Email technology is not safe yet, but it could be. It should be. Technologies like anti-spoofing⁵, DNS domain checks⁶, SPF⁷ and DKIM⁸ have existed for years and are still not implemented or enforced on all email servers. If they were, we would be able to trust emails just like our SMS messages.
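To make the look-alike problem concrete, here is a minimal sketch, using only the Python standard library, of the kind of check an email client or gateway could run: it decodes any Punycode-encoded labels in a sender’s domain so a look-alike address stands out. The addresses are invented for illustration; a production filter would also use a confusable-characters database.

```python
from email.utils import parseaddr

def reveal_sender_domain(from_header: str) -> str:
    """Return the sender's domain with Punycode (xn--) labels decoded."""
    _, address = parseaddr(from_header)
    domain = address.rpartition("@")[2]
    labels = []
    for label in domain.split("."):
        if label.startswith("xn--"):
            try:
                # Decode the ASCII-compatible (Punycode) form back to Unicode.
                label = label.encode("ascii").decode("idna")
            except UnicodeError:
                pass  # leave malformed labels untouched
        labels.append(label)
    return ".".join(labels)

def is_lookalike(from_header: str) -> bool:
    """Flag senders whose decoded domain contains non-ASCII characters."""
    return not reveal_sender_domain(from_header).isascii()

# Hypothetical look-alike of geant.org using an accented 'e':
ace = "g\u00e9ant.org".encode("idna").decode("ascii")  # the raw xn-- form
print(reveal_sender_domain(f"GEANT Support <noreply@{ace}>"))  # géant.org
print(is_lookalike(f"GEANT Support <noreply@{ace}>"))          # True
```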
Enforcing those technologies everywhere would be tremendous progress. Still, it would not stop phishing completely. Criminals would still be able to create new look-alike domains, hijack existing email accounts or use different pretexts. We need humans to be more vigilant!
Hypervigilance
The big issue is that, although we would like people to be “human firewalls”, we did not hire them for that purpose. We want them to do their job: study, research, heal people, manage finances or IT systems. Being vigilant at all times is called hypervigilance. It is often the result of a traumatic event and, more importantly, an underlying cause of pathologies. Being vigilant requires extra energy. It is stressful. If we are cautious with each of the 150 to 500 emails we may receive every day, we will spend hours doing just that, every single day. Even more importantly, we will be exhausted. Not a good idea. What can we do then?
Nudges
We need to make it easier to spot phishing. Many little things can be changed to help users, because usability must apply to security features too. For example, we can make the email address visible when the contact is unknown to us or when the message comes from outside our organisation; this helps to spot look-alike domains or a sudden change in a friend’s email address (a sketch of such a nudge follows below). We can also tag suspicious emails, but only when we are sure they are suspicious: below 80 or even 90% accuracy, users seem to disregard the insights coming from automated systems (Chen et al., 2018).
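As an illustration, here is a minimal sketch of that first nudge: a gateway-style filter that tags messages from outside the organisation and surfaces the literal sender address next to the display name. The domain list and the tag wording are placeholders for the example, not an existing GÉANT mechanism.

```python
from email.message import EmailMessage
from email.utils import parseaddr

INTERNAL_DOMAINS = {"geant.org"}  # assumption: the organisation's own domains

def nudge_external_sender(msg: EmailMessage) -> EmailMessage:
    """Tag external mail and make the literal sender address visible."""
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rpartition("@")[2].lower()
    if domain and domain not in INTERNAL_DOMAINS:
        subject = msg.get("Subject", "")
        del msg["Subject"]
        msg["Subject"] = f"[EXTERNAL] {subject}"
        # Show the raw address next to the display name, where the eye lands.
        del msg["From"]
        msg["From"] = f'"{display_name} ({address})" <{address}>'
    return msg

msg = EmailMessage()
msg["From"] = "A Friend <friend@example.com>"
msg["Subject"] = "Quick question"
print(nudge_external_sender(msg)["Subject"])  # [EXTERNAL] Quick question
```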
Let us not forget to change the warning signal regularly to avoid habituation: when a warning signal is repeated too often, we tend to ignore it (Brinton Anderson et al., 2016). It is like getting dressed. We feel our clothes on our skin for a few minutes, or even seconds, and then we stop noticing them. When we change the signal, we start paying attention again, as the sketch below illustrates.
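Continuing the sketch above, the rotation could be as simple as keying the banner wording to the week number, so the signal changes before it becomes wallpaper. The texts and the weekly cadence are illustrative assumptions, not recommendations drawn from the studies cited.

```python
import datetime
from typing import Optional

# A small pool of interchangeable warning texts (illustrative wording).
BANNERS = (
    "CAUTION: this message comes from outside the organisation.",
    "External sender: check the address before clicking any link.",
    "This email was NOT sent from an internal account.",
)

def current_banner(today: Optional[datetime.date] = None) -> str:
    """Pick a banner by ISO week number so the wording rotates weekly."""
    today = today or datetime.date.today()
    week = today.isocalendar()[1]  # ISO week of the year, 1..53
    return BANNERS[week % len(BANNERS)]

print(current_banner())  # this week's wording
```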
Learning in context
Will it be enough? Probably not. We also need to train our users to spot a phishing email. But first, we need them to think before clicking. For some of us, reading email has become a habit. We go through our emails at a glance; we see a word related to something we expect, and it triggers an automated response: we click. Some experts say phishing is all about influence. That ignores the fact that any social event occurs in a context, and the context defines the way we react. When we drive a car, we hit the brakes with our right foot. When we are on a bike, we use our hands to do the same thing. We do not have to think about it: the context conditions even our reflexes. That is why we need to train people in context. We did not learn to drive a car by reading a book, nor did we learn to swim while standing at the side of the pool. We learned the theory from books or from a teacher.
Then, we had to sit in the driver’s seat or jump into the pool to acquire the required skills. Phishing exercises work the same way. They are like a vaccine: they allow our users to recognise phishing emails without the inherent dangers of the real thing. And like the flu, phishing emails come in different strains. We must train with all of them to be sure we will be able to detect and fight anything criminals throw at us. As with any training, it should give users rapid feedback when they detect a phishing email. It should also be progressive and tailored to people. A phishing email about exam results will likely work on students a few weeks per year; it is unlikely to work on people in financial services (see Goel et al., 2017, on this matter). Context is key.
Blaming is the path to failing
We should take care to keep a positive approach to phishing. Blaming users for something they might do by accident will not help. Worse, it might increase the workload of our helpdesk or our SOC: people might start reporting every suspicious email, spam, scam, or even internal communication just to avoid making a mistake. We would then create an unnecessary burden on our first line of support. We must make our people cautious, not paranoid.
Phishing exercises do not only provide training; they also keep our users vigilant. Just as we need to repeat flu shots, we need to repeat phishing exercises. In our experience, a monthly frequency seems optimal: it keeps people aware and sharp, and it takes only a few seconds of their time. That is not a high price to pay to avoid losing our network, our research work, or money.
A path
Research on phishing is still in its early stages. We have only just started to understand how it works, what makes us click and how we can improve. Still, we know very little for sure. For now, if we reduce exposure, facilitate detection and train our users, we are doing the best we can. It is teamwork, as all those changes require the involvement of different teams, even different entities. We all have our share to do, and when we do it, we will prevail.
Footnotes
1 Chief Information Security Officer
2 Malicious software that encrypts the data of the computer it infects, so that criminals can demand a ransom from the owner to restore the data.
3 Or Trojan horse: a seemingly legitimate piece of software that allows an attacker to take control of the computer on which it is installed.
4 Punycode is a way to encode non-Latin characters (such as Cyrillic or accented characters) in a domain name (like geant.org). As some of these characters look like Latin characters, they can be used to create look-alike domains: they resemble the real domain, but they are different and are controlled by the attackers.
5 A system preventing malicious users from sending emails that pretend to come from a domain or organisation they do not belong to. Without this protection, anyone could send an email to any user @geant.org and pretend to belong to GÉANT too (while they do not).
6 A control implemented on email servers to check whether the internet domain of an email’s sender actually exists.
7 Sender Policy Framework is a standard that lets email domain owners define which servers may send emails on their behalf, preventing unauthorised senders from pretending to come from their domain (see the sketch after these footnotes).
8 DomainKeys Identified Mail is a standard that allows email servers to sign outgoing messages and authenticate the sender’s domain using a digital signature. Every email sent by the domain’s email server is electronically signed, and the recipient’s server can verify the signature to ensure it is genuine.
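To make footnote 7 concrete, here is a minimal sketch, assuming the third-party dnspython package, that fetches the SPF policy a domain publishes in DNS. A real mail server runs a full SPF evaluator rather than just reading the record; the domain and the record shown are examples only.

```python
import dns.exception
import dns.resolver  # third-party: pip install dnspython (2.x)

def get_spf_record(domain: str):
    """Return the SPF policy published in the domain's TXT records, if any."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except dns.exception.DNSException:
        return None
    for rdata in answers:
        # A TXT record may be split into several character strings.
        txt = b"".join(rdata.strings).decode("ascii", "replace")
        if txt.startswith("v=spf1"):
            return txt
    return None

# A typical policy looks like "v=spf1 mx include:_spf.example.org -all":
# only the listed servers may send mail for the domain; all others fail.
print(get_spf_record("example.org"))
```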
References
- Brinton Anderson, B., Vance, A., Kirwan, C. B., Jenkins, J. L., & Eargle, D. (2016). From warning to wallpaper: Why the brain habituates to security warnings and what can be done about it. Journal of Management Information Systems, 33(3), 713–743.
- Chen, J., Mishler, S., Hu, B., Li, N., & Proctor, R. W. (2018). The description-experience gap in the effect of warning reliability on user trust and performance in a phishing detection context. International Journal of Human-Computer Studies. doi:10.1016/j.ijhcs.2018.05.010
- Goel, S., Williams, K., & Dincelli, E. (2017). Got phished? Internet security and human vulnerability. Journal of the Association of Information Systems, 18(1), 22–44.
About the author
Emmanuel Nicaise is a cyber security consultant with Approach and a researcher at the ULB (Brussels). With more than 25 years of experience in IT and cyber security and a master’s degree in psychology, he fosters cyber safety in organisations using psychology and neuroscience. He is currently pursuing a PhD in social psychology, focusing on trust and vigilance in our digital society. Phishing is presently his main area of research.
Read more on the GÉANT Cyber Security Month 2020: https://connect.geant.org/csm2020