In Focus Security

Outsmarted by a supreme being?

The hype curve on artificial intelligence has reached unprecedented peak among investors, governments and developers alike. And yes, isn’t it amazing to get professional sounding advice, which you can use as your own, gaining respect for your skills and wisdom? But is AI really the Pandora’s box that will enslave and stupefy people?

Us working in cybersecurity have the reputation to have tried to spoil many amazing innovations by highlighting risks, regulations and compliance. The list is long and includes personal workstations, mobile devices, cloud services, federated identity management – just to mention a few of the earlier ones. But give me however the chance to explain what AI-related risks – I believe – we should be aware of. If you don’t trust me, you can always challenge me by asking the AI prompt to tell you the very truth on such risks.

The AI revolution might seem to be embedded in a cloud of hype, but where’s the beef? Security thinking is actually very straightforward, you find the beef by recognising the assets to protect, risks and feasible security controls to be applied. In the context of AI, the asset is our data and the resources spent to use the service.

The obvious risks vary from unconsciously leaking our data to other parties, getting biased by the prompt, to becoming irrevocably dependent of the service. Risks assessments should also include positive aspects, although those are seldom included. There are of course huge opportunities and benefits to develop productivity and competencies by utilising AI.

In principle, everything sounds like adapting to yet another new technology. But there is a catch.

During the good old times you could always suppose, or at least hope, that you were much smarter than the bad guys, as long as you used proper passwords, didn’t click on fishy links and had your gear reasonably patched.  In my home country, Finland, we were frequently protected by our exotic non-Indo-European language, which revealed to be a fraudster when using automatic translation for phishing emails. Sometimes a careless or confused person could become a victim of a cyberattack, but the majority have been fairly safe.

The proven ancient fraudulent methods are still universal, people can always be deceived by the promise of easy money, promotions, fame and love.  Emotionally we have not evolved much, if at all, from our ancestors in Africa who started to migrate to different regions a long, long time ago.

Although I picture myself reasonably well-skilled to identify frauds, I must admit that once I almost fell for an attack when the fraudster had found my weak spot. Years ago, I began receiving emails from a mysterious and supposedly stunning young woman in Russia who claimed to be madly in love with me. I wasn’t particularly impressed. But then one day, she told me that every time she joined a man on a fishing trip, he’d get an extraordinary catch. To prove it, she even sent a photo of five large pikes – each weighing over 10 kilos. I have to admit, she nearly had me. For a moment, I actually considered to send her money for a train ticket from Novosibirsk to Helsinki.

For AI related threats the game has changed.

On the user and consumer side, I am sure that people do not use almost any precautions about what they disclose to the prompt. A lot of secure psychology is involved here. People don’t feel that the prompt is a semi-hostile external service, but their true and trusted friend who has the most genuine and sincere reasons to support and comfort you.

Wrong.  By default, the old truth for at least free cloud service is that you are not the customer, you and your data are the pig in the barn to be butchered and sold.  This is true also about most ‘free” web resources that collect your web cookies to  be shared with global ad tech networks and brokers, which collect meticulous data on our behaviour because opting-out from non-essential cookies has been made such a tedious process.

When using AI, the question is, how can we know if our prompt history is used to fine tune the LLM (Large Language Model) the AI is based on in addition to context-learning and user -level personalisation? A new type of attack vectors, called prompt ware, are also developed to compromise the confidentiality and integrity of the LLM. Be susceptible on how well your prompt history is isolated and protected.

You can become a bit paranoid when thinking about all of this. I have used both LeChat and ChatGPT to find suitable wordings when writing this blog. When I started to write about the risk, I got the message “Unusual activity has been detected from your device. Try again later.”

Another threat is so called RAG poisoning where someone deliberately injects malicious or biased content into the LLM retrieval corpus. Previously LLMs had a cut-off date for feeding new information into the model, but currently most services are using  online searches on demand called Retrieval-Augmented Generation (RAG) to provide current information. As a matter of fact, the LLMs are already now better search engines than the actual search enginges as search engine optimation (SEO) has poisoned the search results with commerical jibberish. But, trust my word, it will not take long before also the LLMS are poisoned with intentionally biased information. You have been warned about the risks. Your security controls are about the use of common sense and precautions on what you feed to the LLM. Also, I recommend, if possible, to prefer European AI services, where integrity of user data and privacy is traditionally respected, although mistakes, omissions and even abuse can happen everywhere.

When I asked LeChat to tell me about myself, it politely told me that it is not allowed to disclose information about identifiable natural persons. ChatGPT told me everything it could find. I didn’t even dare to ask what Grok thought of me.

Never enter personal, private or sensitive data in the prompt. The prompt is not your true friend, notwithstanding the enormous amounts of very useful information it can share.

From an organisation and a policy makers point of view the risks are more dire, as you are liable for protecting the data of your users. AI could generate disruptively advanced and efficient cyberattacks learning from vast data sources, data entered by users and APIs could be exploited or exposed, but AI can also be used as a tool to improve and enhance our current security controls and procedures. But this can be a topic for another blog.

I think we can say that there is no smoke without fire, AI and the LLMS can provide previously unthinkable opportunities for the development of mankind. But there are also huge caveats, AI can also exploit and destroy. As a remedy, we should apply common sense and decency, enforce reasonable regulations and policies to mitigate risks.

As Dallas Korben tells the supreme being Leeloo in the 1997 movie ‘The Fifth Element’: I love you. I need you. Without you, there is no hope for any of us.


About the author

Urpo Kaila
Urpo Kaila, Head of Cyber Security Policies at CSC

Urpo Kaila is the Head of Cyber Security Policies at CSC – IT Center for Science Ltd. and member of the steering group for GÈANT interest group for information security management. Urpo is also chair for the EOSC Cybersecurity Subgroup, security officer for LUMI supercomputer  and particpates in a security roles in EuroQCI and HPC collaborations. He has an extensive history in promoting understanding and skills on cybersecurity  and data protection.

 

 


GÉANT Cybersecurity Campaign 2025

Join GÉANT and our community of European NRENs for this year’s edition of the cybersecurity campaign: “Be mindful. Stay safe.” Download campaign resources, watch the videos, sign up for webinars and much more on our campaign website: security.geant.org/cybersecurity-campaign-2025/

 

Skip to content