The rapid evolution of Artificial Intelligence (AI) is reshaping how organisations approach risk, resilience, and digital trust. But what does today’s AI environment really mean for GÉANT and for the security of the services that NRENs rely on every day? In a new white paper, GÉANT examines these questions with clarity and focus.
AI now sits at the centre of global technological change. Large language models (LLMs) and agent-driven AI systems promise efficiency and automation, while at the same time raising concerns about accuracy, control, and exposure to new threats. In this context, GÉANT carried out a detailed analysis of how current AI capabilities are being used in both offensive and defensive security, and what this means for GÉANT’s future security posture.
The resulting white paper, Effects of the Current AI Ecosystem on Future GÉANT Security by Scott Campbell, Security Architect at GÉANT, emphasises a crucial starting point: AI is just another technology and must be evaluated for its real benefits, real limitations, and real risks.
Where AI is making a difference today
The paper reviews a wide range of security reports and industry studies and finds that AI is not, as often claimed, enabling a wave of sophisticated new cyberattacks. Instead, most malicious use of AI falls into one powerful category:
AI-enhanced social engineering
Phishing, vishing, and impersonation are where AI gives attackers genuine leverage. LLMs can generate convincing, personalised emails in seconds. Voice-cloning tools can create realistic phone calls that imitate colleagues or support staff. Controlled experiments have shown that even trained individuals can be misled by AI-driven interactions costing less than a euro to produce. For an organisation as interconnected as GÉANT, with distributed teams, cross-institution workflows, and high-value access credentials, this rising sophistication represents a tangible threat.
Malware and reconnaissance
AI-assisted malware generation remains limited, but is improving. Meanwhile, experimental tools for automated vulnerability discovery can help attackers rapidly analyse systems at scale. These are not yet widely effective, but the trajectory is noteworthy.
AI “Personal Assistants”
The biggest emerging concern may be the growing ecosystem of AI assistants that read emails, calendars, messages, and documents. Granting these tools unrestricted access to personal or corporate accounts concentrates risk in ways that traditional controls cannot fully manage.
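The concern above is essentially one of over-broad authorisation: an assistant granted full account access concentrates every permission in a single credential. As a purely illustrative sketch (the scope names and `authorize` helper are hypothetical, not taken from the white paper), an explicit, narrow scope grant keeps a compromise contained:

```python
# Hypothetical least-privilege sketch: the assistant receives an explicit,
# narrow set of scopes rather than unrestricted account access.
ASSISTANT_SCOPES = {"calendar:read", "mail:read"}  # deliberately narrow grant

def authorize(requested: str, granted: set = ASSISTANT_SCOPES) -> bool:
    """Permit an assistant action only if its scope was explicitly granted."""
    return requested in granted

# Reading the calendar is allowed; acting on the user's behalf is not.
assert authorize("calendar:read")
assert not authorize("mail:send")
```

The design point is simply that a denied-by-default scope check bounds what a hijacked or misbehaving assistant can do, which traditional perimeter controls cannot guarantee once full credentials are shared.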
Is AI mature enough to defend?
Despite high expectations, defensive uses of AI remain immature. Many security applications require determinism, precision, transparency, and auditability – qualities that LLMs do not naturally provide. While AI can help with summarising logs or automating routine tasks, fully autonomous defensive systems remain impractical. The report advises caution: solutions should only be adopted when they solve a clearly defined need and when their effectiveness can be measured.
Implications for GÉANT’s security environment
GÉANT’s security responsibilities extend across the CISO team, GÉANT CERT, the SOC, Trust & Security, Digital Services, data protection, and governance. The white paper highlights several areas where AI-related risks intersect with policy and practice:
Stronger, phishing-resistant MFA
As impersonation threats grow, MFA becomes even more essential. Legacy methods such as SMS or email codes are increasingly vulnerable, while hardware tokens and modern authentication standards offer far stronger protection.
Clear and consistent verification processes
Attackers thrive on ambiguity. Standardised workflows for password resets, approvals, and high-risk requests reduce opportunities for manipulation, especially when AI-generated messages mimic legitimate communication.
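A minimal sketch helps show why code-based MFA stays phishable while hardware tokens do not. The standard time-based one-time password (TOTP, RFC 6238) derives a short code from a shared secret and the clock – and any code a user can read, a convincing AI-generated phishing page can ask for and relay in real time. Phishing-resistant methods such as FIDO2/WebAuthn hardware tokens instead sign a challenge bound to the site’s origin, so there is no code to steal. The implementation below is an illustrative stdlib sketch, not code from the white paper:

```python
# Minimal RFC 6238 TOTP sketch. The 6-digit result is just data:
# anything the user can read, a phishing page can relay in real time.
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Derive a time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))       # 30-second time window
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 test secret, `totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59)` yields `"287082"` – a value that is valid for anyone who obtains it within the time window, which is exactly the property phishing-resistant authentication removes.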
Better visibility of cloud applications
Many of the cloud services in use do not yet implement robust authentication or generate sufficient security logs. Improving visibility and classification will help staff distinguish legitimate tools from fraudulent ones.
Updated training and awareness
Users must understand how AI-generated phishing, deepfakes, and impersonation attempts differ from traditional scams and how to verify suspicious communications safely.
Watching AI-driven tools as they evolve
Most AI-enabled attacks today are simple. But the pace of development is rapid, and maintaining situational awareness is essential.
Charting a safe path forward
The white paper concludes with a pragmatic message: AI should not be adopted for its novelty, but for its proven value. The core of security – strong authentication, reliable processes, informed users – remains unchanged. What AI changes is the shape of certain threats, especially those relying on human trust and communication.

The report emphasises a crucial truth: technology alone is never enough. Human-centric security remains essential, especially as AI makes misinformation and impersonation easier. By approaching AI with curiosity, caution, and a commitment to evidence rather than hype, GÉANT and Europe’s NRENs can position themselves not just to react to emerging risks, but to lead in shaping secure, resilient, future-ready digital infrastructure.