
Cybersecurity: where do universities and research stand?

Picture by Freepik

By Carlo Volpe, Head of Communications at GARR

From the frontiers of AI to collaboration with public institutions and companies: here’s how the world of universities and research is tackling security challenges.

In the field of ICT, the focus on security is growing, as is the complexity behind it. During the GARR 2024 Conference in Brescia, experts discussed the current cyber landscape and the role the academic and research communities can play in supporting the prevention, detection, and response to cyber-attacks.

A roundtable discussion with distinguished guests from academia and public administration, moderated by Ilaria Comelli, head of the IT Security Organisational Unit at the University of Parma, highlighted that collaboration between different entities is key to making significant progress in protecting digital infrastructures and evolving defensive techniques. In this field, AI, now indispensable, can provide concrete help not only in developing more sophisticated techniques and conducting better analyses but also in training new professionals.

An ecosystem open to collaboration: universities, research, public administration and businesses

University labs, in partnership with private companies and government institutions, often serve as incubators for innovative ideas and new approaches to cybersecurity. These partnerships are essential in a field where the complexity and constant evolution of threats demand ongoing updates and new skills.

Professor Simon Pietro Romano from the University of Naples Federico II presented real-world examples, such as the collaboration with the National Anti-Mafia Directorate (DNA), in which the University of Naples and other universities, united under the National Cybersecurity Laboratory of CINI (the National Interuniversity Consortium for Informatics), are working to redesign digital processes so as to enhance the security and efficiency of the DNA's IT infrastructure. These collaborations merge academic theory with the practical needs of government bodies and companies, resulting in effective and innovative solutions.

“We’ve long realised that as professors dealing with security, we can’t just be academics. We need to contribute practically, and this changes how we teach in the classroom,” said Professor Romano.

AI as a tool in cybersecurity

The adoption of AI has fundamentally changed how cyber-attacks are managed. With its ability to analyse vast amounts of data in real time, AI can identify abnormal behaviours, detect vulnerabilities, and prevent threats before they cause significant damage.
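To make the idea concrete, here is a minimal sketch of that kind of anomaly detection. It is not taken from the work discussed at the conference: the choice of scikit-learn's IsolationForest and the made-up flow features are purely illustrative of how a model trained on baseline traffic can flag flows that deviate from it.

```python
# Minimal illustrative sketch: unsupervised anomaly detection on made-up
# network-flow features. Library choice and feature values are assumptions,
# not the method used by the researchers quoted in this article.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated baseline traffic: [bytes sent, packets, distinct destination ports]
normal = rng.normal(loc=[5_000, 40, 3], scale=[1_000, 10, 1], size=(500, 3))

# A couple of suspicious flows, e.g. a port scan touching many ports with tiny payloads
suspicious = np.array([[200.0, 300.0, 120.0],
                       [150.0, 250.0, 90.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns +1 for inliers and -1 for outliers
print(model.predict(suspicious))   # expected: [-1 -1]
print(model.predict(normal[:3]))   # mostly [1 1 1]
```

The same pattern, learning what "normal" looks like and flagging departures from it, is what makes real-time analysis of large traffic volumes possible at scale.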

One promising area is the use of technologies that simulate vulnerable environments, like honeypots and darknets, to monitor cyber-attacks. As explained by Professor Marco Mellia from the Polytechnic University of Turin: “AI can be used to analyse millions of data packets gathered from honeypots, identifying new attack patterns and improving defence against botnets and other threats”. At the Polytechnic University of Turin, they’ve also seen promising results in using large language models (LLMs) to predict malicious IPs based on their registration sequences and in combating cybersquatting, where fake domains are created to deceive users. In this case, LLM techniques can identify over 70% more domains that phonetically resemble legitimate ones, helping to prevent their activation.
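To illustrate the phonetic-resemblance idea in the simplest possible terms, the sketch below flags candidate domains that sound like legitimate ones. The brand list, function names and the simplified Soundex encoding are assumptions made for the example; they are not the Turin group's LLM-based pipeline.

```python
# Illustrative sketch only: flag domains that phonetically resemble known names
# using a simplified Soundex encoding. Brand list and names are hypothetical.
import itertools

def soundex(word: str) -> str:
    """Simplified Soundex: map a word to a short phonetic code."""
    codes = {
        **dict.fromkeys("bfpv", "1"),
        **dict.fromkeys("cgjkqsxz", "2"),
        **dict.fromkeys("dt", "3"),
        "l": "4",
        **dict.fromkeys("mn", "5"),
        "r": "6",
    }
    word = "".join(c for c in word.lower() if c.isalpha())
    if not word:
        return ""
    encoded = [codes.get(c, "") for c in word[1:]]
    # collapse runs of identical digits and drop vowels/unmapped letters
    digits = [d for d, _ in itertools.groupby(encoded) if d]
    return (word[0].upper() + "".join(digits) + "000")[:4]

LEGITIMATE = ["paypal", "google", "garr"]   # hypothetical watch-list

def phonetic_matches(domain: str) -> list[str]:
    """Return the legitimate names a candidate domain phonetically resembles."""
    name = domain.split(".")[0]
    return [brand for brand in LEGITIMATE
            if name != brand and soundex(name) == soundex(brand)]

for candidate in ["payypal.com", "guugle.net", "example.org"]:
    hits = phonetic_matches(candidate)
    if hits:
        print(f"{candidate} sounds like {hits}")
```

An LLM-based approach such as the one described by Mellia goes well beyond a fixed encoding like this, but the goal is the same: spotting sound-alike domains before they are activated.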

A critical aspect of AI’s role is its ability to support human operators in decision-making. In cybersecurity, it’s not enough to just block a threat; understanding the reasons behind it is essential. As highlighted by Professor Stefano Zanero of the Polytechnic University of Milan, AI must be “explainable” — capable of transparently explaining why certain decisions were made, ensuring that AI-driven choices are reliable and understandable.

Beyond research, AI also impacts organisational tasks. As noted by Roberto Caramia, Head of CSIRT Italy at the National Cybersecurity Agency (ACN), AI tools reduce analysts’ workload by automating repetitive tasks, like threat classification and alert management. This frees up skilled resources and accelerates the onboarding process for new personnel, which is increasingly necessary.

To defend well, you must know how to attack

Despite the potential of AI and successes in some areas, challenges remain because of the sheer volume and complexity of threats and the speed at which they evolve. Hence the emphasis on actively and continuously retraining learning systems: conditions change constantly, and a malicious IP, for example, may no longer be harmful after just a few minutes, so defence techniques need continuous updating.
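As a hypothetical sketch of that last point (the five-minute time-to-live and the class name are invented for illustration, not a recommended configuration), a blocklist can honour an entry only while the evidence behind it is still fresh:

```python
# Hypothetical sketch: a blocklist whose entries expire after a short
# time-to-live, reflecting the fact that an IP flagged as malicious may be
# harmless again within minutes. TTL value and names are illustrative.
import time

TTL_SECONDS = 300   # assumption: treat evidence as stale after 5 minutes

class ExpiringBlocklist:
    def __init__(self, ttl: float = TTL_SECONDS):
        self.ttl = ttl
        self._flagged: dict[str, float] = {}   # ip -> time it was last flagged

    def flag(self, ip: str) -> None:
        """Record that an IP has just been observed behaving maliciously."""
        self._flagged[ip] = time.monotonic()

    def is_blocked(self, ip: str) -> bool:
        """Block only if the IP was flagged within the last `ttl` seconds."""
        flagged_at = self._flagged.get(ip)
        if flagged_at is None:
            return False
        if time.monotonic() - flagged_at > self.ttl:
            del self._flagged[ip]              # evidence has gone stale
            return False
        return True

blocklist = ExpiringBlocklist()
blocklist.flag("198.51.100.7")                 # documentation-range example IP
print(blocklist.is_blocked("198.51.100.7"))    # True immediately after flagging
```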

Prevention requires a deep understanding of what you’re up against. Simon Pietro Romano explained it well: “The ability to create algorithms capable of autonomously generating malicious code represents one of the sector’s new challenges. The goal isn’t just to protect systems from existing threats but to develop tools that can anticipate and prevent future attacks.”

Achieving this doesn't just require being a good defender but also understanding offensive techniques. At the Federico II University of Naples, research on software exploits has produced encouraging results: identifying code vulnerabilities demands advanced programming skills and a deep understanding of operating systems, capabilities that AI already possesses. The study went a step further by letting two LLMs (Meta's LLaMA-2 and OpenAI's ChatGPT) interact autonomously, without human intervention, to generate effective attack code. Tasking AI with creating malicious code opens up new scenarios for understanding adversaries' strategies.

The balance of power between attacker and defender was also addressed by Stefano Zanero, who pointed out that everything depends on the attacker’s motivation: “If the attacker is financially motivated, defence is relatively simple: just make the attack unprofitable, and you can raise security levels proportionally to the size of the organisation. However, if the attacker is a state actor, as in cases of cyberwar, or has sabotage or influence campaign goals, defence becomes much harder, as attack models don’t align with the defensive budgets of companies or critical infrastructures. In this scenario, total prevention becomes almost impossible, so it’s more accurate and realistic to talk about resilience — the ability to withstand and survive the attack”.

Training the next generation

A crucial aspect, affecting the entire ICT sector, is training the next generation of experts. The shortage of specialised professionals is one of the main challenges, and universities have the responsibility to bridge this gap. Some initiatives are already in place, such as the “Hackademy”, promoted by the University of Naples Federico II in collaboration with industry, or dual-use projects — with both civilian and military purposes — where universities work closely with defence entities to simulate cyber-attacks and test defence methods. These collaborations create a highly skilled workforce, ready to manage both commercial and national security threats.

These are significant efforts, but much remains to be done, especially as the speed at which threats need to be addressed requires us to move faster than ever.


About the author

Carlo Volpe is the Head of Communications at GARR, the Italian Research and Education Network that provides ultra-broadband connectivity to the education, research and culture community. He handles institutional communication and relationships with users, and looks after media relations, the design and layout of graphic information materials, web and editorial content, and corporate and institutional events.


Also this year, GÉANT joins the European Cyber Security Month with the campaign ‘Your brain is the first line of defence’. Read articles from cyber security experts within our community, watch the videos, and download campaign resources on connect.geant.org/csm24
