Words: Daniella Vendramini, Team Lead for IT Compliance within ICT Security Services at HEAnet
“Artificial Intelligence can bring innovation and efficiency to the education sector, but only when it is guided by trust, transparency, and responsibility.”
Why AI policies matter for education and research
Artificial Intelligence (AI) is no longer a distant concept. It is already shaping how we live and work, from writing assistants and image generators to translation tools and data analysis. Across industries, AI brings new opportunities for efficiency, creativity, and discovery.
Education and research are no exception. Universities and institutions are exploring how AI can support teaching, streamline administration, and open new paths for research. For instance, staff may use AI to summarise documents or generate draft content, while researchers test AI to analyse large datasets. However, with these opportunities also come risks, such as data leaks, automated decisions without oversight, and uncertainty about fairness or accuracy.
In many conversations with our community, the same questions arise: Can we use AI safely? What rules should guide us? How do we balance innovation with compliance? These questions highlight a clear need for practical and transparent policies that help institutions use AI with confidence.
Why AI policies matter for compliance
The importance of policies is not only practical but also regulatory. The new ISO/IEC 42001:2023 standard (Artificial Intelligence Management System – AIMS) explicitly requires organisations to establish documented AI policies for their governance and risk management.
At the same time, although the EU AI Act, Europe’s first comprehensive law on Artificial Intelligence, does not use the word “policy”, it sets obligations on transparency, risk management, data governance, and human oversight. In practice, these obligations cannot be met without clear internal rules and procedures.
Alongside international standards and European regulations, we also look to national best practice. In Ireland, the Guidelines for the Responsible Use of AI in the Public Service, published on 7 May 2025 and last updated on 5 August 2025, provide clear principles for transparency, accountability, and fairness. These guidelines are supported by the AI Guidelines and Resources for the Irish Public Service, which help public bodies adopt AI responsibly while protecting trust and integrity.
As Ireland’s National Research and Education Network (NREN), HEAnet ensures that our approach to AI policies is consistent with this national guidance. This not only supports compliance but also strengthens the culture of responsible innovation across the education and research sector.
Managing AI risks in practice
The EU AI Act takes a risk-based approach, classifying AI systems into unacceptable, high, limited, and minimal risk categories. High-risk systems must meet strict requirements, such as documentation, transparency, human oversight, and risk management processes.
For institutions, this means that understanding which category an AI tool belongs to is essential before adoption. Tools used informally or without approval, often referred to as shadow AI, present a challenge. Even if they seem harmless, they can create compliance risks and undermine trust if not properly governed.
This is why policies must be paired with visibility: mapping what AI tools are being used, for what purpose, and whether they meet the required standards. In future, detection tools may also be necessary to identify and manage unauthorised AI use.
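To make this mapping concrete, here is a minimal sketch in Python of what such a register could look like: each tool recorded with its purpose, a self-assessed EU AI Act risk category, and an approval flag. The tool names, categories, and review rule below are illustrative assumptions, not real assessments or a prescribed format.

```python
# A minimal sketch of an AI tool register, assuming each tool is recorded
# with its purpose, a self-assessed EU AI Act risk category, and whether it
# has passed internal review. Tool names and categories are illustrative.
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited under the EU AI Act
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class AITool:
    name: str
    purpose: str
    risk: RiskCategory
    approved: bool  # has the tool passed internal review?

register = [
    AITool("DraftAssist", "summarising internal documents", RiskCategory.MINIMAL, True),
    AITool("GradePredict", "predicting student outcomes", RiskCategory.HIGH, False),
]

# Flag anything high-risk, prohibited, or used without approval (shadow AI).
for tool in register:
    if tool.risk in (RiskCategory.UNACCEPTABLE, RiskCategory.HIGH) or not tool.approved:
        print(f"Review required: {tool.name} ({tool.risk.value}, approved={tool.approved})")
```

Even a lightweight register like this gives an institution the visibility needed to spot shadow AI and to prioritise which tools need review first.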
Staff cannot claim “AI made me do it.”
Finally, clear policies reinforce that accountability always remains with people, not with the technology. Staff cannot claim “AI made me do it.” Just as with any other system, users are responsible for how AI is applied. Strong policies help to protect institutions by confirming that responsibility lies with the person in control, not the tool.
Collaborating with our clients
Within the ICT Security Services (ICTSS) Team at HEAnet, we support clients by developing and reviewing information security policies, running awareness sessions, and helping map risks. When it comes to AI, our role is to turn uncertainty into clarity.
Together with education and research organisations, we have worked on:
Acceptable vs. unacceptable use: Defining what Artificial Intelligence can and cannot be used for, such as allowing brainstorming or administrative support, but prohibiting use with personal data or final decisions without oversight.
Transparency rules requiring disclosure: Making sure that whenever Artificial Intelligence is used, this is clearly acknowledged. Whether in teaching materials, administrative reports, or research drafts, disclosure builds trust and allows others to judge the reliability of the work.
Human oversight: Keeping people in control of decisions that affect teaching, administration, or research. Artificial Intelligence can support efficiency, but human judgement is essential to protect fairness, accountability, and trust.
Risk-based approaches: Aligning with the EU AI Act and other regulations, as well as international standards, by classifying Artificial Intelligence systems as minimal, limited, or high risk, with high-risk tools requiring additional checks.
Data protection: Making sure no personal or confidential data is shared with Artificial Intelligence tools. Protecting students, staff, and research data is fundamental to building trust in how AI is used.
To move from principles to practice, we also recommend the use of practical tools such as checklists, decision matrices, and flowcharts. These tools translate complex requirements into simple steps, helping staff quickly decide if and how they can use AI in their work. For example, a checklist may ask: Are you using personal data? Is there human review? Is the tool approved? By answering these questions, staff can move from uncertainty to clarity quickly and with confidence.
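As an illustration, such a checklist could be captured in a few lines of Python. The three questions mirror the example above; the decision rule itself is an assumption and would need to reflect each institution’s own policy.

```python
# A minimal sketch of the decision checklist described above. The questions
# mirror the example in the text; the decision rule is an assumption and
# should be adapted to each institution's own policy.
def ai_use_checklist(uses_personal_data: bool,
                     tool_is_approved: bool,
                     has_human_review: bool) -> str:
    """Return a simple go/no-go recommendation for a proposed AI use."""
    if uses_personal_data:
        return "Stop: do not share personal or confidential data with AI tools."
    if not tool_is_approved:
        return "Stop: request approval for this tool before using it."
    if not has_human_review:
        return "Caution: add human review before relying on the output."
    return "Proceed: within policy. Remember to disclose that AI was used."

# Example: an approved tool, no personal data, with human review in place.
print(ai_use_checklist(uses_personal_data=False,
                       tool_is_approved=True,
                       has_human_review=True))
```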
“When it comes to AI, our role is to turn uncertainty into clarity.”
Structuring an AI management policy
When supporting institutions, the ICTSS team follows a structured approach inspired by the EU AI Act and ISO/IEC 42001 (AI Management System, AIMS). This is not a mandatory model, but a flexible framework that can be adapted to each organisation’s context and maturity.
The aim is to make sure policies are aligned with recognised standards, while remaining easy for staff to understand and apply. A practical structure usually includes:
[Figure: AI Management Policy structure]
We always remind institutions that there is no single “correct” model. What matters is creating a policy that works for their culture, resources, and readiness.
For institutions starting this journey, we suggest some simple steps:
[Figure: AI Management Policy steps]
Always leading by example
At the same time, we know that credibility comes from practice. That is why HEAnet also introduced its own internal AI policy this year. By doing so, we show that we hold ourselves to the same standards we recommend to others. This dual perspective, as a service provider to clients and as an organisation applying AI ourselves, gives us unique insight into what works and what challenges still remain.
Lessons and tips
One of the lessons we have learnt is that a policy alone is not enough. Staff need to know about it, understand it, and feel supported in applying it. Leadership is vital in this process. At HEAnet, our Security Services Manager reminded staff that failure to follow AI policy could lead to disciplinary action, and that deliberate misuse violating privacy, safety, or ethics would be taken seriously. At the same time, we ran awareness sessions where colleagues could ask questions and raise concerns. This balance of clear leadership and open dialogue helped ensure that the policy became more than a document: it became part of our culture.

Looking back, here are five lessons we would share with other organisations beginning their AI policy journey:
Extending the conversation
Earlier this year, at our Client Security Forum, we placed Artificial Intelligence at the centre of discussion with our partners. Together we explored opportunities and risks, from AI-powered phishing and password-cracking to the absence of clear rules in many institutions.
What resonated most was a simple checklist: Who is using AI? What for? And is there human oversight? This proved to be a powerful way to start the conversation. It showed that governance and awareness must always go hand in hand.
Moving forward together
From our work with institutions and our own internal journey, we have learnt that AI policies are more than compliance. They are about building trust and creating conditions where innovation can happen safely. The education and research community will continue to face new challenges as AI evolves. However, by sharing lessons and supporting one another, we can ensure that AI becomes a positive force for progress.
This is our call to the GÉANT community: let us lead by example, learn from each other, and build a culture of responsible AI together.
About the author
Daniella Vendramini is Team Lead for IT Compliance within ICT Security Services at HEAnet, with over a decade of experience in cybersecurity governance, risk, and compliance across education, government, healthcare, and finance sectors. Before joining HEAnet, Daniella held senior roles in Ireland, France, Portugal and Brazil, advising organisations on international standards and regulations, including ISO/IEC 27001 and ISO 22301, GDPR, and DORA. Her work has ranged from policy development and audit readiness to privacy strategies and supply chain risk management. Daniella is passionate about building a strong security culture and supporting the research and education community in safely and responsibly adopting emerging technologies, such as AI.
GÉANT Cybersecurity Campaign 2025
Join GÉANT and our community of European NRENs for this year’s edition of the cybersecurity campaign: “Be mindful. Stay safe.” Download campaign resources, watch the videos, sign up for webinars and much more on our campaign website: security.geant.org/cybersecurity-campaign-2025/