How can universities embrace AI-driven innovation while safeguarding privacy, data security, and strategic autonomy?
As artificial intelligence (AI) continues to reshape the higher education and research sector, this question is becoming increasingly important. Universities must not only innovate but also maintain their role as trusted custodians of ethical values, privacy, and cybersecurity.
For Marlon Domingus, data protection officer and AI lead at Erasmus University Rotterdam, addressing these challenges while harnessing AI’s potential is at the core of his work.
“Both privacy and AI are very dynamic topics, and not much is clear and settled yet, legally or technically. Fortunately, because AI is cool right now, people are willing to talk about it, even the tough ethical and security questions.”
Here, Marlon discusses how AI is transforming data protection in education and research, how universities can rise to the ethical challenges it presents, and why thoughtful, responsible use of AI is more important than ever.
AI’s impact on data protection in education and research
Marlon identifies three main areas where AI is affecting data protection and data governance strategies and practices in the education and research sector:
- AI algorithms in research: AI is enabling new capabilities, like personalised medicine, through research-focused algorithms.
- AI tools in education: AI tools, such as large language models (LLMs) like ChatGPT, raise concerns about plagiarism and require rethinking of assessment strategies.
- Procurement and use of AI-powered platforms: Increased reliance on external platforms raises questions about data security, transparency, and strategic autonomy.
Each of these areas presents unique opportunities and challenges. Marlon explains more below:
1. AI algorithms in research: Breakthroughs and data protection
At Erasmus University and elsewhere, AI is used in developing algorithms for research, including for public-private collaborations in fields like logistics, crime prevention, and healthcare.
“Because of AI, we can now do things people have been wanting to do for years, like delivering personalised medicine. These advances have a big impact on society, and also on the way we do research.”
However, increasing reliance on algorithms means ever-larger amounts of data being collected and processed with AI, which introduces new security risks. As research processes evolve, data protection strategies must be updated so that individuals’ personal information and sensitive research data remain adequately safeguarded.
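What “updating data protection strategies” can mean in practice is easiest to see with a small example. The sketch below is a hypothetical illustration of one common privacy-by-design step — replacing direct identifiers with keyed pseudonyms and coarsening attributes before research data enters an AI workflow. It is not drawn from Erasmus University’s actual pipeline, and the field and function names are invented.

```python
# Minimal sketch of pseudonymisation before analysis; hypothetical field and
# function names, not any university's actual research data pipeline.
import hashlib
import hmac

# In practice the key would live in a secrets store, separate from the dataset.
SECRET_KEY = b"replace-with-a-key-stored-outside-the-dataset"

def pseudonymise(value: str) -> str:
    """Return a stable pseudonym: records stay linkable without exposing identity."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def prepare_for_analysis(record: dict) -> dict:
    """Drop direct identifiers and keep only what the research question needs."""
    return {
        "subject_id": pseudonymise(record["national_id"]),  # linkable, not identifying
        "age_band": record["age"] // 10 * 10,                # coarsen instead of raw age
        "diagnosis_code": record["diagnosis_code"],
    }

raw = {"national_id": "NL123456789", "name": "J. Jansen",
       "age": 47, "diagnosis_code": "E11"}
print(prepare_for_analysis(raw))
# e.g. {'subject_id': '<16-char pseudonym>', 'age_band': 40, 'diagnosis_code': 'E11'}
```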
2. AI tools in the classroom: Redefining teaching and assessments
AI tools like ChatGPT are becoming an integral part of university life, used for everything from lesson planning to student engagement.
“LLMs are having a big impact on education. We are redefining how we see education, how we want to test and assess students’ knowledge and skills,” Marlon says.
Although initial responses to the widespread arrival of LLMs focused on plagiarism concerns, the benefits such tools offer soon became clear.
“We’re seeing the good these innovations bring: how teachers use tools like ChatGPT to create more tailored and engaging lessons for students from different disciplines, for example. It takes more preparation time, but one could say it also provides better education.”
AI’s potential to deliver more personalised and tailored learning experiences is particularly promising, although Marlon is mindful of the risk that this could lead to greater isolation.
“The challenge is to avoid creating a society of disconnected individuals, each with their own AI assistant. We need to make sure we don’t lose the human connection in education. It’s not just about hedonism 2.0.”
Erasmus University has developed its own Erasmus Language Model (ELM), involving students in every stage from data collection to application. ELM helps students understand how LLMs work, including their carbon footprint and energy use, and because it is built on licensed publications, it avoids intellectual property issues.
3. AI-powered platforms: Procurement and strategic autonomy
Universities’ growing reliance on external AI platforms — many of which are based outside Europe and may not align with GDPR standards — raises questions about data security, transparency, and strategic autonomy.
These tools often come as black boxes, with algorithms that aren’t fully documented or transparent. This makes it difficult for universities to fully understand what data is being collected, how it’s used, and whether it’s being adequately protected. The lack of transparency and control makes universities more vulnerable to cyber attacks, data breaches, and other security threats.
“We’re all increasingly dependent on external solutions, especially from the US,” Marlon says. “That triggers really important discussions about strategic autonomy and what it means for our data security.
“One upside is that we’re redefining things as we go along. And the European AI Act helps in navigating these challenges.”
Ethical responsibilities and security by design
Marlon advocates adopting a ‘security by design’ approach, ensuring cybersecurity is built into AI systems from the start, informed by rigorous ethical standards.
“Society expects academia to be trustworthy, ethical and professional. Our publications and advice are trustworthy, and we can always reveal our sources and verify our claims,” he notes. This approach not only upholds academic rigour but also helps combat the spread of false information from LLM hallucinations.
Transparency, data privacy, and accountability are key components of a proactive, ethical approach. Legal compliance alone is not enough to maintain universities’ commitment to academic integrity and the public good.
“Our ethical position is that we are not a business, we are here for the public interest, so the things we do should not conflict with these principles. Many things that are technically possible are also legally possible, but some fall under ‘lawful awful’ — legal, but not ethically sound.”
“At Erasmus University, we’ve developed a moresprudence framework to guide our decisions around AI use,” Marlon explains. “This isn’t just about meeting legal requirements — it’s about ensuring that the systems we use are secure and ethical.”
For instance, when Erasmus University considered using learning analytics on digital platforms, it wasn’t as simple as flipping a switch. Marlon, his team, and a group of stakeholders (students, teachers, researchers and policy makers) spent years discussing and iteratively testing how to balance the benefits of learning analytics with student privacy concerns.
“When people first suggested turning on the learning analytics feature, we had to ask: What’s the purpose of this data collection?” Marlon says.
“Is it to improve the quality of a course? Is it for early identification of students who require specific support or additional challenges? Is the data needed to answer your question available in the learning environment, and were events created that collect the relevant information? Are we asking relevant and ethically sound questions?
“We eventually created a privacy and ethics board to guide these decisions, ensuring that cybersecurity concerns were front and centre.”
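Those purpose questions translate naturally into a data-minimisation rule: collect an event only if it serves a declared, approved purpose. The sketch below is a hypothetical illustration of that idea, not the university’s actual learning analytics setup; the purposes, event names, and approval list are invented.

```python
# Hypothetical illustration of purpose-limited event collection in a learning
# platform; the purposes, event names, and approval list are invented.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Purposes a privacy and ethics board might approve, each tied to the events it needs.
APPROVED_PURPOSES = {
    "improve_course_quality": {"quiz_submitted", "video_completed"},
    "early_support_signals": {"assignment_missed", "login_gap_detected"},
}

@dataclass
class Event:
    student_pseudonym: str   # never a direct identifier
    event_type: str
    purpose: str
    timestamp: str

def record_event(student_pseudonym: str, event_type: str, purpose: str) -> Optional[Event]:
    """Store an event only if an approved purpose covers it; otherwise do not collect it."""
    if event_type not in APPROVED_PURPOSES.get(purpose, set()):
        return None  # not collected: no approved purpose needs this data
    return Event(student_pseudonym, event_type, purpose,
                 datetime.now(timezone.utc).isoformat())

print(record_event("stu-7f3a", "quiz_submitted", "improve_course_quality"))  # stored
print(record_event("stu-7f3a", "page_viewed", "improve_course_quality"))     # None
```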
Setting standards for broader ethical AI use
Developing and assessing these types of methodologies, frameworks, and tools is one area in which universities can play a pivotal role. This is not only about ensuring their own commitment to ethical and secure AI use but also supporting and equipping others to do so, within and beyond the education sector.
For instance, Marlon notes that framings like ‘data-driven work’ shouldn’t be accepted at face value, as inferences from available data often lead to questionable outcomes.
“Usually, data doesn’t ‘show’ much on its own. One must first ask a relevant question, decide which data would be relevant to answer it, and assess the available data’s quality.
“Then we need to understand the relation between the question and the data: have we identified causation or correlation? If it’s merely a correlation, we’re on a slippery slope that can introduce bias, because we’d be using irrelevant data to make decisions about a student’s work,” he adds.
“Thoroughly understanding the issues and developing appropriate approaches, methodologies and tools takes a lot of work, and is costly. But this is an area where universities have an advantage — we don’t have the commercial sector’s pressure for a fast time to market.”
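The correlation trap Marlon describes is easy to reproduce. The toy sketch below uses invented data (no real students) to show that, in small samples, a variable with no connection to grades can still correlate noticeably with them by chance — which is exactly why a correlation alone is a weak basis for decisions about a student’s work.

```python
# Toy illustration with invented data: in small samples, a variable that has
# nothing to do with grades can still show a sizeable correlation by chance.
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
strongest = 0.0
for _ in range(1000):                       # 1000 classes of 20 hypothetical students
    grades = [random.gauss(7.0, 1.0) for _ in range(20)]
    unrelated = [random.gauss(0.0, 1.0) for _ in range(20)]   # e.g. shoe size
    strongest = max(strongest, abs(pearson(grades, unrelated)))

print(f"strongest spurious correlation found: {strongest:.2f}")
# Typically prints a value above 0.5: a sizeable correlation that means nothing,
# so decisions about a student's work need relevant data and a clear question.
```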
Raising public critical awareness of AI
Increased public awareness of AI’s potential dangers has been shaped by recent controversies, such as the kinderopvangtoeslagaffaire (child benefits scandal) in the Netherlands. Flawed government algorithms, built on biased, unverified data and incomplete profiles, wrongly accused families of committing fraud, leading to financial hardship for thousands and tearing families apart.
“Now people are more aware of the potential effects of AI and algorithms on society. They don’t feel protected by their government anymore,” Marlon says.
This case, along with the rise of AI-driven marketing tools and strategies, has spurred greater public awareness that AI isn’t inherently neutral or benign in effect. “We use AI to understand the world and ourselves, but AI should be in service of humankind.”
“Students and consumers, especially, are taking a more critical position towards algorithms. While tools like ChatGPT can feel friendly and almost human, people also see the ugly side of algorithms when used badly. It’s a big advantage that people have this critical awareness of AI.”
Universities as AI innovators and ethical leaders
For Marlon, the future of AI in higher education and research will depend on universities’ ability to lead by example. This means embracing innovation while setting rigorous ethical standards, embedding cybersecurity and privacy by design into every AI deployment that uses personal data and/or strategic information, and fostering a culture of critical thinking around new technologies.
By taking a proactive, thoughtful stance on AI ethics and security, universities can bridge the gap between technological potential and societal impact. Through collaboration between technical, ethical, and legal disciplines, and by sharing best practices and tools, they can set the standard for responsible AI use — both within their institutions and across other sectors. This requires asking difficult questions across all levels — people, technology, and processes — and not getting lost in the lingo of each discipline.
In an era of rapid AI evolution, universities have both the opportunity and the responsibility to shape how AI integrates into our world — ensuring it serves as a force for positive change.
About Marlon Domingus
Marlon Domingus is the data protection officer and AI lead at Erasmus University Rotterdam. He is a philosopher in the world of privacy and AI, and speaks and writes about governance, ethics, moresprudence, risk management, privacy by design and privacy engineering.
This year, GÉANT again joins the European Cyber Security Month with the campaign ‘Your brain is the first line of defence’. Read articles from cyber security experts within our community, watch the videos, and download campaign resources at connect.geant.org/csm24