What does secure and ethical use of AI look like in practice in higher education and research?
Marlon Domingus, data protection officer and AI lead at Erasmus University Rotterdam, believes that a proactive and critical approach is key to ensuring the responsible use of AI. Invoking physicist Richard Feynman’s first principle, that you must not fool yourself, he advises:
“Embrace AI as much as you can, but make sure you don’t fool yourself or others. AI is a good assistant, but the real understanding and critical thinking have to come from you.”
For educators and researchers looking to integrate AI into their work, Marlon recommends practical steps to protect data, systems, and core ethical principles:
1. Verify AI-generated insights
Don’t blindly trust AI outputs: always verify sources and context. “In science, it is standard practice, in principle, to challenge every truth claim in order to test its validity. With the growing societal impact of AI, this should also become common-sense practice in daily life.”
Marlon uses the AI tool Perplexity because it gives sources for its answers. “But even then, I double-check the facts, because I am always accountable for the validity of my statements.”
2. Think critically
AI can help spark creativity and pull people out of their thinking ‘bubbles’ by presenting different perspectives, but critical thinking is vital to distinguish value from fluff.
“AI outputs can look dazzling, but you only benefit from them if you understand the domain and the topic well enough to distinguish the nonsense from the genuinely valid insights.”
3. Configure AI tools with security and privacy in mind
Before using AI tools, ensure you fully understand how they work, what data they collect, and where that data is stored.
Set them up thoughtfully, and don’t assume that automated decisions or default settings will ensure compliance with security and data privacy standards: “Don’t outsource your critical thinking to a black box.”
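As a loose illustration of this last point, here is a minimal sketch (not from Marlon’s interview) of how a team might write down its privacy requirements and check an AI tool’s exported configuration against them before relying on the defaults. The file name and setting keys are hypothetical and will differ per tool.

```python
# Illustrative sketch only: the file name and setting keys are hypothetical
# and will differ per AI tool. The point is to review privacy-relevant
# settings explicitly instead of trusting the defaults.
import json

# Settings an institution might require before approving an AI tool.
REQUIRED_SETTINGS = {
    "use_inputs_for_training": False,  # prompts must not be used to train models
    "data_retention_days": 0,          # no long-term storage of prompts
    "data_region": "EU",               # keep data in the expected jurisdiction
}


def check_tool_settings(path: str) -> list[str]:
    """Compare a tool's exported settings file against the required values."""
    with open(path, encoding="utf-8") as f:
        settings = json.load(f)
    issues = []
    for key, required in REQUIRED_SETTINGS.items():
        actual = settings.get(key)
        if actual != required:
            issues.append(f"{key}: expected {required!r}, found {actual!r}")
    return issues


if __name__ == "__main__":
    problems = check_tool_settings("ai_tool_settings.json")
    if problems:
        print("Review these settings before using the tool:")
        for problem in problems:
            print(" -", problem)
    else:
        print("Settings match the documented privacy requirements.")
```

The specific keys will vary from tool to tool; the value lies in making the required settings explicit and reviewable rather than accepting whatever the vendor ships as the default.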
Read more about Marlon’s perspectives on navigating AI ethics and data privacy here.
About Marlon Domingus
Marlon Domingus is the data protection officer and AI lead at Erasmus University Rotterdam. He is a philosopher in the world of privacy and AI, and speaks and writes about governance, ethics, moresprudence, risk management, privacy by design and privacy engineering.
This year, GÉANT again joins the European Cyber Security Month with the campaign ‘Your brain is the first line of defence’. Read articles from cyber security experts within our community, watch the videos, and download campaign resources at connect.geant.org/csm24.