In Focus: Trust and identity

A look at advancements in privacy and trust in six new case studies by NGI_Trust

NGI_Trust - six new case studies advance privacy and trust

NGI_Trust, the Next Generation Internet (NGI) project fostering the development of a human-centric internet through privacy- and trust-enhancing technologies, has announced the release of a new batch of six case studies, showcasing the impact of some of the most innovative of the 57 projects it supported.

The projects outlined by the case studies cover five of the twelve thematic areas in the NGI_Trust portfolio, namely Better Privacy, Safer Browsing, Impact of AI, Effective Identity, and Securing the Internet of Things.

This second round of releases follows a first group of NGI_Trust case studies published in March 2021 – also featured in issue 36 of GÉANT’s CONNECT magazine – which included the projects PyGuard, Kelp.Digital (ex. Sensio), and CAP-A.

Across the case studies, all the projects highlighted that the support of NGI_Trust and the Next Generation Internet (NGI) was essential to the success of their initiatives: not only the funding received, but also the guidance, coaching, consulting, access to relevant events, training, webinars, connections, and much more.

Read the full case studies on the NGI_Trust website or discover more about each project below:

MidPrivacy – Identity Provenance as a first step towards personal data protection

When it comes to identity management and personal data protection, perhaps the most important piece of metadata is provenance information: where each piece of personal data came from and how it was acquired. That is why Evolveum’s open source identity management and governance platform midPoint is ideally positioned within personal data flows.

With support from NGI_Trust, Evolveum implemented a provenance prototype as part of the midPrivacy initiative, a long-term effort to extend midPoint with a complete set of personal data protection features.
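To illustrate the idea of identity provenance, the sketch below attaches provenance metadata to an identity attribute. This is a minimal Python sketch, not midPoint’s actual data model (midPoint is a Java platform with a far richer schema); the `Provenance` and `Attribute` classes and the origin names are invented for this example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Provenance:
    """Records where an identity attribute came from and when it was acquired."""
    origin: str          # hypothetical source system name, e.g. "hr-system"
    acquired_at: datetime

@dataclass
class Attribute:
    """An identity attribute annotated with its provenance metadata."""
    name: str
    value: str
    provenance: Provenance

# Example: an email address sourced from an (invented) HR feed
email = Attribute(
    name="email",
    value="alice@example.org",
    provenance=Provenance(origin="hr-system",
                          acquired_at=datetime(2021, 3, 1, tzinfo=timezone.utc)),
)

def sources(attrs):
    """List the distinct origins feeding a user's profile -- the kind of
    question a data-protection request ('where did this data come from?')
    needs answered."""
    return sorted({a.provenance.origin for a in attrs})

print(sources([email]))  # ['hr-system']
```

With provenance recorded alongside each attribute, answering a subject-access or deletion request becomes a query over metadata rather than guesswork.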

MedIAM – Open source pilot implementation of secure medical IoT devices

According to Cybercrime Magazine, “healthcare suffers 2-3X more cyberattacks than the average amount for other industries”, because the data has more value for hackers. Cyber regulations such as the EU Cybersecurity Act set mandatory requirements to protect sensitive information and systems. However, it remains difficult to extend those requirements to the connected devices people carry around as part of their treatment. If those medical devices are not properly secured, people may unknowingly be broadcasting their health status, as well as other sensitive personal data, everywhere they go, or even be directly harmed by hacked devices.

MedIAM provides an open source pilot implementation of how an equipment vendor should protect the functions and data of their medical IoT devices.

Better Internet Search – The ISIBUD Project

The ISIBUD project from Better Internet Search was run in collaboration with Edinburgh Napier University and completed in July 2020. It resulted in a demonstrator for a unique, ad-free, privacy-preserving search engine, tested with more than 100 live users.

The success of this project has led to the search engine being publicly released as an MVP, which the company continues to develop, partly supported by a second grant from NGI_Trust. A future release with blockchain – used to secure the token-based economy – is planned for summer 2021.

FAIR-AI: Designing human-centric AI to enable fairness assessments of texts

One of the pressing questions in contemporary AI is how to make sure that its implementation is fair. But what is it that makes an act fair or unfair, and is it possible to program an AI to detect and use ‘social responsibility’ as humans do?

Led by the University of Cambridge and based on early research carried out by Dr Ahmed Izzidien, the FAIR-AI project worked on the development of a ‘fairness vector’ allowing AI to read sentences and score their fairness.
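The concept of a ‘fairness vector’ can be illustrated with a toy sketch: embed a sentence in a vector space and take its cosine similarity with a direction that represents fairness. The three-dimensional embeddings and the fairness vector below are invented values for illustration only; FAIR-AI’s actual model and scoring method are not described in this article.

```python
import math

# Toy 3-dimensional "embeddings" (real systems derive hundreds of dimensions
# from a trained language model; these values are invented for illustration).
EMBEDDINGS = {
    "shares":  [0.9, 0.1, 0.2],
    "equally": [0.8, 0.0, 0.3],
    "steals":  [-0.7, 0.2, 0.1],
}

# Hypothetical "fairness vector": a direction in embedding space such that
# projecting a sentence onto it indicates how fair the sentence reads.
FAIRNESS_VECTOR = [1.0, 0.0, 0.0]

def embed(sentence):
    """Average the vectors of the known words (a simple bag-of-words embedding)."""
    vecs = [EMBEDDINGS[w] for w in sentence.lower().split() if w in EMBEDDINGS]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(3)]

def fairness_score(sentence):
    """Cosine similarity between the sentence embedding and the fairness vector,
    in [-1, 1]: higher means the sentence reads as fairer."""
    s = embed(sentence)
    dot = sum(a * b for a, b in zip(s, FAIRNESS_VECTOR))
    norm = (math.sqrt(sum(a * a for a in s))
            * math.sqrt(sum(b * b for b in FAIRNESS_VECTOR)))
    return dot / norm

print(fairness_score("He shares equally") > fairness_score("He steals"))  # True
```

The point of the sketch is the interface, not the numbers: once fairness is a direction in vector space, any sentence the model can embed can be scored.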

Deep-Learning / SensifAI: Smart-enhancing videos and images on-device while fully preserving privacy

There are many image-enhancement apps that improve the quality of images or edit them automatically through advanced artificial intelligence. These apps are based on deep learning, which is computationally heavy and requires powerful GPU servers, so users must send their images to the cloud for processing. This increases the risk of data being hacked, exposed, or abused, and potentially violates the privacy of millions of users.

SensifAI developed specific deep-learning architectures for the new NPU chipsets of most major smartphone manufacturers, together with an on-device smart-enhance app that allows users to enhance images and videos locally on their mobile phones, while guaranteeing control over their personal data.

CASPER 2.0 – An AI-based ghost protecting children from online threats

Privacy also means protection from threats that we are (still) unable to fight effectively. That is why vulnerable groups of internet users, including children and elderly people, need their privacy to be respected.

The main aim of the CASPER project was to develop an application-agnostic solution based on AI for filtering inappropriate content from online communications, to protect children and other vulnerable groups of users.
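As a rough illustration of what ‘application-agnostic’ means here, the toy filter below operates on plain text only, so it could sit in front of any chat app or browser. A simple phrase blocklist stands in for CASPER’s actual AI model, whose details are not given in this article; the phrases and threshold are invented for the example.

```python
# Invented example phrases a child-safety filter might flag; a real system
# would use a trained AI classifier rather than a fixed blocklist.
BLOCKLIST = {"send a photo", "home alone", "our secret"}

def is_inappropriate(message: str, threshold: int = 1) -> bool:
    """Flag a message containing at least `threshold` risky phrases.
    Application-agnostic: the caller passes plain text from any source
    (chat app, browser, email), so the filter needs no per-app integration."""
    text = message.lower()
    hits = sum(phrase in text for phrase in BLOCKLIST)
    return hits >= threshold

print(is_inappropriate("Keep this our secret and send a photo"))  # True
print(is_inappropriate("See you at football practice"))           # False
```

Because the filter only ever sees text, adding a new messaging platform means wiring up one text hook rather than re-training or re-integrating the model.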

Read the complete case studies developed by NGI_Trust and its family of projects here: https://wiki.geant.org/display/NGITrust/Case+Studies