
Making AI Explainable: AU researchers awarded Villum Synergy grant

Artificial intelligence is increasingly used in decisions that affect people’s lives – from medical diagnoses to financial assessments. Yet even experts often struggle to explain how complex AI models reach their conclusions. This “black box problem” is one of the biggest barriers to responsible AI.

Professor of Computer Science Ira Assent and Associate Professor of Mathematics Rune Nyrup will collaborate on a new Villum Synergy grant. Photo by Søren Kjeldgaard and DFF.

A new project at Aarhus University aims to change that. With support from a DKK 4.5 million Villum Synergy grant from the Villum Foundation, researchers from computer science and mathematics will develop methods to evaluate whether AI explanations are not only technically sound, but also ethically meaningful.

“Today’s evaluation metrics for explainable AI are often ad hoc and don’t necessarily tell us whether explanations are ethically relevant. Our goal is to create new metrics and algorithms that are grounded in both computer science and ethical theory,” says Professor Ira Assent, Department of Computer Science.

The project, “REMAX: Rigorous Evaluation Methods for AI Explainability”, brings together experts from the Department of Computer Science and the Centre for Science Studies at the Department of Mathematics. By combining data-driven metrics with ethical criteria, the team will build a feedback loop in which technical methods and ethical analysis continuously inform each other.

“Ethics gives us principles for why explainability matters – for example, to ensure fairness or accountability. But these principles are often too abstract for developers to apply in practice. By bridging the two fields, we aim to provide tools that both developers and regulators can use,” says Associate Professor Rune Nyrup, Centre for Science Studies, Department of Mathematics.

A stronger foundation for responsible AI

The project will develop new evaluation frameworks based on the Argument Theory of Evidence, a philosophical method that has already influenced evidence-based policy. Applied to AI, it can help determine whether an explanation genuinely supports trust, fairness, and accountability.

“The outcome will be a stronger foundation for responsible AI – methods that enable developers, policymakers, and society at large to assess whether AI systems provide explanations that are both technically robust and ethically adequate,” says Ira Assent.

About the project

The project “REMAX: Rigorous Evaluation Methods for AI Explainability” is supported by a DKK 4.5 million Villum Synergy grant. It brings together:

  • Professor Ira Assent, Department of Computer Science
  • Associate Professor Rune Nyrup, Centre for Science Studies, Department of Mathematics

By building bridges between computer science and ethics, the team will develop practical, interdisciplinary solutions for one of AI’s most pressing challenges.

The Villum Synergy programme supports interdisciplinary collaborations at the intersection of computer science and other research fields. In 2025, the foundation awarded DKK 68 million to 13 projects. For more information, see the foundation's announcement.