Explaining AI – but who needs an explanation?
Info about the event
Location: Online webinar
Explainable AI strives to provide insight into advanced artificial intelligence models in order to support transparency, accountability, and validation. We will look at state-of-the-art explanation methods for machine learning models, such as feature importance, visualizations for image data, counterfactual examples, and inherently interpretable models.
We will see some examples of the kind of information that we can obtain, and what we may be able to learn.
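As a taste of one of the methods mentioned above, here is a minimal sketch of permutation feature importance in Python using scikit-learn. The dataset and model chosen here are illustrative assumptions on our part, not part of the webinar programme:

```python
# Minimal sketch of permutation feature importance (one of the explanation
# methods mentioned above). Dataset and model are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test accuracy drops;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Print the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Even this simple technique already gives a stakeholder a ranked, model-agnostic answer to "which inputs mattered?", which is often the starting point for the kind of tailored explanations discussed below.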
Creating transparency about how an AI model works and which data it was trained on is one thing; the next step is to communicate an explanation that the recipient can actually understand. Here it is important to do your homework and map out who needs an explanation, when they need it, and at what technical level, so that the explanation makes sense and builds trust among the different stakeholders.
Program
09:00 Welcome by moderator, Lisa Lorentzen, The Alexandra Institute
09:05 What research is going on, and what potential the research indicates
by Ira Assent, Aarhus University
09:25 How do we go from research and technology to concrete initiatives, with a focus on the human-centric perspective, by Sofie Naylor, The Alexandra Institute
09:45 Questions and wrap-up (questions may also be taken along the way)
Please contact Lisa if you have questions about the webinar.
Communications Specialist
+45 93 52 17 64
lisa.lorentzen@alexandra.dk