Norms of Explainable AI (EAISI Startup)

Project: First tier

Project details

Description

AI is often used to assist humans in making decisions, whether in medicine, transportation routing (e.g. changing car routes based on the latest traffic information), or data analytics. It is important that these decisions are transparent, and the GDPR makes strides toward requiring such transparency with the right to explanation. However, explanations can serve various functions or purposes: they can promote trust, enable understanding, or persuade others to act. To ensure that explanations of AI decisions serve socially responsible functions, ethical and epistemic reflection is necessary. This project will take an interdisciplinary approach to developing a taxonomy of the functions that explanations of AI systems can have in medical decisions, mobility, and highly technical systems, and it will identify ethical and epistemic standards for these explanations. For example, it is not enough that an explanation of an AI decision promotes trust; it must do so in an ethically responsible way. These standards can be used to improve decision support systems (DSS) and recommendation systems so that they better meet the needs of the people who use them.
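As a rough illustration, such a taxonomy could be represented as a mapping from explanation functions to the standards they must meet. This is a minimal sketch, not the project's actual taxonomy: the three functions (promoting trust, enabling understanding, persuading) come from the description above, while the specific standards and all names (`ExplanationFunction`, `Standard`, `standards_for`) are hypothetical placeholders.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ExplanationFunction(Enum):
    """Functions an explanation of an AI decision can serve (from the
    project description); the real taxonomy is what the project develops."""
    PROMOTE_TRUST = "promote trust"
    ENABLE_UNDERSTANDING = "enable understanding"
    PERSUADE = "persuade"

@dataclass(frozen=True)
class Standard:
    """A normative standard an explanation should meet."""
    description: str
    kind: str  # "ethical" or "epistemic"

# Hypothetical function-to-standards mapping, purely for illustration.
TAXONOMY = {
    ExplanationFunction.PROMOTE_TRUST: (
        Standard("trust must be earned, not manipulated", "ethical"),
        Standard("the explanation must track the system's actual reasoning", "epistemic"),
    ),
    ExplanationFunction.ENABLE_UNDERSTANDING: (
        Standard("the explanation must be intelligible to its audience", "epistemic"),
    ),
    ExplanationFunction.PERSUADE: (
        Standard("persuasion must respect the decision subject's autonomy", "ethical"),
    ),
}

def standards_for(function: ExplanationFunction,
                  kind: Optional[str] = None) -> list:
    """Return the standards for a given explanation function, optionally
    filtered to only 'ethical' or 'epistemic' standards."""
    standards = list(TAXONOMY.get(function, ()))
    if kind is not None:
        standards = [s for s in standards if s.kind == kind]
    return standards
```

Keeping functions and standards as separate types makes the point of the project concrete: the same explanation can serve one function well (e.g. promoting trust) while failing the ethical or epistemic standards attached to it.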
Status: Active
Effective start/end date: 1/01/21 to 31/12/24

Collaborating partners
