Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer review

Abstract

This paper analyses how two foundational principles of medical ethics, the trusted doctor and patient autonomy, can be undermined by the use of machine learning (ML) algorithms, and addresses the legal significance of this. The paper can serve as a guide for health care providers and other stakeholders on how to anticipate, and in some cases mitigate, ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map for what needs to be done to achieve an acceptable level of explainability in an ML algorithm when it is used in a healthcare context.

Original language: English
Title: XAILA 2019 EXplainable AI in Law 2019: Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) co-located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019)
Editors: Grzegorz J. Nalepa
Publisher: CEUR-WS.org
Number of pages: 12
Status: Published - 2019
Event: 2nd EXplainable AI in Law Workshop, XAILA 2019 - Madrid, Spain
Duration: 11 Dec 2019 → …

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR-WS.org
Volume: 2681
ISSN (Print): 1613-0073

Conference

Conference: 2nd EXplainable AI in Law Workshop, XAILA 2019
Country/Territory: Spain
City: Madrid
Period: 11/12/19 → …

