Understanding from Machine Learning Models

Research output: Contribution to journal › Journal article › Academic › peer review

Abstract

Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this article, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.

Contents
1 Understanding from Minimal and Complex Models
2 Algorithms, Explanatory Questions, and Understanding
3 Black Boxes
3.1 Implementation black boxes
3.2 Levels of implementation black boxes
4 The Black Boxes of Deep Neural Networks
4.1 Deep neural network structure
4.2 Deep neural network modelling process
4.3 Levels of deep neural network black boxes
5 Understanding, Explanation, and Link Uncertainty
5.1 Deep neural networks and how-possibility explanations
5.2 Deep neural networks and link uncertainty
5.3 Differences in understanding; differences in link uncertainty
6 Conclusion
Original language: English
Journal: British Journal for the Philosophy of Science
Volume: XX
Journal issue number: XX
DOIs
Status: E-publication ahead of print - 1 Aug 2019

Bibliographical note

axz035

