Understanding from Machine Learning Models

Research output: Contribution to journal › Article › Academic › peer-review


Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this article, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.

1 Understanding from Minimal and Complex Models
2 Algorithms, Explanatory Questions, and Understanding
3 Black Boxes
3.1 Implementation black boxes
3.2 Levels of implementation black boxes
4 The Black Boxes of Deep Neural Networks
4.1 Deep neural network structure
4.2 Deep neural network modelling process
4.3 Levels of deep neural network black boxes
5 Understanding, Explanation, and Link Uncertainty
5.1 Deep neural networks and how-possibility explanations
5.2 Deep neural networks and link uncertainty
5.3 Differences in understanding; differences in link uncertainty
6 Conclusion
Original language: English
Journal: British Journal for the Philosophy of Science
Issue number: XX
Publication status: E-pub ahead of print - 1 Aug 2019
