This paper considers how mathematical and computational models are used in network neuroscience to deliver mechanistic explanations. Two case studies are considered: recent work on klinotaxis by Caenorhabditis elegans, and a long-standing research effort on the network basis of schizophrenia in humans. These case studies illustrate the various ways in which network, simulation, and dynamical models contribute to the aim of representing and understanding network mechanisms in the brain, and thus of delivering mechanistic explanations. After outlining this mechanistic construal of network neuroscience, two concerns are addressed. In response to the concern that functional network models are nonexplanatory, it is argued that functional network models are in fact explanatory mechanism sketches. In response to the concern that models which emphasize a network’s organization over its composition do not explain mechanistically, it is argued that this emphasis is both appropriate and consistent with the principles of mechanistic explanation. What emerges is an improved understanding of the ways in which mathematical and computational models are deployed in network neuroscience, as well as an improved conception of mechanistic explanation in general.
Keywords: mechanistic explanation