Using predictive differences to design experiments for model selection

J. Vanlier, C.A. Tiemann, J. Timmer, P.A.J. Hilbers, N.A.W. Riel, van

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic

Abstract

Mathematical models are often used to formalize hypotheses on how a biochemical network operates. By selecting between competing models, different hypotheses can be compared. It is possible to estimate the evidence that data provides in support of one model over another. In a Bayesian framework, this is typically done by computing Bayes factors. When the data is insufficiently informative to make a clear distinction, more data is required. Although the Bayesian model selection apparatus is suitable for selecting models, predicting distributions of Bayes factors is computationally infeasible. In this work, we propose searching for the experiment that optimally enables model selection by looking at predictive differences. We do so by simulating the posterior predictive distribution over new potential experiments. Distributions for single predictions typically show a large degree of overlap. The relations between the different prediction uncertainties depend on both the data and the model. Differences in these inter-prediction relations between competing models can be probed and used. In this work, we quantify differences in predictive distributions by means of the Jensen-Shannon divergence between predictive distributions belonging to competing models. The proposed method is evaluated by comparing its outcome to the predicted change in Bayes factor upon simulating the experiment. Our simulations suggest that the Jensen-Shannon divergence between predictive densities is monotonically related to the increase in Bayes factor pointing toward the correct model. Therefore, it can be used to predict which experiments can effectively discriminate between models.
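The scoring idea in the abstract can be sketched in a few lines: approximate each model's posterior predictive density for a candidate experiment (e.g. via a shared histogram over predictive samples), then rank experiments by the Jensen-Shannon divergence between the two densities. This is a minimal illustration, not the authors' implementation; the predictive samples below are hypothetical Gaussians standing in for MCMC-based posterior predictive draws.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (in nats) between two discrete densities."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)  # mixture; m > 0 wherever p or q is positive

    def kl(a, b):
        mask = a > 0  # 0 * log(0) terms contribute nothing
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical posterior predictive samples for one candidate experiment
# under two competing models; a shared binning makes the histograms comparable.
rng = np.random.default_rng(0)
pred_model_a = rng.normal(0.0, 1.0, 5000)
pred_model_b = rng.normal(1.5, 1.0, 5000)
bins = np.linspace(-5.0, 7.0, 61)
p_hist, _ = np.histogram(pred_model_a, bins=bins)
q_hist, _ = np.histogram(pred_model_b, bins=bins)

# Larger divergence -> the experiment better discriminates the models.
score = js_divergence(p_hist, q_hist)
```

The divergence is bounded between 0 (identical predictive densities) and ln 2 (fully disjoint ones), so scores for different candidate experiments are directly comparable on a fixed scale.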
Language: English
Title of host publication: 5th Conference on Systems Biology of Mammalian Cells (SBMC 2014), 12-14 May 2014, Berlin, Germany
Pages: 50-51
State: Published - 2014
Event: 5th Conference on Systems Biology of Mammalian Cells (SBMC 2014) - Berlin, Germany
Duration: 12 May 2014 - 14 May 2014

Conference

Conference: 5th Conference on Systems Biology of Mammalian Cells (SBMC 2014), May 12-14, 2014, Berlin, Germany
Abbreviated title: SBMC 2014
Country: Germany
City: Berlin
Period: 12/05/14 - 14/05/14


Cite this

Vanlier, J., Tiemann, C. A., Timmer, J., Hilbers, P. A. J., & Riel, van, N. A. W. (2014). Using predictive differences to design experiments for model selection. In 5th Conference on Systems Biology of Mammalian Cells (SBMC2014), 12-14 May 2014, Berlin, Germany (pp. 50-51)