Improving parameter learning of Bayesian nets from incomplete data

Giorgio Corani, C.P. de Campos

Research output: Working paper › Academic


Abstract

This paper addresses the estimation of the parameters of a Bayesian network from incomplete data. The task is usually tackled by running the Expectation-Maximization (EM) algorithm several times in order to obtain a high log-likelihood estimate. We argue that choosing the maximum log-likelihood estimate (as well as the maximum penalized log-likelihood and the maximum a posteriori estimate) has severe drawbacks, as it is affected by both overfitting and model uncertainty. Two ideas are discussed to overcome these issues: a maximum entropy approach and a Bayesian model averaging approach. Both ideas can easily be applied on top of EM, and the entropy approach can also be implemented in a more sophisticated way, through a dedicated non-linear solver. An extensive set of experiments shows that these ideas produce significantly better estimates and inferences than the traditional and widely used maximum (penalized) log-likelihood and maximum a posteriori estimates. In particular, if EM is adopted as the optimization engine, the model averaging approach performs best; its performance is matched by the entropy approach when implemented using the non-linear solver. The results suggest that these ideas are immediately applicable (easy to implement and to integrate into currently available inference engines) and that they constitute a better way to learn Bayesian network parameters.
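The abstract describes the procedure only at a high level. As a concrete illustration, the sketch below runs EM from several random starting points on a toy two-node network (X -> Y) with missing values of X, and then compares three ways of producing a final estimate: keeping the maximum penalized log-likelihood run, averaging the runs with likelihood-based weights (a crude stand-in for the model averaging idea), and picking the maximum-entropy estimate among near-optimal runs (a crude stand-in for the entropy idea). All names, the weighting scheme, and the tolerance are illustrative assumptions, not the authors' actual method:

# Minimal sketch, not the authors' exact procedure: EM with random restarts
# for a tiny net X -> Y where some values of X are missing, followed by
# three ways of turning the restarts into one estimate. The likelihood
# weighting and the entropy tie-break below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def loglik(theta, data):
    # theta = (p(X=1), p(Y=1|X=0), p(Y=1|X=1)); x may be np.nan (missing)
    tx, t0, t1 = theta
    ll = 0.0
    for x, y in data:
        py0, py1 = (t0 if y else 1 - t0), (t1 if y else 1 - t1)
        if np.isnan(x):                       # marginalize the missing X
            ll += np.log((1 - tx) * py0 + tx * py1)
        else:
            ll += np.log((tx * py1) if x else ((1 - tx) * py0))
    return ll

def em(data, theta, prior=1.0, iters=200):
    # EM with Dirichlet pseudo-counts `prior` (penalized likelihood)
    tx, t0, t1 = theta
    n = len(data)
    for _ in range(iters):
        n1 = c0 = c1 = 0.0                    # expected counts
        for x, y in data:
            if np.isnan(x):                   # E-step: p(X=1 | y, theta)
                a = tx * (t1 if y else 1 - t1)
                b = (1 - tx) * (t0 if y else 1 - t0)
                w = a / (a + b)
            else:
                w = float(x)
            n1 += w
            c1 += w * y
            c0 += (1 - w) * y
        tx = (n1 + prior) / (n + 2 * prior)   # M-step with pseudo-counts
        t0 = (c0 + prior) / (n - n1 + 2 * prior)
        t1 = (c1 + prior) / (n1 + 2 * prior)
    return np.array([tx, t0, t1])

# Toy incomplete data: X -> Y, with X missing completely at random.
xs = (rng.random(300) < 0.7).astype(int)
ys = (rng.random(300) < np.where(xs == 1, 0.9, 0.2)).astype(int)
data = [(np.nan if rng.random() < 0.5 else x, y) for x, y in zip(xs, ys)]

runs = [em(data, rng.random(3)) for _ in range(20)]   # random restarts
lls = np.array([loglik(t, data) for t in runs])

best = runs[int(np.argmax(lls))]                      # max penalized LL
w = np.exp(lls - lls.max()); w /= w.sum()             # likelihood weights
bma = sum(wi * t for wi, t in zip(w, runs))           # model averaging

def entropy(t):                                       # entropy of p(X, Y)
    p = np.array([(1 - t[0]) * (1 - t[1]), (1 - t[0]) * t[1],
                  t[0] * (1 - t[2]), t[0] * t[2]])
    return -(p * np.log(p)).sum()

near = [t for t, l in zip(runs, lls) if l >= lls.max() - 1e-3]
maxent = max(near, key=entropy)                       # max-entropy pick

print("max-LL :", best)
print("BMA    :", bma)
print("max-ent:", maxent)

Weighting restarts by their normalized likelihood is only one of several ways to combine EM runs; it is meant to gesture at, not reproduce, the averaging and entropy schemes studied in the paper.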
Original language: English
Number of pages: 13
Publication status: Published - 2011
Externally published: Yes

Bibliographical note

CoRR arXiv e-print arXiv:1110.3239
