Feature selection for Bayesian network classifiers using the MDL-FS score

M.M. Drugan, M.A. Wiering

Research output: Contribution to journal › Article › Academic › peer-review

26 Citations (Scopus)


When constructing a Bayesian network classifier from data, the more or less redundant features included in a dataset may bias the classifier and, as a consequence, may result in relatively poor classification accuracy. In this paper, we study the problem of selecting appropriate subsets of features for such classifiers. To this end, we propose a new definition of the concept of redundancy in noisy data. For comparing alternative classifiers, we use the Minimum Description Length for Feature Selection (MDL-FS) function that we introduced previously. Our function differs from the well-known MDL function in that it captures a classifier’s conditional log-likelihood. We show that the MDL-FS function serves to identify redundancy at different levels and is able to eliminate redundant features from different types of classifiers. We support our theoretical findings by comparing the feature-selection behaviours of the various functions in a practical setting. Our results indicate that the MDL-FS function is better suited to the task of feature selection than MDL, as it often yields classifiers of equal or better performance with significantly fewer attributes.
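The distinction the abstract draws can be made concrete with a small sketch: an MDL-style score penalizes model complexity and rewards the *joint* log-likelihood of features and class, whereas an MDL-FS-style score instead rewards the *conditional* log-likelihood of the class given the features. The toy data, naive Bayes model, Laplace smoothing, and parameter count below are illustrative assumptions for exposition; they are not the paper's own experimental setup or its exact definition of MDL-FS.

```python
import math
from collections import Counter

# Hypothetical toy dataset of (feature1, feature2, class) triples.
data = [(0, 0, 0), (0, 1, 0), (1, 1, 1), (1, 0, 1), (0, 0, 0), (1, 1, 1)]

def nb_scores(rows, smooth=1.0):
    """MDL-style and MDL-FS-style scores of a naive Bayes model with
    Laplace smoothing, fitted and scored on the same rows (a sketch)."""
    n = len(rows)
    classes = sorted({r[-1] for r in rows})
    n_feats = len(rows[0]) - 1
    cls_cnt = Counter(r[-1] for r in rows)

    def p_c(c):  # smoothed class prior P(c)
        return (cls_cnt[c] + smooth) / (n + smooth * len(classes))

    def p_f(i, v, c):  # smoothed P(feature_i = v | c), binary features
        num = sum(1 for r in rows if r[-1] == c and r[i] == v)
        return (num + smooth) / (cls_cnt[c] + smooth * 2)

    ll = cll = 0.0
    for r in rows:
        # log P(c, f) for every candidate class value c.
        joint = {c: math.log(p_c(c)) + sum(math.log(p_f(i, r[i], c))
                                           for i in range(n_feats))
                 for c in classes}
        ll += joint[r[-1]]  # joint log-likelihood of (features, class)
        # conditional log-likelihood: log P(c | f) via normalization
        cll += joint[r[-1]] - math.log(sum(math.exp(v)
                                           for v in joint.values()))

    # MDL-style penalty: (log2 n / 2) * number of free parameters.
    n_params = (len(classes) - 1) + n_feats * len(classes)
    penalty = 0.5 * math.log2(n) * n_params
    # Scores in bits: penalty minus (conditional) log-likelihood.
    return penalty - ll / math.log(2), penalty - cll / math.log(2)

mdl, mdl_fs = nb_scores(data)
print(f"MDL-style score:    {mdl:.2f}")
print(f"MDL-FS-style score: {mdl_fs:.2f}")
```

Because P(c, f) ≤ P(c | f) for every row, the joint log-likelihood never exceeds the conditional one, so the MDL-FS-style score here is never larger than the MDL-style score; the two functions can therefore rank candidate feature subsets differently, which is the behaviour the paper exploits.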
Original language: English
Pages (from-to): 695-717
Journal: International Journal of Approximate Reasoning
Issue number: 6
Publication status: Published - Jul 2010


  • Feature subset selection
  • Minimum Description Length
  • Selective Bayesian classifiers
  • Tree augmented networks


