Abstract
In machine learning, the so-called curse of dimensionality, which affects many classification algorithms, denotes the drastic increase in computational complexity and classification error that accompanies data with a large number of dimensions. In this context, feature selection techniques attempt to reduce dimensionality by finding a more compact representation of instances, selecting the most informative features and removing redundant, irrelevant, and/or noisy ones. In this paper, we propose a filter-based feature selection method called ReliefF-MI for the multiple-instance learning scenario; it is based on the principles of the well-known ReliefF algorithm. Different extensions are designed and implemented, and their performance is evaluated in multiple-instance learning. ReliefF-MI is applied as a pre-processing step that is completely independent of the multi-instance classifier learning process and is therefore more efficient and generic than the wrapper approaches proposed in this area. Experimental results on five benchmark real-world data sets and 17 classification algorithms confirm the utility and efficiency of this method, both statistically and in terms of execution time.
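To illustrate the filter principle the abstract refers to, the sketch below implements classic single-instance ReliefF feature weighting (the algorithm whose principles ReliefF-MI builds on), not the paper's multiple-instance extensions. All names, the choice of NumPy, and the k-nearest-neighbour details are illustrative assumptions; the weights it produces can be used to rank and keep features before training any classifier, independently of the learner.

```python
# Minimal sketch of standard ReliefF feature weighting (not the paper's
# ReliefF-MI); a filter-style pre-processing step independent of the classifier.
import numpy as np

def relieff(X, y, n_samples=100, k=10, rng=None):
    """Return one relevance weight per feature; higher means more relevant.

    X : (n, d) array of instances, y : (n,) array of class labels.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Normalise features so per-feature differences lie in [0, 1].
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    Xn = (X - X.min(axis=0)) / span

    classes, counts = np.unique(y, return_counts=True)
    prior = dict(zip(classes, counts / n))

    m = min(n_samples, n)
    W = np.zeros(d)
    for idx in rng.choice(n, size=m, replace=False):
        r = Xn[idx]
        for c in classes:
            mask = (y == c)
            mask[idx] = False                       # exclude the sampled point itself
            cand = np.where(mask)[0]
            if cand.size == 0:
                continue
            # k nearest neighbours of r within class c (Manhattan distance).
            dist = np.abs(Xn[cand] - r).sum(axis=1)
            nn = cand[np.argsort(dist)[:k]]
            diff = np.abs(Xn[nn] - r).mean(axis=0)  # mean per-feature difference
            if c == y[idx]:                         # nearest hits: penalise separation
                W -= diff / m
            else:                                   # nearest misses: reward separation
                W += (prior[c] / (1.0 - prior[y[idx]])) * diff / m
    return W

# Filter-style usage: rank features by weight, keep the top ones, then train
# any multi-instance or single-instance classifier on the reduced data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)  # only feature 0 matters
    print(np.argsort(relieff(X, y, rng=0))[::-1])  # feature 0 should rank first
```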
Original language | English |
---|---|
Pages (from-to) | 210-218 |
Journal | Neurocomputing |
Volume | 75 |
Issue number | 1 |
DOIs | |
Status | Published - 2012 |