Abstract
One typically expects classifiers to demonstrate improved performance with increasing training set sizes, or at least to attain their best performance when an infinite number of training samples is at one's disposal. We demonstrate, however, that there are classification problems on which particular classifiers attain their optimum performance at a finite training set size. Whether or not this phenomenon, which we term dipping, can be observed depends on the choice of classifier in relation to the underlying class distributions. We give some simple examples, for a few classifiers, that illustrate how the dipping phenomenon can occur. Additionally, we speculate about what is generally needed for dipping to emerge. What is clear is that this kind of learning curve behavior does not arise by mere chance and that the pattern recognition practitioner ought to take note of it.
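The abstract describes dipping in terms of learning curves: the error rate as a function of training set size, averaged over training draws. As a concrete illustration of the kind of experiment involved, the sketch below estimates such a curve for a nearest mean classifier on a hypothetical two-class mixture problem in which the class means are a poor summary of the classes. The distributions, the classifier choice, and the function names (`sample_class`, `nearest_mean_error`) are illustrative assumptions rather than the constructions from the paper, and the resulting curve is not guaranteed to dip; the code only shows the protocol one would use to look for the effect.

```python
# Illustrative learning-curve experiment (not the paper's construction):
# estimate the test error of a nearest mean classifier as a function of
# the training set size, averaged over repeated training draws.
import numpy as np

rng = np.random.default_rng(0)

def sample_class(label, n):
    """Draw n samples from an assumed 1-D two-component mixture per class."""
    # Each class is a Gaussian mixture whose minority component sits on the
    # "wrong" side, so the class mean is a poor summary of the class.
    if label == 0:
        centers, weights = np.array([-1.0, 4.0]), np.array([0.9, 0.1])
    else:
        centers, weights = np.array([1.0, -4.0]), np.array([0.9, 0.1])
    comp = rng.choice(2, size=n, p=weights)
    return centers[comp] + 0.3 * rng.standard_normal(n)

def nearest_mean_error(n_train, n_test=10_000):
    """Train a nearest mean classifier on n_train samples per class; return test error."""
    m0 = sample_class(0, n_train).mean()
    m1 = sample_class(1, n_train).mean()
    x0, x1 = sample_class(0, n_test), sample_class(1, n_test)
    err0 = np.abs(x0 - m0) > np.abs(x0 - m1)  # class 0 points assigned to class 1
    err1 = np.abs(x1 - m1) > np.abs(x1 - m0)  # class 1 points assigned to class 0
    return 0.5 * (err0.mean() + err1.mean())

# Average over repetitions to obtain a smooth learning curve.
for n in [2, 4, 8, 16, 32, 64, 128, 256]:
    errs = [nearest_mean_error(n) for _ in range(200)]
    print(f"n_train per class = {n:4d}   mean test error = {np.mean(errs):.3f}")
```

Averaging over many training draws is essential here: a single draw would conflate any genuine non-monotonic behavior of the expected error with ordinary sampling variance.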
Original language | English |
---|---|
Title of host publication | Structural, Syntactic, and Statistical Pattern Recognition (Joint IAPR International Workshop, SSPR&SPR 2012, Hiroshima, Japan, November 7-9, 2012. Proceedings) |
Editors | G. Gimel'farb, E. Hancock, A. Imiya, A. Kuijper, M. Kudo, S. Omachi, T. Windeatt, K. Yamada |
Place of Publication | Berlin |
Publisher | Springer |
Pages | 310-317 |
ISBN (Print) | 978-3-642-34165-6 |
Publication status | Published - 2012 |
Externally published | Yes |
Event | Joint IAPR International Workshop SSPR+SPR 2012, Hiroshima, Japan |
Duration | 7 Nov 2012 → 9 Nov 2012 |
Publication series
Name | Lecture Notes in Computer Science |
---|---|
Volume | 7626 |
ISSN (Print) | 0302-9743 |
Conference
Conference | Joint IAPR International Workshop SSPR+SPR, Hiroshima, Japan, November 7-9, 2012 |
---|---|
Country/Territory | Japan |
City | Hiroshima |
Period | 7/11/12 → 9/11/12 |
Other | Joint IAPR International Workshop SSPR+SPR 2012 |