The dipping phenomenon

M. Loog, R.P.W. Duin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

7 Citations (Scopus)
3 Downloads (Pure)

Abstract

One typically expects classifiers to demonstrate improved performance with increasing training set sizes, or at least to obtain their best performance in case one has an infinite number of training samples at one's disposal. We demonstrate, however, that there are classification problems on which particular classifiers attain their optimum performance at a training set size which is finite. Whether or not this phenomenon, which we term dipping, can be observed depends on the choice of classifier in relation to the underlying class distributions. We give some simple examples, for a few classifiers, that illustrate how the dipping phenomenon can occur. Additionally, we speculate about what generally is needed for dipping to emerge. What is clear is that this kind of learning curve behavior does not emerge due to mere chance and that the pattern recognition practitioner ought to take note of it.
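The abstract's claim can be illustrated in simulation. The sketch below is a toy 1D construction of our own (the specific distributions and the choice of the nearest mean classifier are assumptions for illustration, not examples taken from the paper): class B places most of its mass at x = 1, but a rare outlier at x = -100 drags its population mean to -9.1, so the asymptotic nearest-mean boundary misclassifies 90% of B, while small training sets usually miss the outlier and land on a better boundary.

import numpy as np

rng = np.random.default_rng(0)

def sample_class_a(n):
    # Class A: point mass at x = 0, so its sample mean is always 0.
    return np.zeros(n)

def sample_class_b(n):
    # Class B: x = 1 with probability 0.9, x = -100 with probability 0.1.
    # Its population mean is 0.9 * 1 + 0.1 * (-100) = -9.1.
    return np.where(rng.random(n) < 0.9, 1.0, -100.0)

def true_error(mean_a, mean_b):
    # Population error (equal priors) of the nearest mean classifier,
    # which assigns x to the class with the closer estimated mean.
    def predict_a(x):
        return abs(x - mean_a) <= abs(x - mean_b)
    err_a = 0.0 if predict_a(0.0) else 1.0
    err_b = 0.9 * predict_a(1.0) + 0.1 * predict_a(-100.0)
    return 0.5 * err_a + 0.5 * err_b

# Expected true error versus training set size per class, averaged
# over 10,000 random training sets: the curve rises with n.
for n in [1, 2, 5, 10, 25, 50, 100]:
    errs = [true_error(sample_class_a(n).mean(), sample_class_b(n).mean())
            for _ in range(10_000)]
    print(f"n = {n:3d}  expected error = {np.mean(errs):.3f}")

Averaging over many training sets, the printed expected error rises from about 0.09 at n = 1 toward the asymptotic 0.45: the optimum is attained at a finite (here, minimal) training set size, which is exactly the dipping behavior the paper describes.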
Original language: English
Title of host publication: Structural, Syntactic, and Statistical Pattern Recognition (Joint IAPR International Workshop, SSPR&SPR 2012, Hiroshima, Japan, November 7-9, 2012. Proceedings)
Editors: G. Gimel'farb, E. Hancock, A. Imiya, A. Kuijper, M. Kudo, S. Omachi, T. Windeatt, K. Yamada
Place of publication: Berlin
Publisher: Springer
Pages: 310-317
ISBN (Print): 978-3-642-34165-6
DOI: 10.1007/978-3-642-34166-3_34
Publication status: Published - 2012
Externally published: Yes
Event: Joint IAPR International Workshop SSPR&SPR 2012, Hiroshima, Japan
Duration: 7 Nov 2012 - 9 Nov 2012

Publication series

Name: Lecture Notes in Computer Science
Volume: 7626
ISSN (Print): 0302-9743

Conference

Conference: Joint IAPR International Workshop SSPR&SPR 2012
Period: 7/11/12 - 9/11/12

Cite this

Loog, M., & Duin, R. P. W. (2012). The dipping phenomenon. In G. Gimel'farb, E. Hancock, A. Imiya, A. Kuijper, M. Kudo, S. Omachi, T. Windeatt, ... K. Yamada (Eds.), Structural, Syntactic, and Statistical Pattern Recognition (Joint IAPR International Workshop, SSPR&SPR 2012, Hiroshima, Japan, November 7-9, 2012. Proceedings) (pp. 310-317). (Lecture Notes in Computer Science; Vol. 7626). Berlin: Springer. https://doi.org/10.1007/978-3-642-34166-3_34
@inproceedings{0907c58a97284e50a143f1320a82e62e,
title = "The dipping phenomenon",
abstract = "One typically expects classifiers to demonstrate improved performance with increasing training set sizes, or at least to obtain their best performance in case one has an infinite number of training samples at one's disposal. We demonstrate, however, that there are classification problems on which particular classifiers attain their optimum performance at a training set size which is finite. Whether or not this phenomenon, which we term dipping, can be observed depends on the choice of classifier in relation to the underlying class distributions. We give some simple examples, for a few classifiers, that illustrate how the dipping phenomenon can occur. Additionally, we speculate about what generally is needed for dipping to emerge. What is clear is that this kind of learning curve behavior does not emerge due to mere chance and that the pattern recognition practitioner ought to take note of it.",
author = "M. Loog and R.P.W. Duin",
year = "2012",
doi = "10.1007/978-3-642-34166-3_34",
language = "English",
isbn = "978-3-642-34165-6",
series = "Lecture Notes in Computer Science",
volume = "7626",
publisher = "Springer",
pages = "310--317",
editor = "G. Gimel'farb and E. Hancock and A. Imiya and A. Kuijper and M. Kudo and S. Omachi and T. Windeatt and K. Yamada",
booktitle = "Structural, Syntactic, and Statistical Pattern Recognition (Joint IAPR International Workshop, SSPR&SPR 2012, Hiroshima, Japan, November 7-9, 2012. Proceedings)",
address = "Berlin",
}
