Minimax bounds for active learning

R.M. Castro, R. Nowak

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    13 Citations (Scopus)

    Abstract

    This paper aims to shed light on achievable limits in active learning. Using minimax analysis techniques, we study the achievable rates of classification error convergence for broad classes of distributions characterized by decision boundary regularity and noise conditions. The results clearly indicate the conditions under which one can expect significant gains through active learning. Furthermore, we show that the derived learning rates are tight for "boundary fragment" classes in d-dimensional feature spaces when the feature marginal density is bounded from above and below.
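    The "significant gains" referred to in the abstract can be illustrated with a toy example that is not from the paper itself: estimating a one-dimensional decision threshold (the simplest boundary-fragment class) from noiseless labels. A passive learner queries labels at random points, so its uncertainty interval shrinks roughly like 1/n; an active learner bisects, so the interval shrinks like 2^-n. The function names and constants below are illustrative choices, not anything defined by Castro and Nowak.

    ```python
    import random

    # Toy setting: labels are y = 1{x >= t_star} for an unknown
    # threshold t_star in [0, 1], observed without noise.

    def passive_estimate(t_star, n, rng):
        """Query n uniformly random points; return the midpoint of the
        tightest interval known to contain the threshold."""
        lo, hi = 0.0, 1.0
        for _ in range(n):
            x = rng.random()
            if x >= t_star:
                hi = min(hi, x)   # label 1: threshold is at or below x
            else:
                lo = max(lo, x)   # label 0: threshold is above x
        return (lo + hi) / 2

    def active_estimate(t_star, n, rng):
        """Actively choose each query at the midpoint of the current
        uncertainty interval (binary search): each label halves it."""
        lo, hi = 0.0, 1.0
        for _ in range(n):
            x = (lo + hi) / 2
            if x >= t_star:
                hi = x
            else:
                lo = x
        return (lo + hi) / 2

    rng = random.Random(0)
    t_star, n = 0.3141, 30
    print(abs(passive_estimate(t_star, n, rng) - t_star))  # shrinks roughly like 1/n
    print(abs(active_estimate(t_star, n, rng) - t_star))   # shrinks like 2**-n
    ```

    With a budget of 30 labels, the active estimate is accurate to about 2^-31, while the passive estimate is only accurate to roughly 1/30 — the exponential-versus-polynomial gap that, in the noiseless regime, motivates the minimax analysis of when such gains survive label noise.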
    Original language: English
    Title of host publication: Proceedings of the 20th Annual Conference on Learning Theory (COLT 2007), 13-15 June 2007, San Diego CA, USA
    Editors: N.H. Bshouty, C. Gentile
    Place of Publication: Berlin
    Publisher: Springer
    Pages: 5-19
    ISBN (Print): 978-3-540-72925-9
    DOIs
    Publication status: Published - 2007

    Publication series

    Name: Lecture Notes in Computer Science
    Volume: 4539
    ISSN (Print): 0302-9743
