Feature-level domain adaptation

Wouter M. Kouw, Laurens J.P. Van Der Maaten, Jesse H. Krijthe, Marco Loog

Research output: Contribution to journal › Article › Academic › peer-review

25 Citations (Scopus)

Abstract

Domain adaptation is the supervised learning setting in which the training and test data are sampled from different distributions: training data is sampled from a source domain, whilst test data is sampled from a target domain. This paper proposes and studies an approach, called feature-level domain adaptation (flda), that models the dependence between the two domains by means of a feature-level transfer model that is trained to describe the transfer from source to target domain. Subsequently, we train a domain-adapted classifier by minimizing the expected loss under the resulting transfer model. For linear classifiers and a large family of loss functions and transfer models, this expected loss can be computed or approximated analytically, and minimized efficiently. Our empirical evaluation of flda focuses on problems comprising binary and count data in which the transfer can be naturally modeled via a dropout distribution, which allows the classifier to adapt to differences in the marginal probability of features in the source and the target domain. Our experiments on several real-world problems show that flda performs on par with state-of-the-art domain-adaptation techniques.

Original language: English
Pages (from-to): 5943-5974
Number of pages: 32
Journal: Journal of Machine Learning Research
Volume: 17
Issue number: 171
Publication status: Published - 1 Sep 2016
Externally published: Yes

Keywords

  • Covariate shift
  • Domain adaptation
  • Risk minimization
  • Transfer learning
