Enhancing discrete choice models with representation learning

Brian Sifringer, Virginie Lurkin (Corresponding author), Alexandre Alahi

Research output: Contribution to journal › Article › Academic › peer-review

22 Citations (Scopus)

Abstract

In discrete choice modeling (DCM), model misspecifications may lead to limited predictability and biased parameter estimates. In this paper, we propose a new approach for estimating choice models in which we divide the systematic part of the utility specification into (i) a knowledge-driven part, and (ii) a data-driven one, which learns a new representation from available explanatory variables. Our formulation increases the predictive power of standard DCMs without sacrificing their interpretability. We show the effectiveness of our formulation by augmenting the utility specification of the Multinomial Logit (MNL) and the Nested Logit (NL) models with a new non-linear representation arising from a Neural Network (NN), leading to new choice models referred to as the Learning Multinomial Logit (L-MNL) and Learning Nested Logit (L-NL) models. Using multiple publicly available datasets based on revealed and stated preferences, we show that our models outperform the traditional ones, both in terms of predictive performance and accuracy in parameter estimation. All source code of the models is shared to promote open science.
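The core idea of the abstract — splitting the systematic utility into an interpretable linear-in-parameters part and a learned NN representation — can be sketched numerically. This is a minimal illustration, not the authors' implementation: the function and weight names (`lmnl_choice_probabilities`, `W1`, `b1`, `W2`, `b2`) and the single-hidden-layer architecture are assumptions made for the example.

```python
import numpy as np

def relu(z):
    # Simple nonlinearity for the data-driven part
    return np.maximum(0.0, z)

def softmax(v):
    # Numerically stable softmax over alternative utilities
    e = np.exp(v - v.max())
    return e / e.sum()

def lmnl_choice_probabilities(x, q, beta, W1, b1, W2, b2):
    """Sketch of the L-MNL idea: P(j) = softmax over V_j + r_j.

    x    : (J, K) knowledge-driven attributes for each of J alternatives
    q    : (M,)   extra explanatory variables fed to the data-driven part
    beta : (K,)   interpretable taste parameters (knowledge-driven part)
    W1, b1, W2, b2 : hypothetical NN weights yielding one learned
                     representation term r_j per alternative
    """
    # (i) knowledge-driven, linear-in-parameters utility
    v_knowledge = x @ beta            # shape (J,)
    # (ii) data-driven representation learned from q (one hidden layer)
    hidden = relu(W1 @ q + b1)        # shape (H,)
    r = W2 @ hidden + b2              # shape (J,)
    # Systematic utility is the sum of both parts; MNL choice probabilities
    return softmax(v_knowledge + r)
```

In estimation, `beta` retains its usual econometric interpretation while the NN weights absorb what the analyst did not specify; the paper's contribution is showing this separation preserves interpretability while improving fit.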

Original language: English
Pages (from-to): 236-261
Number of pages: 26
Journal: Transportation Research Part B: Methodological
Volume: 140
DOIs
Publication status: Published - Oct 2020

Keywords

  • Deep learning
  • Discrete choice models
  • Machine learning
  • Neural networks
  • Utility specification
