Learning from samples using coherent lower previsions

Research output: Thesis › Phd Thesis 4 (Research NOT TU/e / Graduation NOT TU/e)

Abstract

This thesis's main subject is deriving, proposing, and studying predictive and parametric inference models based on the theory of coherent lower previsions. One important side subject also appears: obtaining and discussing extreme lower probabilities.

In the chapter ‘Modeling uncertainty’, I give an introductory overview of the theory of coherent lower previsions, also called the theory of imprecise probabilities, and its underlying ideas. This theory allows us to give a more expressive, and more cautious, description of uncertainty. The overview is original in the sense that, more than other introductions, it is based on the intuitive theory of coherent sets of desirable gambles.

In the chapter ‘Extreme lower probabilities’, I show how to obtain the most extreme forms of uncertainty that can be modeled using lower probabilities. Every other state of uncertainty describable by lower probabilities can be formulated in terms of these extreme ones. The results I obtain and extensively discuss in this area are currently of mostly theoretical importance.

The chapter ‘Inference models’ treats learning from samples from a finite, categorical space. My most basic assumption about the sampling process is that it is exchangeable, for which I give a novel definition in terms of desirable gambles. My investigation of the consequences of this assumption leads to some important representation theorems: uncertainty about finite and infinite sample sequences can be modeled entirely in terms of category counts (frequencies). I build on this to give a derivation from first principles of two popular inference models for categorical data, the predictive imprecise Dirichlet-multinomial model and the parametric imprecise Dirichlet model, and I apply these models to game theory and to learning Markov chains.

In the last chapter, ‘Inference models for exponential families’, I enlarge the scope to exponential-family sampling models; examples are normal sampling and Poisson sampling. I first thoroughly investigate exponential families and the related conjugate parametric and predictive previsions used in classical Bayesian inference models based on conjugate updating. These previsions serve as a basis for the new imprecise-probabilistic inference models I propose. Compared with the classical Bayesian approach, mine allows us to be much more cautious when expressing what we know about the sampling model; this caution is reflected in the behavior (conclusions drawn, predictions made, decisions taken) based on these models. Lastly, I show how the proposed inference models can be used for classification with the naive credal classifier.
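The theory of coherent sets of desirable gambles mentioned in the abstract has a direct computational counterpart: the natural extension, which turns a finite set of gambles judged desirable into a lower prevision via linear programming. The sketch below illustrates this on a toy three-state space; the particular gambles, the function name natural_extension, and the use of scipy are illustrative choices, not taken from the thesis.

```python
# Sketch: natural extension of a finite set of desirable gambles on a
# finite possibility space. The lower prevision of a gamble f is the
# largest price mu such that f - mu dominates some nonnegative
# combination of the gambles already judged desirable.
import numpy as np
from scipy.optimize import linprog

def natural_extension(f, desirable):
    """Lower prevision of gamble f (array over states), given a list of
    gambles judged desirable: maximize mu subject to
    f(w) >= mu + sum_k lam_k * g_k(w) for every state w, lam >= 0."""
    G = np.array(desirable)              # shape (K, n_states)
    K, n = G.shape
    c = np.zeros(K + 1)
    c[0] = -1.0                          # variables [mu, lam]; maximize mu
    A = np.hstack([np.ones((n, 1)), G.T])  # one constraint per state
    res = linprog(c, A_ub=A, b_ub=np.asarray(f, dtype=float),
                  bounds=[(None, None)] + [(0, None)] * K)
    return -res.fun

# Three states; two gambles are judged desirable, a third gets priced.
g1 = [1.0, -0.5, -0.5]
g2 = [-0.5, 1.0, -0.5]
f = [1.0, 1.0, -1.0]
print(natural_extension(f, [g1, g2]))    # lower prevision of f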
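As a concrete illustration of the predictive imprecise Dirichlet-multinomial model discussed in the abstract, the following sketch computes the standard IDM predictive bounds: after observing n_x outcomes of category x among N in total, the probability that the next outcome is x lies between n_x/(N+s) and (n_x+s)/(N+s). The hyperparameter value s=2 and the sample data are illustrative assumptions.

```python
# Sketch: IDM predictive lower/upper probabilities for the next outcome.
from collections import Counter

def idm_predictive_bounds(observations, categories, s=2.0):
    """Return {category: (lower, upper)} for the next observation under
    the imprecise Dirichlet-multinomial model with hyperparameter s."""
    counts = Counter(observations)
    n = len(observations)
    return {x: (counts[x] / (n + s), (counts[x] + s) / (n + s))
            for x in categories}

# Example: 10 draws from a three-category space.
sample = ["a", "a", "b", "a", "c", "b", "a", "a", "c", "b"]
for cat, (lo, hi) in idm_predictive_bounds(sample, ["a", "b", "c"]).items():
    print(f"P(next = {cat}) in [{lo:.3f}, {hi:.3f}]")
```

Note that the width of each interval shrinks as N grows, so the imprecision reflects how little data has been seen, which is the cautious behavior the abstract refers to.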
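For the exponential-family chapter, here is a minimal sketch of how imprecision can enter conjugate updating, assuming Poisson sampling with conjugate Gamma priors. A precise Gamma prior with shape s*y0 and rate s has mean y0 and gives posterior mean (s*y0 + sum(data))/(s + n) for the Poisson rate; letting the prior mean y0 range over an interval yields lower and upper posterior expectations. The parametrization, the interval endpoints, and the function name are illustrative assumptions, not the thesis's own construction.

```python
# Sketch: bounds on the posterior expectation of a Poisson rate when the
# conjugate Gamma prior's mean y0 ranges over [y_lo, y_hi], s fixed.
def poisson_posterior_rate_bounds(data, y_lo, y_hi, s=2.0):
    n, total = len(data), sum(data)
    post_mean = lambda y0: (s * y0 + total) / (s + n)
    # The posterior mean is increasing in y0, so the extremes are attained
    # at the interval endpoints.
    return post_mean(y_lo), post_mean(y_hi)

counts = [3, 1, 4, 1, 5, 9, 2, 6]
lo, hi = poisson_posterior_rate_bounds(counts, y_lo=0.5, y_hi=10.0)
print(f"Posterior expectation of the rate lies in [{lo:.3f}, {hi:.3f}]")
```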
Original language: English
Qualification: Doctor of Philosophy
Awarding Institution
  • Ghent University
Supervisors/Advisors
  • de Cooman, Gert, Promotor, External person
  • Aeyels, Dirk, Copromotor, External person
Award date: 23 Sep 2009
Place of Publication: Ghent, Belgium
Print ISBNs: 9789085782490
Publication status: Published - Jan 2009
Externally published: Yes

Keywords

  • imprecise Dirichlet model
  • exponential family
  • inference
  • desirable gambles
  • extreme points
  • coherence
  • exchangeability
  • imprecise probability
  • sample
  • updating
  • lower prevision
  • representation insensitivity
  • learning
