Approximating data (Discussion Paper)

P.L. Davies

Research output: Contribution to journal › Article › Academic › peer-review

13 Citations (Scopus)
6 Downloads (Pure)


There are essentially two statistical paradigms, the Bayesian and the frequentist. Despite their obvious differences, the two approaches have certain points in common. In particular, both are density (or likelihood) based and neither has a concept of approximation. By a concept of approximation we mean some formal admission of the fact that statistical models are not true representations of the data. We argue that the relationship between the data and the model is a fundamental one which cannot be reduced to either diagnostics or model validation. We argue further that a concept of approximation must be formulated in a weak topology different from the strong topology of densities. For this reason there can be no density or likelihood based concept of approximation. The concept of approximation we suggest goes back to [Donoho, D. L. (1988). One-sided inference about functionals of a density. Annals of Statistics, 16, 1390–1420] and [Davies, P. L. (1995). Data features. Statistica Neerlandica, 49, 185–245] and requires that ‘typical’ data sets simulated under the model ‘look like’ the real data set. This idea is developed using examples from nonparametric regression.
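The core idea, that a model is an adequate approximation when 'typical' data sets simulated under it 'look like' the real data, can be sketched as follows. This is an illustrative Python sketch only, not the paper's actual procedure: the misspecified linear model, the longest-sign-run feature of the residuals, and all numerical settings are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def longest_sign_run(resid):
    """Length of the longest run of residuals with the same sign."""
    signs = np.sign(resid)
    best = run = 1
    for a, b in zip(signs[:-1], signs[1:]):
        if a == b:
            run += 1
            best = max(best, run)
        else:
            run = 1
    return best

# "Real" data: a sine signal observed with noise, but (mis)modelled
# below as a straight line.
n = 100
x = np.linspace(0.0, 1.0, n)
y = np.sin(4 * np.pi * x) + rng.normal(0.0, 0.3, n)

# Fit the (inadequate) linear model and estimate the noise scale.
coef = np.polyfit(x, y, 1)
resid = y - np.polyval(coef, x)
sigma = resid.std(ddof=2)

# Feature of the real data under the fitted model.
real_feature = longest_sign_run(resid)

# Simulate 'typical' data sets under the fitted model and record
# the same feature for each.
sim_features = []
for _ in range(999):
    y_sim = np.polyval(coef, x) + rng.normal(0.0, sigma, n)
    r_sim = y_sim - np.polyval(np.polyfit(x, y_sim, 1), x)
    sim_features.append(longest_sign_run(r_sim))

# The model is an adequate approximation (for this feature) only if
# the real feature is not extreme among the simulated ones.
p = np.mean(np.array(sim_features) >= real_feature)
print(real_feature, p)
```

Because the straight line ignores the sine oscillation, the real residuals contain long runs of one sign, while residuals of data simulated under the fitted line do not; the simulated data sets therefore do not 'look like' the real one, and the linear model is rejected as an approximation. The choice of feature (here, sign runs) is what makes the notion of 'looking alike' precise.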
Original language: English
Pages (from-to): 191-211
Journal: Journal of the Korean Statistical Society
Issue number: 3
Publication status: Published - 2008


