Pollard at Beijing 2010

A very short course on Le Cam theory

by

David Pollard (Yale University)

as part of the

Statistics and Information Technology Summer School
Beijing, July 2010

Lucien Le Cam's 1986 book, Asymptotic Methods in Statistical Decision Theory, contains over 700 pages. I suspect that most would-be readers of the book get lost or discouraged somewhere in Chapter 1.

After many years of effort I like to think I understand a tiny fraction of Le Cam's wonderful ideas. Clearly I have no hope of discussing everything one might want to learn about Le Cam's approach to decision theory. Instead I will try to explain some of the reasons that Le Cam's insights have become so important for modern theoretical statistics. I might be able to convince you to explore the gentler treatment in the smaller 2000 book, Asymptotics in Statistics: Some Basic Concepts by Le Cam and Yang, or even to attack some of the recent literature where Le Cam's concept of a distance between statistical models has proven so fruitful.

I plan to make the morning lectures as accessible as I can, by postponing discussion of some of the technical details to informal afternoon sessions for anyone who wants to dig more deeply into the theory. Detailed notes will appear at some point: see the directory listing at the end of this web page.

Currently I intend to cover the following topics. (The list might change a little as I struggle with the notes.)

  • When are the inference problems posed by two seemingly unrelated models almost equivalent? What does closeness of models in Le Cam's sense mean? How is this closeness related to sufficiency?
  • What does "randomization" mean? Why did Le Cam change the meanings of so many standard decision theoretic concepts? What's wrong with sample spaces, anyway?
  • Is it true that the likelihood ratio is a sufficient statistic? The canonical representation for models with finite parameter sets. Le Cam distance versus some metrics for weak convergence of canonical measures.
  • Hellinger distance, Hellinger differentiability (DQM), and local asymptotic normality.
  • Some form of the modern way to understand efficiency, perhaps the convolution theorem or local asymptotic minimax theorem.
  • Infinite-dimensional models, probably with some discussion of the asymptotic equivalence between density estimation and the white noise model. (There is an intimidating amount of recent literature that has grown from Nussbaum's 1996 Annals of Statistics paper. I'll try to explain some of the old and new ideas.)
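For anyone who wants to preview the central objects of these lectures, here is a standard summary (my own, not taken from the course notes; normalization conventions for the Hellinger distance vary across authors). For two experiments E = (P_θ : θ ∈ Θ) and F = (Q_θ : θ ∈ Θ), possibly on different sample spaces, the Le Cam deficiency measures how well F can be reproduced from E by a randomization (Markov kernel) T:

```latex
% Le Cam deficiency of E relative to F, and the Le Cam distance:
\delta(\mathcal{E},\mathcal{F})
   = \inf_{T}\ \sup_{\theta\in\Theta}
     \bigl\| T P_\theta - Q_\theta \bigr\|_{\mathrm{TV}},
\qquad
\Delta(\mathcal{E},\mathcal{F})
   = \max\bigl( \delta(\mathcal{E},\mathcal{F}),\,
                \delta(\mathcal{F},\mathcal{E}) \bigr).

% Hellinger distance between probabilities P and Q with densities p, q
% relative to a dominating measure mu (here with the factor 1/2, so that
% 0 <= H <= 1):
H^2(P,Q) = \tfrac12 \int \bigl(\sqrt{p}-\sqrt{q}\,\bigr)^2 \,d\mu
         = 1 - \int \sqrt{p\,q}\;d\mu .
```

Two experiments with Δ(E, F) = 0 pose equivalent inference problems: every risk achievable in one can be matched in the other, which is the sense of "almost equivalent" in the first topic above.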

Date                 Times         Main Topic
Tuesday 13 July      10:00-12:00   Lecture 1: Motivating examples and clever ideas.
Wednesday 14 July    4:00-5:00     Informal discussion session
Thursday 15 July     10:00-12:00   Lecture 2: Randomization and the Le Cam distance.
Thursday 15 July     3:00-4:00     Informal discussion session
Monday 19 July       10:00-12:00   Lecture 3: Hellinger versus classical regularity conditions.
Monday 19 July       3:00-4:00     Informal discussion session
Wednesday 21 July    10:00-12:00   Lecture 4: Efficiency, sunk and rescued.
Wednesday 21 July    3:00-4:00     Informal discussion session
Friday 23 July       10:00-12:00   Lecture 5: Infinite dimensions.
Friday 23 July       3:00-4:00     Informal discussion session

References

  • Nussbaum (1996)
    Asymptotic equivalence of density estimation and Gaussian white noise,
    Ann. Statist. 24, 2399-2430.
    [EUCLID]
  • Carter and Pollard (2004)
    Tusnady's inequality revisited,
    Ann. Statist. 32, 2731-2741.
    [EUCLID]
  • van der Vaart (2002)
    The statistical work of Lucien Le Cam,
    Ann. Statist. 30, 631-682.
    [EUCLID]
  • Brown, Carter, Low, and Zhang (2004)
    Equivalence theory for density estimation, Poisson processes and Gaussian white noise with drift,
    Ann. Statist. 32, 2074-2097.
    [EUCLID]
  • Huibin Zhou (2004)
    Minimax Estimation with Thresholding and Asymptotic Equivalence for Gaussian Variance Regression,
    Ph.D. Dissertation, Cornell University.
  • Notes and handouts

    Utah2007.pdf
    = talk given at JSM, using Hellinger differentiability to discuss the paper
    A Sufficiency Paradox: An Insufficient Statistic Preserving the Fisher Information
    by Kagan and Shepp, The American Statistician, February 2005, Vol. 59, No. 1, pp. 54-56
    App.Tusnady.pdf
    = coupling of Binomial with normal
    Chap1_12July2010.pdf
    = draft of the material covered by Lecture 1
    UGMTP_chap1+2.pdf
    = first two chapters of A User's Guide to Measure Theoretic Probability.
    Chapter 1 introduces linear functional notation for expectations.
    Chapter 2 contains a quick tour of some measure theory.
    UGMTP_chap3[part].pdf
    = part of UGMTP chapter 3. Discusses Radon-Nikodym (density of one measure with respect to another measure), total variation (and L1 distance), Hellinger distance, and KL distance.
    Contiguity.pdf
    = draft of a Chapter from Asymptopia. Unedited.
    Pollard97LeCamFest.pdf
    = Another look at differentiability in quadratic mean, a short note contributed to the Festschrift for Lucien Le Cam.
    thoughts.pdf
    = unpublished note regarding Le Cam distance.
    Name                     Last modified      Size
    thoughts.pdf             2010-06-25 15:05   80K
    le_cam.gif               2010-06-21 23:27   281K
    Utah2007.pdf             2007-07-29 15:29   198K
    UGMTP_chap3[part].pdf    2010-07-01 16:11   140K
    UGMTP_chap1+2.pdf        2005-01-06 11:42   455K
    Pollard97LeCamFest.pdf   2010-06-25 14:55   104K
    ICSS_talk.pdf            2010-07-01 13:56   469K
    Contiguity.pdf           2010-06-25 14:51   190K
    Chap1_12July2010.pdf     2010-07-12 18:02   205K
    App.Tusnady.pdf          2010-07-18 21:13   374K