| Name | Last modified | Size |
| --- | --- | --- |
| convex.pdf | 1997-06-11 13:22 | 247K |
| PDreport.pdf | 1997-08-19 13:21 | 434K |
| thoughts.pdf | 2000-05-23 10:12 | 77K |
| KimPollard90AS.pdf | 2003-06-04 22:01 | 2.5M |
| Tusnady3.pdf | 2003-09-29 16:08 | 136K |
| pollardBDI.1july02.pdf | 2003-09-29 20:48 | 185K |
| Pollard81AS.pdf | 2003-12-07 21:43 | 598K |
| NolanPollard88AP.pdf | 2003-12-07 21:48 | 716K |
| NolanPollard87AS.pdf | 2003-12-07 21:53 | 1.5M |
| xHEADER | 2004-05-20 15:38 | 3.4K |
| PDreport-11apr97.pdf | 2005-07-12 12:23 | 521K |
| PakesPollard89Econom..> | 2005-07-23 13:39 | 2.6M |
| chang-pollard.pdf | 2006-10-19 16:10 | 430K |
| Pollard91ET.pdf | 2006-10-19 16:13 | 488K |
| december1982.pdf | 2006-10-19 16:29 | 759K |
| Pollard82ITT.pdf | 2006-10-19 16:35 | 859K |
| august 1986.pdf | 2006-10-19 16:41 | 893K |
| PollardRadchenko.pdf | 2006-10-19 19:42 | 232K |
| Pollard89StatSci.pdf | 2006-10-19 20:21 | 4.6M |
| Concentration.14jan0..> | 2007-01-14 17:41 | 293K |
| LNMS5512.pdf | 2007-05-30 07:50 | 152K |
| Pollard82JAMS.pdf | 2008-04-11 22:04 | 1.2M |
| PollardTweedie-Rtheo..> | 2008-04-11 22:19 | 539K |
| Pollard1980Metrika.pdf | 2008-04-11 22:32 | 1.4M |
| PollardTweedie75JLMS..> | 2008-04-13 09:45 | 666K |
| chromatic.14april200..> | 2008-04-14 20:47 | 410K |
| Pollard1991Duke.pdf | 2008-04-22 19:19 | 1.6M |
| chromatic.30june08.pdf | 2008-06-30 14:27 | 214K |
| Pollard-publications..> | 2009-09-01 15:53 | 83K |
| Dou-Pollard-Zhou.pdf | 2010-01-24 10:19 | 286K |

- Dou-Pollard-Zhou.pdf =
**Functional regression for general exponential families** (with Wei Dou and Harry Zhou) - *The paper derives a minimax lower bound for the rate of convergence of an infinite-dimensional parameter in an exponential family model. An estimator that achieves the optimal rate is constructed by maximum likelihood on finite-dimensional approximations, with a parameter dimension that grows with the sample size.*
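The estimation scheme described above — maximum likelihood over finite-dimensional approximations whose dimension grows with the sample size — can be sketched in the simplest (Gaussian) special case, where the MLE reduces to least squares on a truncated basis. This is a hypothetical illustration, not code from the paper; the cosine basis and the n^(1/3) dimension rule are assumptions made for the example.

```python
import numpy as np

def cosine_design(t, n_basis):
    """Design matrix for an orthonormal cosine basis on [0, 1]."""
    cols = [np.ones_like(t)]
    cols += [np.sqrt(2) * np.cos(np.pi * j * t) for j in range(1, n_basis)]
    return np.column_stack(cols)

def sieve_mle_gaussian(x, y, n_basis):
    """In the Gaussian family the MLE over the n_basis-dimensional
    sieve is just least squares on the truncated basis."""
    theta, *_ = np.linalg.lstsq(cosine_design(x, n_basis), y, rcond=None)
    return lambda t: cosine_design(t, n_basis) @ theta

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0, 1, n)
f_true = lambda t: np.sin(2 * np.pi * t)
y = f_true(x) + rng.normal(0.0, 0.3, n)

# Let the sieve dimension grow with n (here roughly n^(1/3),
# an assumed rule for illustration only).
J = max(3, int(round(n ** (1 / 3))))
f_hat = sieve_mle_gaussian(x, y, J)

grid = np.linspace(0, 1, 200)
err = np.sqrt(np.mean((f_hat(grid) - f_true(grid)) ** 2))
print(f"sieve dimension {J}, RMSE {err:.3f}")
```

Growing the dimension with n trades the bias of the finite-dimensional approximation against the variance of estimating more coefficients, which is how the rate-optimal estimator in the paper balances the two.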
- Concentration.14jan07.pdf =
**A note on Talagrand's convex hull concentration inequality** - *The paper reexamines an argument by Talagrand that leads to a remarkable exponential tail bound for the concentration of probability near a set. The main novelty is the replacement of a mysterious calculus inequality by an application of Jensen's inequality.*
- Tusnady3.pdf =
**Tusnady's inequality revisited** (with Andrew Carter) -
*Tusnady's inequality is the key ingredient in the KMT/Hungarian coupling of the empirical distribution function with a Brownian Bridge. We present an elementary proof of a result that sharpens the Tusnady inequality, modulo constants. Our method uses the beta integral representation of Binomial tails, simple Taylor expansion, and some novel bounds for the ratios of normal tail probabilities.*
- thoughts.pdf =
**Some thoughts on Le Cam's statistical decision theory** - *The paper contains some musings about the abstractions introduced by Lucien Le Cam into the asymptotic theory of statistical inference and decision theory. A short, self-contained proof of a key result (existence of randomizations via convergence in distribution of likelihood ratios) and an outline of a proof of a local asymptotic minimax theorem are presented as illustrations of how Le Cam's approach leads to conceptual simplifications of asymptotic theory.*
- pollardBDI.1july02.pdf =
**Maximal inequalities via bracketing with adaptive truncation** - *The paper provides a recursive interpretation of the technique known as bracketing with adaptive truncation. By way of illustration, a simple bound is derived for the expected value of the supremum of an empirical process, thereby leading to a simpler derivation of a functional central limit theorem due to Ossiander. The recursive method is also abstracted into a framework that consists of only a small number of assumptions about processes and functionals indexed by sets of functions. In particular, the details of the underlying probability model are condensed into a single inequality involving finite sets of functions. A functional central limit theorem of Doukhan, Massart and Rio, for empirical processes defined by absolutely regular sequences, motivates the generalization.*
- convex.pdf =
**Asymptotics for minimisers of convex processes** (with Nils Lid Hjort) - *By means of two simple convexity arguments we are able to develop a general method for proving consistency and asymptotic normality of estimators that are defined by minimisation of convex criterion functions. This method is then applied to a fair range of statistical estimation problems, including Cox regression, logistic and Poisson regression, least absolute deviation regression outside model conditions, and pseudo-likelihood estimation for Markov chains. Our paper has two aims. The first is to exposit the method itself, which in many cases, under reasonable regularity conditions, leads to new proofs that are simpler than the traditional ones. Our second aim is to exploit the method to its limits for logistic regression and Cox regression, where we seek asymptotic results under regularity conditions as weak as possible. For Cox regression in particular we are able to weaken previously published regularity conditions substantially.*
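As a toy illustration of an estimator defined by minimising a convex criterion function — logistic regression, one of the examples the abstract mentions — the following numpy sketch minimises the convex negative log-likelihood by Newton's method. The simulated data and the choice of Newton iterations are assumptions made for this example, not the paper's construction.

```python
import numpy as np

def logistic_mle(X, y, n_iter=25):
    """Minimise the convex logistic negative log-likelihood
    by Newton's method (equivalently, IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                  # score vector
        info = X.T @ (X * (p * (1.0 - p))[:, None])  # Fisher information
        beta = beta + np.linalg.solve(info, grad)
    return beta

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = (rng.uniform(size=n) < p).astype(float)

beta_hat = logistic_mle(X, y)
print(beta_hat)
```

Because the criterion is convex in beta, pointwise convergence of the criterion already controls the minimiser — which is the observation the paper's two convexity arguments exploit to get consistency and asymptotic normality under weak conditions.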