David Pollard's webpage
Professor emeritus of Statistics and Data Science,
Yale University
http://www.stat.yale.edu/~pollard/
david.pollard@yale.edu
I am now retired, enjoying a life of reading, writing, and learning.
I am no longer taking on new PhD students,
although I am willing to engage in informal discussions with anyone
already at Yale.
Please do not write to me asking to join my research group.
I have no such group.
books & manuscripts
Probability tools, tricks, and miracles.
An unpublished book, aimed at new researchers who want to
understand some of the methods that have been developed over the
past half century. Originally it was intended as an update of my
first two books, focusing mainly on empirical processes, but over the years
it evolved into a more general
discussion. Some topics:
- Calculus + convexity: a very useful and effective
combination. Simple tricks to replace brute force. Chapter 2.
- Precise behavior of the N(0,1) tail probabilities, with an
application to Stein's method for normal approximation. §3.4
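(For orientation, the classical Mills-ratio bounds on the N(0,1) tail — a standard fact, stated here from memory rather than quoted from the book — take the form:)

```latex
% Standard bounds on the N(0,1) tail, with \phi the standard normal
% density and \Phi its distribution function:
\frac{x}{1+x^2}\,\phi(x) \;\le\; 1-\Phi(x) \;\le\; \frac{\phi(x)}{x},
\qquad x > 0 .
```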
- Tail bounds and relationships between the Binomial,
Poisson-Binomial, and Hypergeometric distributions. A beautiful result of
Hoeffding proved using a method invented by Chebyshev nearly 200
years ago. §§3.7, 3.8; Chapter 4.
- Quick and effective methods for controlling stochastic
processes (maximal inequalities) by means of Orlicz norms.
Chapter 5.
- For the multivariate normal: tail bounds, comparison
inequalities, concentration. Discussion and comparison of three
different versions of the path method. Chapter 6.
- Subgaussianity. Standard inequalities (Hoeffding for sums of
independent variables or martingales) that every probabilist
should know. Chapter 7.
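(The prototypical example is Hoeffding's inequality, stated here in its standard form for bounded independent summands — a textbook statement, not a quotation from the manuscript:)

```latex
% Hoeffding's inequality: for independent X_i with a_i \le X_i \le b_i
% and S_n = X_1 + \dots + X_n,
\mathbb{P}\{S_n - \mathbb{E}S_n \ge t\}
\;\le\; \exp\!\left( \frac{-2t^2}{\sum_{i=1}^n (b_i - a_i)^2} \right),
\qquad t > 0 .
```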
- Bennett inequalities for independent summands or
martingales, with an application to the Kim-Vu concentration
results for random polynomials. §§8.3, 8.4.
- The justification for the typical remark "without loss of
generality we may assume the index set is at worst countably
infinite" when studying stochastic processes indexed by general
metric spaces. Chapter 9.
- Chaining: Why does it work? Classical covering/packing
approaches. How to choose useful sequences of constants.
Oscillation bounds constructed as an exercise in approximation by
finite sets of fixed size. Chapter 10.
- Methods of Fernique and Talagrand for construction of
chaining frameworks. Relationship between majorizing measures,
weighted chaining frameworks, and admissible sequences of
partitions. Insights provided by a particularly instructive
variation on a classical example. Chapter 11.
- Weighted sequences of partitions for Gaussian lower bounds.
Constructions as algorithms. What is going on? Chapter 12.
- Symmetrization in the broad sense. How R. A. Fisher's ideas about experimental
design 90 years ago anticipated the use of random ±1 variables (aka 'Rademachers').
Maximal inequalities for sums of independent stochastic processes.
Chapter 13.
- VC sets. What's going on? Downshifting on matrices filled with 0's and 1's
as a way to prove the famous VC bound. Shatter dimension of subsets of {0,1}^n
and Haussler's interpretation via orientation of edges. (Very clever stuff.)
Chapter 14.
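(The famous VC bound referred to here is presumably the Sauer-Shelah lemma; its standard statement — again a textbook fact rather than a quotation — is:)

```latex
% Sauer-Shelah: a class of sets with VC dimension d picks out
% at most this many distinct subsets from any set of n points:
\Delta_n \;\le\; \sum_{i=0}^{d} \binom{n}{i}
\;\le\; \left( \frac{en}{d} \right)^{d}
\qquad \text{for } n \ge d \ge 1 .
```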
- Surround dimension (aka fat shattering) for subsets of Euclidean spaces
and the consequences for packing numbers. (Beautiful ideas about counting lattice points
by Vershynin et al.) Chapter 15.