Yale University
Department of Statistics
Seminar

Monday, November 11, 2002

A Robust Approach to Local Estimation with Applications
to Neural Networks and Machine Learning

L. K. Jones         UMass Lowell

          We present a minimax approach to local learning based on Taylor's theorem. The approach is not heuristic: it produces a family of local linear estimators, indexed by the Taylor error coefficient, with optimal finite-sample mean squared error bounds that are monotone in this index. The need for the (global) technique of cross-validation to determine a bandwidth is replaced by the user's choice of a threshold on mean squared error.
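
For orientation only, the sketch below illustrates the flavor of the bandwidth-free idea in one dimension: a local linear fit at a point whose neighborhood is chosen by a user-set threshold on a crude mean squared error bound (a noise-variance term plus a bias term driven by an assumed Taylor error coefficient), rather than by cross-validating a bandwidth. The particular bound, the constants C, sigma2, and tau, and the rule of taking the largest admissible radius are illustrative assumptions, not the estimator or the finite-sample bounds presented in the talk.

```python
# Illustrative sketch only -- not the estimator from the talk.  Local linear
# estimation of f(x0) where the neighborhood radius is picked by a user-set
# threshold tau on a crude MSE bound (noise variance / n  +  squared Taylor
# bias C*r^2), instead of cross-validating a bandwidth.  C, sigma2, tau and
# the "largest admissible radius" rule are all assumptions made for the demo.

import numpy as np

def local_linear_fit(xs, ys, x0):
    """Least-squares line through the local points; the intercept estimates f(x0)."""
    A = np.column_stack([np.ones_like(xs), xs - x0])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef[0]

def estimate_with_mse_threshold(x, y, x0, C, sigma2, tau, radii):
    """Return the fit at the largest radius whose crude MSE bound is <= tau."""
    best = None
    for r in sorted(radii):
        mask = np.abs(x - x0) <= r
        n = int(mask.sum())
        if n < 2:
            continue
        bound = sigma2 / n + (C * r ** 2) ** 2   # variance + squared Taylor bias
        if bound <= tau:
            best = local_linear_fit(x[mask], y[mask], x0)
    return best

# Toy usage: noisy samples of a smooth target, estimated at x0 = 0.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, 200))
y = np.sin(3.0 * x) + rng.normal(scale=0.1, size=x.size)
print(estimate_with_mse_threshold(x, y, x0=0.0, C=4.5, sigma2=0.01,
                                  tau=0.005, radii=np.linspace(0.05, 0.5, 10)))
```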

        When information about the target function constrains the minimaximization, the resulting estimators are quite different from the best unbiased estimators subject to the same constraints. (In fact, in k dimensions, the new estimators can be shown to be O(k) more efficient than these best unbiased ones for a large class of naturally occurring problems.)  Determining these minimax estimators in constrained cases requires solving problems in real algebraic geometry.

         The theory suggests a new feature extraction method that could lead to a new finite-sample accuracy theory for neural networks and machine learning.


Seminar to be held in Room 107, 24 Hillhouse Avenue at 4:15 pm