An outline for

Stochastic Processes: Theory and Applications

by

Joseph T. Chang

  1. Introduction. Here the major classes of stochastic processes are described in general terms and illustrated with graphs and pictures, and some of the applications are previewed. A major purpose is to build up motivation, communicating the interest and importance of the subject.
    1. What is a stochastic process? Preview of Markov chains, Markov random fields, hidden Markov models, point processes, martingales, Brownian motion, diffusions.
    2. Motivation: brief descriptions of some applications. Applications to be treated include: simulated annealing, stochastic approximation, Gibbs sampler, topics in modern genetics (e.g. genetic mapping, evolutionary trees), image processing, speech recognition, neural models, card shuffling, and finance.

  2. Markov chains. Markov chains illustrate many of the important ideas of stochastic processes in an elementary setting. Since the technical requirements are minimal, a relatively complete mathematical treatment is feasible. This classical subject is still very much alive, with exciting developments in both theory and applications coming at an accelerating pace in recent decades. This chapter includes discussions of some of these modern developments, such as speed of convergence to stationarity, generating random objects, and applications to simulation and statistics.
    1. What is a Markov chain? How to simulate one (a sketch in code follows this chapter's list), the Markov property.
    2. "It's all just matrix theory."
    3. Basic limit theorem: convergence of distribution to stationarity.
      1. An example of why this matters: generating a random object using Markov chains, Markov sampling, approximate counting.
      2. Proof of theorem. Along the way important concepts such as recurrence and coupling are presented in detail.
    4. Strong Law of Large Numbers for Markov chains.
    5. Branching processes.
    6. More on recurrence: electric networks.
    7. Time reversibility: birth and death chains, random walks on graphs, Metropolis algorithms.
    8. Speed of convergence to stationarity, card shuffling.
    9. Simulated annealing.
    10. Markov sampling (Gibbs sampling, Markov chain Monte Carlo) and Bayesian statistics. A (discrete) change point example.
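
  To make items 1 and 3 concrete, here is a minimal Python sketch (the book's own programs are in Mathematica; the three-state chain and its parameters are purely illustrative). It simulates a chain by repeatedly sampling the next state from the current row of the transition matrix, and compares the long-run fraction of time spent in each state with the stationary distribution.

      import numpy as np

      rng = np.random.default_rng(0)
      P = np.array([[0.50, 0.50, 0.00],    # transition matrix of a 3-state chain
                    [0.25, 0.50, 0.25],
                    [0.00, 0.50, 0.50]])

      # Stationary distribution: the left eigenvector of P for eigenvalue 1,
      # normalized to sum to 1.
      w, v = np.linalg.eig(P.T)
      pi = np.real(v[:, np.argmin(np.abs(w - 1))])
      pi /= pi.sum()

      # Simulate: at each step, draw the next state from the current state's row.
      n, x = 100_000, 0
      visits = np.zeros(3)
      for _ in range(n):
          x = rng.choice(3, p=P[x])
          visits[x] += 1

      print("stationary pi:        ", pi)   # (0.25, 0.5, 0.25) for this chain
      print("empirical frequencies:", visits / n)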

  3. Markov chains in general state spaces. In many applications, the restriction to discrete spaces is unnatural and undesirable. This chapter focuses on extending the basic limit theorem from discrete to much more general state spaces. Extending the coupling idea to the more general setting is the interesting challenge. The mathematical level is a bit higher than in the previous chapter.
    1. Introduction, examples, and stationary distributions. (A simple continuous-state example is sketched in code after this list.)
    2. Chains with an atom.
    3. Harris chains.
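
  As one concrete (and purely illustrative) continuous-state example, a minimal Python sketch of the autoregressive chain X_{n+1} = a X_n + Z_{n+1} with i.i.d. N(0,1) noise: a Markov chain on the real line whose stationary distribution, for |a| < 1, is N(0, 1/(1 - a^2)).

      import numpy as np

      rng = np.random.default_rng(0)
      a, n = 0.8, 200_000
      x, path = 10.0, np.empty(n)             # start far from stationarity
      for i in range(n):
          x = a * x + rng.standard_normal()   # X_{n+1} = a X_n + Z_{n+1}
          path[i] = x

      print("stationary variance 1/(1 - a^2):", 1 / (1 - a**2))
      print("empirical variance (after burn-in):", path[1000:].var())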

  4. Poisson processes. Poisson processes are a simple class of models for the occurrence of events in space and time. They also form a basic building block for many more complicated models in a variety of fields. This material is basic and important.
    1. Definitions and the classical elementary properties, such as superposition and the relation to order statistics of the uniform distribution. (A simulation sketch follows this list.)
    2. Generalizations: nonhomogeneous processes, higher dimensions.
    3. Applications to record values, simple queues, models of neurons, genetic models of mutations and crossovers.
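
  A minimal Python sketch of item 1 (the rate and horizon are illustrative): arrival times of a rate-lambda Poisson process on [0, T] are cumulative sums of i.i.d. Exponential(lambda) gaps, and the count N(T) should have mean and variance near lambda*T.

      import numpy as np

      rng = np.random.default_rng(0)
      lam, T, reps = 2.0, 10.0, 10_000

      def poisson_arrivals(lam, T):
          """Arrival times in [0, T]: cumulative sums of exponential gaps."""
          times, t = [], 0.0
          while True:
              t += rng.exponential(1 / lam)   # next gap, mean 1/lambda
              if t > T:
                  return np.array(times)
              times.append(t)

      counts = np.array([len(poisson_arrivals(lam, T)) for _ in range(reps)])
      print("lambda * T =", lam * T)
      print("mean, variance of N(T):", counts.mean(), counts.var())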

  5. Continuous-time Markov chains. This will be a relatively short chapter; given the development of discrete-time Markov chains, continuous time tends to strike students at this level as a rather minor variation. Technically, continuous time makes certain analyses simpler; this chapter will discuss some applications of this type.
    1. How continuous-time chains work: rate matrix, forward and backward equations. (A simulation sketch follows this list.)
    2. Applications to queueing networks and neural network models.
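
  A minimal Python sketch of item 1 (the rate matrix is illustrative): hold in state i for an Exponential(-Q[i,i]) time, then jump to j with probability Q[i,j]/(-Q[i,i]).

      import numpy as np

      rng = np.random.default_rng(0)
      Q = np.array([[-1.0,  1.0,  0.0],   # rate matrix: rows sum to zero
                    [ 0.5, -1.5,  1.0],
                    [ 0.0,  2.0, -2.0]])

      def simulate_ctmc(Q, x0, t_end):
          """Return the path as a list of (jump time, new state) pairs."""
          t, x, path = 0.0, x0, [(0.0, x0)]
          while True:
              rate = -Q[x, x]
              t += rng.exponential(1 / rate)           # holding time in state x
              if t > t_end:
                  return path
              p = Q[x].copy(); p[x] = 0.0; p /= rate   # jump probabilities
              x = rng.choice(len(p), p=p)
              path.append((t, x))

      print(simulate_ctmc(Q, 0, 5.0))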

  6. Markov random fields and hidden Markov models. A modern subject currently in vigorous development, this generalization of the Markov chain concept has been used successfully in an impressive variety of applications. This chapter gives a bit of theory, but emphasizes the applications.
    1. Idea of Markov random fields and hidden Markov models.
    2. Hammersley-Clifford Theorem.
    3. Ising model and phase transitions.
    4. Simulation of Markov random fields by Markov sampling (sketched in code after this list).
    5. Hidden Markov models: parameter estimation and statistical inference.
    6. Applications to speech recognition, image reconstruction, DNA sequence alignment, evolutionary trees, ion channels, expert systems.
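
  A minimal Python sketch of items 3 and 4 (lattice size, inverse temperature, and sweep count are illustrative): Gibbs sampling from a small Ising model, resampling each spin from its conditional distribution given its four neighbors.

      import numpy as np

      rng = np.random.default_rng(0)
      L, beta, sweeps = 20, 0.4, 200
      spins = rng.choice([-1, 1], size=(L, L))

      for _ in range(sweeps):
          for i in range(L):
              for j in range(L):
                  # Sum of the four nearest-neighbor spins (periodic boundary).
                  s = (spins[(i - 1) % L, j] + spins[(i + 1) % L, j]
                       + spins[i, (j - 1) % L] + spins[i, (j + 1) % L])
                  p_up = 1 / (1 + np.exp(-2 * beta * s))  # P(spin = +1 | neighbors)
                  spins[i, j] = 1 if rng.random() < p_up else -1

      print("mean magnetization:", spins.mean())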

  7. Renewal theory. This is a classical topic in pure and applied probability. This material interacts little with the other chapters (other than Markov chains), so it could be inserted in a variety of positions in a course (and the book). Renewal theory is particularly emphasized in certain applied areas such as operations research, and it can be a useful tool in the analysis of other processes. Students appreciate a simple explanation of the renewal theorem in terms of a jumping rabbit, and they find the "inspection paradox" and related issues interesting.
    1. The basic issues: jumping rabbits and changing lightbulbs.
    2. Renewal theorem, renewal equations.
    3. Applications to overshoot, total lifetime, the "inspection paradox" (demonstrated in the sketch after this list), runs and patterns.
    4. Defective renewal equations and applications to reflected random walks and quick detection of a change in distribution.
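
  A minimal Python sketch of the inspection paradox from item 3 (Exponential(1) lifetimes and the inspection time are illustrative): the bulb found burning at a fixed time t tends to have a longer total lifetime than a typical bulb, because the inspection is more likely to land inside a long lifetime.

      import numpy as np

      rng = np.random.default_rng(0)
      t, reps = 100.0, 20_000
      inspected = np.empty(reps)
      for k in range(reps):
          s = 0.0
          while True:
              life = rng.exponential(1.0)   # i.i.d. lifetimes with mean 1
              if s + life > t:              # this bulb is in service at time t
                  inspected[k] = life
                  break
              s += life

      print("mean lifetime of a typical bulb: 1.0")
      print("mean lifetime of the inspected bulb:", inspected.mean())  # about 2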

  8. Martingales. This pillar of modern probability theory is interesting in its own right, as well as a useful tool in the analysis of other classes of stochastic processes. The presentation of the theory makes use of some basic properties of conditional expectations, without an elaborate measure-theoretic development.
    1. Definitions and examples.
    2. Stopping times and optional sampling theorems. Intuition: "Conservation of fairness" idea in playing games.
    3. Martingale convergence theorems.
    4. Applications.
      1. Absorption and ruin probabilities for Markov chains and random walks (checked by simulation in the sketch after this list).
      2. Branching processes.
      3. Finance: option pricing and martingale representation in discrete time.
      4. Stochastic approximation:
        1. Proof of convergence of the Robbins-Monro procedure.
        2. Explanation of the relationship to clustering algorithms and backpropagation for neural networks.
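
  A minimal Python sketch of application 1 (the starting stake and goal are illustrative): the simple symmetric random walk is a martingale, and optional sampling at the first exit time from (0, b) gives P(hit b before 0) = a/b for a walk started at a.

      import numpy as np

      rng = np.random.default_rng(0)
      a, b, reps = 3, 10, 20_000
      wins = 0
      for _ in range(reps):
          x = a
          while 0 < x < b:            # fair +/-1 bets until absorbed
              x += rng.choice((-1, 1))
          wins += (x == b)            # E[X_tau] = a forces P(hit b) = a/b

      print("a / b =", a / b)
      print("empirical P(hit b before 0):", wins / reps)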

  9. Brownian motion. This process is of fundamental importance, and many students will have heard of it already. A full study is mathematically demanding, and the whole subject is typically intimidating to beginners. This chapter introduces the mathematical fundamentals with plenty of conceptual and intuitive discussion.
    1. Explanation of the definition.
    2. Visualizing Brownian motion.
      1. Sampling a BM arbitrarily finely, normal random walks.
      2. Explanation of strange and scary pathologies: nondifferentiability and infinitely many zeroes.
    3. Stopping times, strong Markov property (intuitive explanation), and the reflection principle.
    4. Conditional distributions.
    5. Construction of BM: connect-the-dots (sketched in code after this list).
    6. Brownian bridge. Application to testing for uniformity.
    7. Two approaches to a boundary crossing problem: martingales and differential equations.
    8. Introduction to weak convergence, Donsker's theorem, Brownian approximations for random walks.
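
  A minimal Python sketch combining items 3 and 5 (the grid size and replication count are illustrative): build Brownian motion on [0, 1] from N(0, dt) increments and check the reflection-principle formula P(max B >= x) = 2(1 - Phi(x)) by simulation.

      import numpy as np
      from math import erf, sqrt

      rng = np.random.default_rng(0)
      n, reps, x = 2000, 5000, 1.0
      dt = 1.0 / n
      hits = 0
      for _ in range(reps):
          B = np.cumsum(rng.normal(0.0, sqrt(dt), size=n))  # connect-the-dots path
          hits += (B.max() >= x)

      Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))          # standard normal cdf
      print("reflection principle 2*(1 - Phi(x)):", 2 * (1 - Phi(x)))
      print("empirical P(max B >= x):", hits / reps)  # slightly low: discrete grid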

  10. Diffusions, stochastic calculus, stochastic differential equations. Diffusions generalize Brownian motion, allowing a much wider variety of phenomena to be modelled and studied. This material builds on the previous chapter. Again, the mathematical derivations stop short of full rigor, but attempt to provide insight and intuition.
    1. Idea of diffusions and stochastic differential equations. Infinitesimal drift and variance functions. Brownian motion as the stochastic analog of a linear function, diffusions as stochastic analogs of solutions of differential equations. (A discretized simulation is sketched after this list.)
    2. The method of differential equations.
    3. Kolmogorov backward and forward equations, stationary distributions.
    4. Boundary behavior: reflection and absorption.
    5. Ito integrals, Ito's formula.
    6. Applications: finance (option pricing and Black-Scholes formula), approximating random walks and Markov chains by diffusions, population genetics, stochastic control (e.g. optimal portfolio choice), Kalman filtering, testing for unit roots in time series.
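
  A minimal Python sketch of item 1 (the Ornstein-Uhlenbeck example and its parameters are illustrative): an Euler discretization of dX = -theta X dt + sigma dW, each step adding drift*dt plus a N(0, sigma^2 dt) shock; the stationary distribution is N(0, sigma^2/(2 theta)).

      import numpy as np

      rng = np.random.default_rng(0)
      theta, sigma = 1.0, 0.5
      dt, n = 0.01, 200_000
      x, path = 0.0, np.empty(n)
      for i in range(n):
          # Euler step: infinitesimal drift -theta*x, infinitesimal variance sigma^2.
          x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
          path[i] = x

      print("stationary variance sigma^2/(2 theta):", sigma**2 / (2 * theta))
      print("empirical variance:", path[10_000:].var())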

  11. Likelihood ratios and extremes. Boundary crossing problems for stochastic processes arise frequently in applications. For example, many statistical procedures raise an alarm or reject a hypothesis when a certain stochastic process reaches a certain height. This chapter explores some interesting and powerful techniques for determining or approximating probabilities of this type of event.
    1. Poisson clumping heuristic.
      1. Explanation of the idea and a worked-out example (the Ornstein-Uhlenbeck process).
      2. Applications: e.g. genetic mapping.
    2. Change of measure, likelihood ratios, Wald likelihood ratio identity.
      1. Derivation of inverse Gaussian distribution.
      2. Conditioning a random walk or Brownian motion with negative drift to reach a high level. Comment on "Neo-Darwinism implies punctuated equilibrium."
      3. Optimality of the sequential probability ratio test.
      4. Alternative derivation of Black-Scholes.
    3. Importance sampling: a simulation technique for rare events (sketched in code after this list).
    4. Introduction to large deviations.
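
  A minimal Python sketch of item 3 (the event and the exponential tilting are illustrative): estimate the rare-event probability P(Z > 5) for Z ~ N(0, 1) by sampling from the shifted law N(5, 1) and reweighting by the likelihood ratio; naive Monte Carlo almost never sees the event.

      import numpy as np

      rng = np.random.default_rng(0)
      n, mu = 100_000, 5.0

      # Naive Monte Carlo: essentially no samples land in the event.
      z = rng.standard_normal(n)
      print("naive estimate:", (z > mu).mean())

      # Importance sampling: draw from N(mu, 1), weight by the density ratio
      # phi(y) / phi(y - mu) = exp(-mu*y + mu^2/2).
      y = rng.standard_normal(n) + mu
      w = np.exp(-mu * y + mu**2 / 2) * (y > mu)
      print("importance-sampling estimate:", w.mean())  # exact value is about 2.87e-7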

  12. Appendices.
    1. Topics from mathematics and probability. This is a presentation of background material that may be reviewed as needed.
      1. Indicator random variables. A review of a technique that is used throughout the book.
      2. Conditioning. Conditional probabilities and expectations are an important topic about which students typically have a shaky understanding.
      3. Monotone convergence, dominated convergence, and all that. A statement of some basic results from measure theory that are used in a few proofs. These are viewed largely as technicalities, although some examples involving stopping times and martingales show that they can occasionally become a central issue.
      4. Infinity and limsupery. Here some basic notions from elementary analysis and probability are reviewed. These include infimum and supremum, liminf and limsup. I want the students to be able to understand a definition like "τ_i = inf{n > 0 : X_n = i}," and to appreciate a distinction like "T is not only finite with probability 1, but also has finite expectation."
      5. Simulation. How to generate random variables from some standard distributions such as exponential and normal. (Two standard methods are sketched in code at the end of this outline.)
    2. Programs. A listing and description of computer programs used in the book. (I have mostly been using Mathematica for computation.) These will also be accessible electronically via the Internet.
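
  A minimal Python sketch of two standard generation methods for the appendix item on simulation, building both distributions from uniforms alone: the inverse transform for the exponential and the Box-Muller transform for the normal.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      u1, u2 = rng.random(n), rng.random(n)

      expo = -np.log(1 - u1)    # inverse transform: Exponential(1)
      z = np.sqrt(-2 * np.log(1 - u1)) * np.cos(2 * np.pi * u2)  # Box-Muller: N(0, 1)

      print("Exp(1) mean, variance (should be 1, 1):", expo.mean(), expo.var())
      print("N(0,1) mean, variance (should be 0, 1):", z.mean(), z.var())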