The Bayesian approach to machine learning is based on Bayes’s theorem and the idea that new information can be used, iteratively, to update prior beliefs.
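This iterative updating is easiest to see in the Bernoulli setting of repeated coin tosses (cf. Tutorial 1). The sketch below, with illustrative names of our own choosing, uses the conjugate Beta-Bernoulli pair: a Beta(a, b) prior on the probability of heads becomes a Beta(a + h, b + t) posterior after observing h heads and t tails.

```python
# Conjugate Bayesian updating for repeated coin tosses (Beta-Bernoulli).
# A Beta(a, b) prior updated with h heads and t tails yields Beta(a+h, b+t).
def update_beta(a, b, tosses):
    """Update a Beta(a, b) prior with a sequence of 0/1 coin-toss outcomes."""
    for x in tosses:
        a, b = a + x, b + (1 - x)
    return a, b

# Start from a uniform prior Beta(1, 1) and observe 7 heads and 3 tails.
a, b = update_beta(1, 1, [1, 1, 1, 0, 1, 1, 0, 1, 1, 0])
posterior_mean = a / (a + b)   # Beta(8, 4) posterior; mean 8/12 = 2/3
```

Each toss plays the role of the "new information" above: the posterior after one observation becomes the prior for the next.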
Suppose that there is a latent, unobserved process, X, and an observable process Y, which, in a certain specific sense, depends on X. Both X and Y evolve over time. The filtering problem consists in using our observations of Y to estimate, in some optimal sense, the current state of X. The information about the current state of X can also be used to forecast both X and Y.
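A minimal concrete instance of this setup, sketched below under our own assumptions (not taken from the course materials): X is a one-dimensional Gaussian random walk, Y observes X with additive Gaussian noise, and the Kalman filter (Lecture 2) gives the optimal estimate of X from Y.

```python
import numpy as np

def kalman_1d(ys, q, r, m0=0.0, p0=1.0):
    """Filter observations ys of a random-walk state.

    q = process-noise variance, r = observation-noise variance,
    (m0, p0) = prior mean and variance of the initial state."""
    m, p = m0, p0
    estimates = []
    for y in ys:
        p = p + q                 # predict: state uncertainty grows
        k = p / (p + r)           # Kalman gain
        m = m + k * (y - m)       # update: correct towards the observation
        p = (1.0 - k) * p         # posterior variance after the update
        estimates.append(m)
    return np.array(estimates)

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0.0, 0.1, 200))   # latent random walk X
y = x + rng.normal(0.0, 1.0, 200)          # noisy observations Y
m = kalman_1d(y, q=0.1**2, r=1.0**2)
```

The filtered estimate `m` tracks the latent path X far more closely than the raw observations Y do, which is the "optimal sense" the filtering problem refers to in this linear-Gaussian case.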
We may further suppose that X and/or Y depend on certain parameters, θ, which may themselves need to be estimated, complicating our task. We can use either Frequentist methods, relying on maximum likelihood, or Bayesian methods, such as Markov chain Monte Carlo.
We shall consider both approaches and introduce Markov chain Monte Carlo packages such as WinBUGS and PyMC3, which greatly simplify the task of parameter estimation.
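To illustrate the core idea that these packages automate, here is a hand-rolled random-walk Metropolis sampler for a deliberately simple problem: the posterior of a Gaussian mean with known variance and a flat prior. This is our own illustrative sketch, not an excerpt from WinBUGS or PyMC3.

```python
import numpy as np

def log_post(theta, data, sigma=1.0):
    """Log-posterior of the mean theta under a flat prior, known sigma."""
    return -0.5 * np.sum((data - theta) ** 2) / sigma**2

rng = np.random.default_rng(42)
data = rng.normal(2.0, 1.0, 100)   # synthetic data with true mean 2.0

theta, chain = 0.0, []
lp = log_post(theta, data)
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.5)        # random-walk proposal
    lp_prop = log_post(prop, data)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)

posterior_mean = np.mean(chain[1000:])         # discard burn-in samples
```

For this conjugate toy problem the posterior mean should land close to the sample mean of the data; the value of packages such as WinBUGS and PyMC3 is that they generate and tune such samplers automatically for far richer models, including the stochastic volatility models of Day 2.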
The filtering problem was first introduced in the context of radar communications, ballistics, and rocket science. Early stochastic filters were famously used in the Apollo programme to help land astronauts on the Moon. Particle filters are nowadays used in computer vision, self-driving cars, and modern econometrics. In our course we shall also consider applications of stochastic filtering to finance.
- Dr. Paul Bilokon
Level39 of One Canada Square, Canary Wharf, London’s premier fintech hub.
| Time | Day 1 | Day 2 |
| --- | --- | --- |
| 08:30 – 09:00 | Registration and welcome, a tour of Level39 | Registration and welcome |
| 09:00 – 10:00 | Lecture 1: Bayesian and Frequentist estimation | Lecture 1: Maximum likelihood and the Frequentist approach to parameter estimation |
| 10:00 – 10:30 | Tutorial 1: Bernoulli trials and repeated coin tosses | Tutorial 1: Calibrating stochastic volatility models with leverage and jumps |
| 10:30 – 11:00 | Coffee break | Coffee break |
| 11:00 – 12:00 | Lecture 2: The Kalman, Extended, and Unscented Kalman filter | Lecture 2: Markov chain Monte Carlo and the Bayesian approach to parameter estimation |
| 12:00 – 12:30 | Tutorial 2: Filtering the ARMA(p, q) time series model | Tutorial 2: WinBUGS and PyMC3: calibrating stochastic volatility models |
| 12:30 – 13:30 | Lunch | Lunch |
| 13:30 – 14:30 | Lecture 3: Particle filters | Lecture 3: Modelling term structure of yield curves |
| 14:30 – 15:00 | Tutorial 3: Implementing a particle filter | Tutorial 3: Filtering the term structure using the Kalman filter |
| 15:00 – 15:30 | Coffee break | Coffee break |
| 15:30 – 16:30 | Lecture 4: Stochastic volatility models | Lecture 4: Modern applications and practical considerations |
| 16:30 – 17:00 | Tutorial 4: Applying particle filters to stochastic volatility with leverage and jumps | Tutorial 4: Productionising Kalman and particle filters |
| 17:00 – 18:00 | Lab | Lab |
Recommended reading
The course is designed to be self-contained. However, if you would like to read up on the content before, during, and/or after the course, we recommend the following books:
- Dan Simon. Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches. Wiley, 2006.
- Alan Bain and Dan Crisan. Fundamentals of Stochastic Filtering. Springer, 2009.
- Paul Bilokon, James Gwinnutt, and Daniel Jones. Stochastic Filtering Methods in Electronic Trading. In Novel Methods in Computational Finance, ed. Matthias Ehrhardt, Michael Guenther, and E. Jan W. ter Maten. Springer, 2017.