Bayesian methods, filtering, and Markov chain Monte Carlo

Apart from finance and medicine, Bayesian methods, especially stochastic filtering, are used in rocket science; the Kalman filter was used in the US Apollo programme, whose Apollo 11 mission landed the first two people on the Moon.

Overview

The Bayesian approach to machine learning is based on Bayes’s theorem and the idea that new information can be used, iteratively, to update prior beliefs.
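
In symbols, Bayes’s theorem gives p(θ | y) ∝ p(y | θ) p(θ): the posterior over an unknown quantity θ is proportional to the likelihood of the data y times the prior. As a minimal Python sketch of this iterative updating, borrowing the coin-toss setting of Tutorial 1 and using made-up numbers, a Beta prior on the probability of heads can be updated one toss at a time via Beta-Bernoulli conjugacy:

    import numpy as np

    rng = np.random.default_rng(42)

    # Simulated coin tosses with an assumed "true" probability of heads.
    p_true = 0.7
    tosses = rng.binomial(1, p_true, size=100)

    # Beta(1, 1) prior, i.e. uniform on [0, 1].
    alpha, beta = 1.0, 1.0

    # Iterative Bayesian updating: each head increments alpha, each tail
    # increments beta (Beta-Bernoulli conjugacy), so yesterday's posterior
    # becomes today's prior.
    for y in tosses:
        alpha += y
        beta += 1 - y

    posterior_mean = alpha / (alpha + beta)
    print(f"Posterior mean of the probability of heads: {posterior_mean:.3f}")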

Suppose that there is a latent, unobserved process, X, and an observable process, Y, which, in a certain specific sense, depends on X. Both X and Y evolve over time. The filtering problem consists in using our observations of Y to estimate, in some optimal sense, the current state of X. The resulting information about the state of X can also be used to forecast both X and Y.
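
To make the setting concrete, a minimal Python sketch (with arbitrarily chosen noise variances) is the one-dimensional linear-Gaussian state-space model in which the latent X is a random walk and Y is X observed with noise; the classical Kalman filter then recovers a minimum mean squared error estimate of X from the observations of Y:

    import numpy as np

    rng = np.random.default_rng(0)
    T = 200
    q, r = 0.1, 1.0  # process and observation noise variances (assumed values)

    # Latent random walk X and noisy observations Y of it.
    x = np.cumsum(rng.normal(0.0, np.sqrt(q), size=T))
    y = x + rng.normal(0.0, np.sqrt(r), size=T)

    # Kalman filter for the model X_t = X_{t-1} + w_t, Y_t = X_t + v_t.
    m, P = 0.0, 1.0  # initial state mean and variance
    estimates = []
    for obs in y:
        # Predict step: propagate the state estimate through the dynamics.
        m_pred, P_pred = m, P + q
        # Update step: correct the prediction using the new observation.
        K = P_pred / (P_pred + r)  # Kalman gain
        m = m_pred + K * (obs - m_pred)
        P = (1.0 - K) * P_pred
        estimates.append(m)

    rmse = np.sqrt(np.mean((np.asarray(estimates) - x) ** 2))
    print(f"RMSE of the filtered estimate of X: {rmse:.3f}")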

We may further suppose that X and/or Y depend on certain parameters, θ, and our task may be complicated by the need to estimate these parameters. They can be estimated either with Frequentist methods, which rely on maximum likelihood, or with Bayesian methods, such as Markov chain Monte Carlo; both routes are sketched below.
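
As an illustration of the Frequentist route, the following sketch (with synthetic, normally distributed data and made-up parameter values) obtains maximum likelihood estimates by numerically minimising the negative log-likelihood; the Bayesian route is sketched after the next paragraph:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    data = rng.normal(loc=2.0, scale=0.5, size=500)  # synthetic data, assumed parameters

    # Negative log-likelihood of i.i.d. Gaussian observations,
    # parametrised as theta = (mu, log_sigma) to keep sigma positive.
    def neg_log_likelihood(theta):
        mu, log_sigma = theta
        return -np.sum(norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)))

    # Maximum likelihood estimation: minimise the negative log-likelihood.
    result = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]))
    mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
    print(f"MLE: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")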

We shall consider both approaches and introduce Markov chain Monte Carlo packages such as WinBUGS and PyMC3, which greatly simplify the task of parameter estimation.
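
For a flavour of what such packages look like in practice, here is a minimal PyMC3 sketch (assuming PyMC3 3.7 or later, where the scale parameter is spelt sigma rather than sd) that estimates the mean and standard deviation of synthetic Gaussian data by Markov chain Monte Carlo; the model, priors, and data are illustrative only:

    import numpy as np
    import pymc3 as pm

    rng = np.random.default_rng(2)
    data = rng.normal(loc=2.0, scale=0.5, size=500)  # synthetic data, assumed parameters

    with pm.Model() as model:
        # Priors on the unknown parameters.
        mu = pm.Normal("mu", mu=0.0, sigma=10.0)
        sigma = pm.HalfNormal("sigma", sigma=1.0)
        # Likelihood of the observed data.
        obs = pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
        # Draw posterior samples by Markov chain Monte Carlo (NUTS by default).
        trace = pm.sample(2000, tune=1000, chains=2, random_seed=2)

    print(pm.summary(trace))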

The filtering problem was first introduced in the context of radar communications, ballistics, and rocket science. Some of the earliest practical stochastic filters were used in the Apollo programme to help land astronauts on the Moon. Particle filters are nowadays used in computer vision, self-driving cars, and modern econometrics. In this course we shall also consider applications of stochastic filtering to finance.

Instructors

  • Dr. Paul Bilokon

Venue

Level39, One Canada Square, Canary Wharf, London’s premier fintech hub.

Schedule

Day 1

08:30 – 09:00   Registration and welcome, a tour of Level39
09:00 – 10:00   Lecture 1: Bayesian and Frequentist estimation
10:00 – 10:30   Tutorial 1: Bernoulli trials and repeated coin tosses
10:30 – 11:00   Coffee break
11:00 – 12:00   Lecture 2: The Kalman, Extended, and Unscented Kalman filters
12:00 – 12:30   Tutorial 2: Filtering the ARMA(p, q) time series model
12:30 – 13:30   Lunch
13:30 – 14:30   Lecture 3: Particle filters
14:30 – 15:00   Tutorial 3: Implementing a particle filter
15:00 – 15:30   Coffee break
15:30 – 16:30   Lecture 4: Stochastic volatility models
16:30 – 17:00   Tutorial 4: Applying particle filters to stochastic volatility with leverage and jumps
17:00 – 18:00   Lab

Day 2

08:30 – 09:00   Registration and welcome
09:00 – 10:00   Lecture 1: Maximum likelihood and the Frequentist approach to parameter estimation
10:00 – 10:30   Tutorial 1: Calibrating stochastic volatility models with leverage and jumps
10:30 – 11:00   Coffee break
11:00 – 12:00   Lecture 2: Markov chain Monte Carlo and the Bayesian approach to parameter estimation
12:00 – 12:30   Tutorial 2: WinBUGS and PyMC3: calibrating stochastic volatility models
12:30 – 13:30   Lunch
13:30 – 14:30   Lecture 3: Modelling the term structure of yield curves
14:30 – 15:00   Tutorial 3: Filtering the term structure using the Kalman filter
15:00 – 15:30   Coffee break
15:30 – 16:30   Lecture 4: Modern applications and practical considerations
16:30 – 17:00   Tutorial 4: Productionising Kalman and particle filters
17:00 – 18:00   Lab

What to bring

Please refer to https://ai.thalesians.com/for-our-delegates/software-requirements/

Bibliography

The course is designed to be self-contained. However, if you would like to read up on the content before, during, and/or after the course, we recommend the following books: