Data Science, Machine Learning, and Artificial Intelligence rely heavily on modern mathematics. The key mathematical prerequisites for understanding these disciplines include
- probability theory
- linear algebra
- optimisation theory
The first day of the AI @ Oxford School is dedicated specifically to these mathematical prerequisites and can be done separately from the rest of the School.
More often than not, academics and practitioners are introduced to the key mathematical ideas in the highly complex setting of advanced measure theory and Hilbert spaces.
This course will introduce all the key ideas, including the more advanced ones, in an elementary, intuitive setting, enabling you to proceed to world-class, novel Data Science in as little time as possible.
The material is supported by a wealth of examples and tutorials.
- Dr. Paul Bilokon
Christ Church (Ædes Christi), a college of the University of Oxford.
| Time | Session |
| --- | --- |
| 08:30 – 09:00 | Registration and welcome |
| 09:00 – 10:00 | Lecture 1: Introduction to data science |
| 10:00 – 10:30 | Tutorial 1 |
| 10:30 – 11:00 | Coffee break |
| 11:00 – 12:00 | Lecture 2: Probability theory |
| 12:00 – 12:30 | Tutorial 2 |
| 12:30 – 13:30 | Lunch |
| 13:30 – 14:30 | Lecture 3: Linear algebra |
| 14:30 – 15:00 | Tutorial 3 |
| 15:00 – 15:30 | Coffee break |
| 15:30 – 16:30 | Lecture 4: Optimisation theory |
| 16:30 – 17:00 | Tutorial 4 |
| 17:00 – 18:00 | Lab |
| 18:00 – 19:30 | Tour of Christ Church |
| 19:30 – 21:00 | Dinner at the Dining Hall |
- Introduction to data science
  - Data, information, knowledge, understanding, wisdom
  - Analysis and synthesis
  - Data analysis and data science
  - The process of data science
  - Artificial Intelligence and Machine Learning
  - The language of Machine Learning
  - Machine Learning and statistics
- Probability theory
  - Random experiment and the sample space
  - The classical interpretation of probability
  - The frequentist interpretation of probability
  - The Bayesian interpretation of probability
  - The axiomatic interpretation of probability
  - Kolmogorov's axiomatisation
  - Conditional probability
  - The law of total probability
  - Bayes's theorem
  - Random variables
  - Covariances and correlations
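Bayes's theorem and the law of total probability, both listed above, can be sketched in a few lines of Python. The diagnostic-test setting and the numbers below are purely illustrative, not taken from the course material:

```python
# Illustrative Bayes's theorem example: a diagnostic test with 99% sensitivity
# and 95% specificity, applied in a population with 1% disease prevalence.

def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes's theorem."""
    # Law of total probability:
    # P(positive) = P(pos | disease) P(disease) + P(pos | healthy) P(healthy)
    p_positive = sensitivity * prior + (1.0 - specificity) * (1.0 - prior)
    # Bayes's theorem: P(disease | positive) = P(pos | disease) P(disease) / P(positive)
    return sensitivity * prior / p_positive

print(round(posterior(prior=0.01, sensitivity=0.99, specificity=0.95), 4))  # 0.1667
```

Despite the accurate test, the posterior probability of disease given a positive result is only about 17% — a standard illustration of why the prior matters.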
- Linear algebra
  - Vectors and matrices
  - Matrix multiplication
  - Inverse matrices
  - Independence, basis, and dimension
  - The four fundamental spaces
  - Orthogonal vectors
  - Eigenvalues and eigenvectors
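The eigenvalue topic above is easy to make concrete with NumPy; the matrix below is an arbitrary illustrative choice:

```python
import numpy as np

# A small symmetric matrix (symmetric matrices have real eigenvalues
# and orthogonal eigenvectors).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is the solver for symmetric/Hermitian matrices;
# it returns eigenvalues in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)  # [1. 3.]

# Verify the defining property A v = lambda v for each eigenpair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```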
- Optimisation theory
  - The optimisation problem
  - Optimisation in one dimension
  - Optimisation in multiple dimensions
  - Grid search
  - Gradient-based optimisation
  - Vector calculus
  - Quasi-Newton methods
  - Gradient descent (stochastic, batch)
  - Evolutionary optimisation
  - Optimisation in practice
    - Markov Decision Process (MDP)
    - DeepMind: target networks
    - Replay memory
    - Exploration versus exploitation
    - The market making example
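Gradient descent, listed among the topics above, can be sketched in a few lines; the quadratic objective, learning rate, and iteration count below are illustrative choices, not from the course:

```python
# Minimal batch gradient-descent sketch with a fixed learning rate, for
# f(x, y) = (x - 1)^2 + (y + 2)^2, whose unique minimum is at (1, -2).

def grad(x, y):
    """Gradient of f: (df/dx, df/dy)."""
    return 2.0 * (x - 1.0), 2.0 * (y + 2.0)

x, y = 0.0, 0.0          # starting point
learning_rate = 0.1
for _ in range(200):     # fixed iteration budget
    gx, gy = grad(x, y)
    x -= learning_rate * gx   # step against the gradient
    y -= learning_rate * gy

print(round(x, 3), round(y, 3))  # 1.0 -2.0
```

Each step shrinks the distance to the minimum by a constant factor here; stochastic variants replace the exact gradient with a noisy estimate computed on a subsample of the data.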
- Murray R. Spiegel, John Schiller, R. Alu Srinivasan. Schaum’s Outlines: Probability and Statistics, second edition. McGraw-Hill, 2000.
- John B. Fraleigh, Raymond A. Beauregard. Linear Algebra, third edition. Addison Wesley, 1995.
- Gerard Cornuejols, Reha Tütüncü. Optimization Methods in Finance. Cambridge University Press, 2007.
- Philip E. Gill, Walter Murray, Margaret H. Wright. Practical Optimization. Emerald Group Publishing Limited, 1982.