Electives

During terms 2 and 3 of their first year, students are required to take three elective courses from a selection provided by Oxford and Imperial.

Suggested courses are listed below, but you may be able to choose other courses running at the Oxford Mathematical Institute, the Oxford Department of Statistics, Imperial, or the Taught Course Centre, with your supervisor's agreement.

List of elective courses (Term 2: Jan-March 2024)

Oxford Mathematical Institute


Course Overview: 

This course will serve as an introduction to optimal transportation theory, its application in the analysis of PDE, and its connections to the macroscopic description of interacting particle systems.

Learning Outcomes: 

Familiarity with the Monge-Kantorovich problem and transport distances; derivation of macroscopic models via the mean-field limit and their analysis based on contractivity of transport distances; the dynamic interpretation and geodesic convexity; a brief introduction to gradient flows and examples.
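For orientation, the central objects can be written down compactly. The Monge problem, its Kantorovich relaxation, and the induced transport distance read (standard statements, in the notation of the reading list below):

    \[
    \text{(Monge)}\quad \inf_{T:\,T_{\#}\mu=\nu} \int c(x,T(x))\,\mathrm{d}\mu(x),
    \qquad
    \text{(Kantorovich)}\quad \inf_{\pi\in\Pi(\mu,\nu)} \int c(x,y)\,\mathrm{d}\pi(x,y),
    \]

where \Pi(\mu,\nu) denotes the set of couplings of \mu and \nu; with the cost c(x,y)=|x-y|^p, the infimum raised to the power 1/p defines the Wasserstein distance W_p(\mu,\nu).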

Course Synopsis: 
  1. Interacting Particle Systems & PDE (2 hours)
    • Granular Flow Models and McKean-Vlasov Equations.
    • Nonlinear Diffusion and Aggregation-Diffusion Equations.
  2. Optimal Transportation: The metric side (4 hours)
    • Functional Analysis tools: weak convergence of measures. Prokhorov’s Theorem. Direct Method of Calculus of Variations. (1 hour)
    • Monge Problem. Kantorovich Duality. (1.5 hours)
    • Transport distances between measures: properties. The real line. Probabilistic Interpretation: couplings. (1.5 hours)
  3. Mean Field Limit & Couplings (4 hours)
    • Dobrushin approach: derivation of the Aggregation Equation. (1.5 hours)
    • Sznitman Coupling Method for the McKean-Vlasov equation. (1.5 hours)
    • Boltzmann Equation for Maxwellian molecules: Tanaka Theorem. (1 hour)
  4. Gradient Flows: Aggregation-Diffusion Equations (6 hours)
    • Brenier’s Theorem and Dynamic Interpretation of optimal transport. Otto’s calculus. (2 hours)
    • McCann’s Displacement Convexity: Internal, Interaction and Confinement Energies. (2 hours)
    • Gradient Flow approach: Minimizing movements for the (McKean-)Vlasov equation. Properties of the variational scheme. Connection to mean-field limits. (2 hours)
Reading List: 
  1. F. Golse, On the Dynamics of Large Particle Systems in the Mean Field Limit, Lecture Notes in Applied Mathematics and Mechanics 3. Springer, 2016.
  2. L. C. Evans, Weak convergence methods for nonlinear partial differential equations. CBMS Regional Conference Series in Mathematics 74, AMS, 1990.
  3. F. Santambrogio, Optimal Transport for Applied Mathematicians: Calculus of Variations, PDEs, and Modeling, Progress in Nonlinear Differential Equations and Their Applications, Birkhauser 2015.
  4. C. Villani, Topics in Optimal Transportation, AMS Graduate Studies in Mathematics, 2003.

Please note that e-book versions of many books in the reading lists can be found on SOLO.

Further Reading: 
  1. L. Ambrosio, G. Savaré, Handbook of Differential Equations: Evolutionary Equations, Volume 3-1, 2007.
  2. C. Villani, Optimal Transport: Old and New, Springer, 2009.

More details: Course: C4.9 Optimal Transport & Partial Differential Equations (2023-24) | Mathematical Institute (ox.ac.uk)

 

General Prerequisites: Basic linear algebra (such as eigenvalues and eigenvectors of real matrices), multivariate real analysis (such as norms, inner products, multivariate linear and quadratic functions, basis) and multivariable calculus (such as Taylor expansions, multivariate differentiation, gradients).

Course Overview: The solution of optimal decision-making and engineering design problems in which the objective and constraints are nonlinear functions of potentially (very) many variables is required on an everyday basis in the commercial and academic worlds. A closely-related subject is the solution of nonlinear systems of equations, also referred to as least-squares or data fitting problems that occur in almost every instance where observations or measurements are available for modelling a continuous process or phenomenon, such as in weather forecasting. The mathematical analysis of such optimization problems and of classical and modern methods for their solution are fundamental for understanding existing software and for developing new techniques for practical optimization problems at hand.
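As a flavour of the least-squares theme (an illustrative sketch only, not course material; the model and data below are invented), a basic Gauss-Newton iteration for fitting y ≈ a·exp(b·t):

    import numpy as np

    def gauss_newton(t, y, a=1.0, b=0.0, iters=20):
        # Fit y ~ a * exp(b * t) by repeatedly solving the linearised
        # least-squares problem for the residuals r(a, b) = y - a * exp(b * t).
        x = np.array([a, b], dtype=float)
        for _ in range(iters):
            a, b = x
            r = y - a * np.exp(b * t)                     # residuals
            J = -np.column_stack([np.exp(b * t),          # dr/da
                                  a * t * np.exp(b * t)]) # dr/db
            step, *_ = np.linalg.lstsq(J, r, rcond=None)  # Gauss-Newton step
            x = x - step
        return x

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 50)
    y = 2.0 * np.exp(1.5 * t) + 0.05 * rng.standard_normal(t.size)
    print(gauss_newton(t, y))  # approximately [2.0, 1.5]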

More details: Course: C6.2 Continuous Optimisation (2023-24) | Mathematical Institute (ox.ac.uk)

Learning Outcomes:

Students will learn how some of the various different ensembles of random matrices are defined. They will encounter some examples of the applications these have in Data Science, modelling Complex Quantum Systems, Mathematical Finance, Network Models, Numerical Linear Algebra, and Population Dynamics. They will learn how to analyse eigenvalue statistics, and see connections with other areas of mathematics and physics, including combinatorics, number theory, and statistical mechanics.

Course Synopsis: 

Introduction to matrix ensembles, including Wigner and Wishart random matrices, and the Gaussian and Circular Ensembles. Overview of connections with Data Science, Complex Quantum Systems, Mathematical Finance, Network Models, Numerical Linear Algebra, and Population Dynamics. (1 lecture)

Statement and proof of Wigner’s Semicircle Law; statement of Girko’s Circular Law; applications to Population Dynamics (May’s model). (3 lectures)
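The semicircle law is easy to observe numerically (a quick illustrative sketch, not course code):

    import numpy as np

    # Sample a GOE-type Wigner matrix, scaled so the spectrum fills [-2, 2],
    # and compare the empirical eigenvalue density with the semicircle law.
    n = 2000
    rng = np.random.default_rng(1)
    A = rng.standard_normal((n, n))
    H = (A + A.T) / np.sqrt(2 * n)
    eigs = np.linalg.eigvalsh(H)

    hist, edges = np.histogram(eigs, bins=50, range=(-2.2, 2.2), density=True)
    x = 0.5 * (edges[:-1] + edges[1:])
    semicircle = np.sqrt(np.maximum(4 - x**2, 0)) / (2 * np.pi)
    # For large n, hist should be close to semicircle.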

Statement and proof of the Marchenko-Pastur Law using the Stieltjes and R-transforms; applications to Data Science and Mathematical Finance. (3 lectures)

Derivation of the Joint Eigenvalue Probability Density for the Gaussian and Circular Ensembles; method of orthogonal polynomials; applications to eigenvalue statistics in the large-matrix limit; behaviour in the bulk and at the edge of the spectrum; universality; applications to Numerical Linear Algebra and Complex Quantum Systems. (5 lectures)

Dyson Brownian Motion (2 lectures)

Connections to other problems in mathematics, including the longest increasing subsequence problem; distribution of zeros of the Riemann zeta-function; topological genus expansions. (2 lectures)

Reading List: 
  1. ML Mehta, Random Matrices (Elsevier, Pure and Applied Mathematics Series)
  2. GW Anderson, A Guionnet, O Zeitouni, An Introduction to Random Matrices (Cambridge Studies in Advanced Mathematics)
  3. ES Meckes, The Random Matrix Theory of the Classical Compact Groups (Cambridge University Press)
  4. G. Akemann, J. Baik & P. Di Francesco, The Oxford Handbook of Random Matrix Theory (Oxford University Press)
  5. G. Livan, M. Novaes & P. Vivo, Introduction to Random Matrices (Springer Briefs in Mathematical Physics)

Please note that e-book versions of many books in the reading lists can be found on SOLO.

Further Reading: 
  1. T. Tao, Topics in Random Matrix Theory (AMS Graduate Studies in Mathematics)

More details: Course: C7.7 Random Matrix Theory (2023-24) | Mathematical Institute (ox.ac.uk)

General Prerequisites: Integration and measure theory, martingales in discrete and continuous time, stochastic calculus. Functional analysis is useful but not essential.

Course Overview: Stochastic analysis and partial differential equations are intricately connected. This is exemplified by the celebrated deep connections between Brownian motion and the classical heat equation, but this is only a very special case of a general phenomenon. We explore some of these connections, illustrating the benefits to both analysis and probability.
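The prototype of this connection can be stated in one line (a standard fact, recalled here for orientation): for bounded continuous f, the function

    \[
    u(t,x) = \mathbb{E}\big[f(x + B_t)\big]
    \]

solves the heat equation \partial_t u = \tfrac12 \Delta u with u(0,\cdot) = f; the Feynman-Kac formula generalises this representation to a much wider class of parabolic equations and diffusions.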

Course Synopsis: Feller processes and semigroups. Resolvents and generators. Hille-Yosida Theorem (without proof). Diffusions and elliptic operators, convergence and approximation. Stochastic differential equations and martingale problems. Duality. Speed and scale for one dimensional diffusions. Green's functions as occupation densities. The Dirichlet and Poisson problems. Feynman-Kac formula.

More details: Course: C8.2 Stochastic Analysis and PDEs (2023-24) | Mathematical Institute (ox.ac.uk)

General Prerequisites: Part B Graph Theory and Part A Probability. C8.3 Combinatorics is not an essential prerequisite for this course, though it is a natural companion for it.

Course Overview: Probabilistic combinatorics is a very active field of mathematics, with connections to other areas such as computer science and statistical physics. Probabilistic methods are essential for the study of random discrete structures and for the analysis of algorithms, but they can also provide a powerful and beautiful approach for answering deterministic questions. The aim of this course is to introduce some fundamental probabilistic tools and present a few applications.

Course Synopsis: First-moment method, with applications to Ramsey numbers, and to graphs of high girth and high chromatic number. Second-moment method, threshold functions for random graphs. Lovász Local Lemma, with applications to two-colourings of hypergraphs, and to Ramsey numbers. Chernoff bounds, concentration of measure, Janson's inequality. Branching processes and the phase transition in random graphs. Clique and chromatic numbers of random graphs.
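As a taste of the first-moment method (a standard worked example): colour the edges of K_n with two colours uniformly at random. The expected number of monochromatic copies of K_k is

    \[
    \binom{n}{k}\, 2^{1-\binom{k}{2}},
    \]

so whenever this quantity is less than 1 some colouring has no monochromatic K_k, which gives the classical lower bound R(k,k) > 2^{k/2} for the Ramsey numbers, for k ≥ 3.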

More details: Course: C8.4 Probabilistic Combinatorics (2023-24) | Mathematical Institute (ox.ac.uk)

Course Overview:

The convergence theory of probability distributions on path space is an essential part of modern probability and stochastic analysis, allowing the development of diffusion approximations and the study of scaling limits in many settings. The theory of large deviations is an important aspect of limit theory in probability, as it describes the probabilities of rare events. The emphasis of the course will be on developing the tools needed to prove various limit results and to analyse large deviations; these tools have universal value. These topics are fundamental within probability and stochastic analysis and have extensive applications in current research on random systems, statistical mechanics, functional analysis, PDEs, quantum mechanics, quantitative finance and other areas.

Learning Outcomes:

The students will understand the notions of convergence of probability laws and the tools for proving associated limit theorems. They will have developed the basic techniques for establishing large deviation principles and be able to analyse some fundamental examples.

Course Synopsis:

1) (2 lectures) We will recall metric spaces, and introduce Polish spaces and probability measures on metric spaces. Weak convergence of probability measures and tightness, Prokhorov's theorem on tightness of probability measures, Skorokhod's representation theorem for weak convergence.
2) (2 lectures) The criterion of pre-compactness for distributions on continuous path spaces, martingales and compactness.
3) (4 lectures) Skorokhod's topology and metric on the space D[0,∞) of right-continuous paths with left limits, basic properties such as completeness and separability, weak convergence and pre-compactness of distributions on D[0,∞). D. Aldous' pre-compactness criterion via stopping times.
4) (4 lectures) First examples - Cramér's theorem for finite dimensional distributions, Sanov's theorem. Schilder's theorem for the large deviation principle for Brownian motion in small time, law of the iterated logarithm for Brownian motion.
5) (4 lectures) General tools in large deviations. Rate functions, good rate functions, large deviation principles, weak large deviation principles and exponential tightness. Varadhan's contraction principle, functional limit theorems.
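For orientation, the prototype result in item 4 reads as follows (standard statement, under suitable finiteness assumptions on the moment generating function): if X_1, X_2, ... are i.i.d. real random variables and \bar S_n = (X_1 + \dots + X_n)/n, then for a > \mathbb{E}X_1,

    \[
    \frac{1}{n}\log \mathbb{P}\big(\bar S_n \ge a\big) \longrightarrow -I(a),
    \qquad
    I(a) = \sup_{\lambda\in\mathbb{R}}\big(\lambda a - \log \mathbb{E}\,e^{\lambda X_1}\big),
    \]

so the rate function is the Legendre transform of the logarithmic moment generating function.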

Department of Statistics, University of Oxford


This course runs Oct-Dec, but it may be possible to follow it via pre-recorded videos and to prepare an assessment.

Aims and Objectives: Many data come in the form of networks, for example friendship data and protein-protein interaction data. As the data usually cannot be modelled using simple independence assumptions, their statistical analysis provides many challenges. The course will give an introduction to the main problems and the main statistical techniques used in this field. The techniques are applicable to a wide range of complex problems. The statistical analysis benefits from insights which stem from probabilistic modelling, and the course will combine both aspects.

Synopsis:

Exploratory analysis of networks. The need for network summaries. Degree distribution, clustering coefficient, shortest path length. Motifs.

Probabilistic models: Bernoulli random graphs, geometric random graphs, preferential attachment models, small world networks, inhomogeneous random graphs, exponential random graphs.

Small subgraphs: Stein’s method for normal and Poisson approximation. Branching process approximations, threshold behaviour, shortest path between two vertices.

Statistical analysis of networks: Sampling from networks. Parameter estimation for models. Inference from networks: vertex characteristics and missing edges. Nonparametric graph comparison: subgraph counts, subsampling schemes, MCMC methods. A brief look at community detection.
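The exploratory summaries above are quick to compute in practice (an illustrative sketch, not course material; networkx is one convenient choice of library):

    import networkx as nx

    # Empirical summaries for a Bernoulli (Erdos-Renyi) random graph.
    G = nx.erdos_renyi_graph(n=500, p=0.02, seed=0)

    degrees = [d for _, d in G.degree()]        # degree distribution
    clustering = nx.average_clustering(G)       # clustering coefficient
    # Shortest path length is computed on the largest connected component.
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    avg_path = nx.average_shortest_path_length(giant)
    print(clustering, avg_path)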


Reading: R. Durrett, Random Graph Dynamics, Cambridge University Press, 2007.

E. D. Kolaczyk and G. Csárdi, Statistical Analysis of Network Data with R, Springer, 2014.

M. Newman, Networks: An Introduction, Oxford University Press, 2010.

Recommended Prerequisites: The course requires a good level of mathematical maturity. Students are expected to be familiar with core concepts in statistics (regression models, bias-variance tradeoff, Bayesian inference), probability (multivariate distributions, conditioning) and linear algebra (matrix-vector operations, eigenvalues and eigenvectors). Previous exposure to machine learning (empirical risk minimisation, dimensionality reduction, overfitting, regularisation) is highly recommended. Students would also benefit from being familiar with the material covered in the following courses offered in the Statistics department: SB2.1 (formerly SB2a) Foundations of Statistical Inference and SB2.2 (formerly SB2b) Statistical Machine Learning.

Aims and Objectives: Machine learning is widely used across many scientific and engineering disciplines, to construct methods to find interesting patterns and to predict accurately in large datasets. This course introduces several widely used machine learning techniques and describes their underpinning statistical principles and properties. The course studies both unsupervised and supervised learning and several advanced topics are covered in detail, including some state-of-the-art machine learning techniques. The course will also cover computational considerations of machine learning algorithms and how they can scale to large datasets.

 

More details: SC4 Advanced Topics in Statistical Machine Learning

Synopsis: Empirical risk minimisation. Loss functions. Generalization. Over- and under-fitting. Bias and variance. Regularisation.

Support vector machines.

Kernel methods and reproducing kernel Hilbert spaces. Representer theorem. Representation of probabilities in RKHS.

Deep learning: Neural networks. Computation graphs. Automatic differentiation. Stochastic gradient descent.

Probabilistic and Bayesian machine learning: Fundamentals of the Bayesian approach. EM algorithm. Variational inference. Latent variable models.

Deep generative models. Variational auto-encoders.

Gaussian processes. Bayesian optimisation.
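As a small illustration of how the empirical-risk-minimisation, regularisation and kernel themes fit together (a sketch on synthetic data, not course code): kernel ridge regression, whose solution has the finite expansion guaranteed by the representer theorem.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(100, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

    def rbf(A, B, gamma=0.5):
        # Gaussian (RBF) kernel matrix between rows of A and rows of B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    # Regularised empirical risk minimisation in the RKHS reduces, by the
    # representer theorem, to a linear solve for the coefficients alpha.
    lam = 1e-3
    K = rbf(X, X)
    alpha = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)

    X_test = np.linspace(-3, 3, 200)[:, None]
    y_pred = rbf(X_test, X) @ alpha  # f(x) = sum_i alpha_i k(x_i, x)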

Software: Knowledge of Python is not required for this course, but some examples may be done in Python. Students interested in learning Python are referred to the following free University IT online course, which should ideally be taken before the beginning of this course: https://skills.it.ox.ac.uk/whats-on#/course/LY046

Reading: C. Bishop, Pattern Recognition and Machine Learning, Springer, 2007

K. Murphy, Machine Learning: a Probabilistic Perspective, MIT Press, 2012

Further Reading: T. Hastie, R. Tibshirani, J. Friedman, The Elements of Statistical Learning, Springer, 2009

Scikit-learn: Machine Learning in Python, Pedregosa et al., JMLR 12, pp. 2825-2830, 2011, http://scikit-learn.org/stable/tutorial/

Imperial College London


Brief Description

Machine learning techniques such as deep learning have recently achieved remarkable results in a wide variety of applications, such as image recognition, self-driving vehicles, partial differential equation solvers, and trading strategies. However, how and why recent (deep learning) models work is often still not fully understood. In this course we will begin with a general introduction to machine learning and continue to deep learning. We will focus on better understanding some observed phenomena in deep learning, aiming to gain insight into the impact of the optimisation algorithms and network architecture through mathematical tools. This module will be 100% coursework.

 

Brief Description
This is a course on the theory and applications of random dynamical systems and ergodic theory.
Random dynamical systems are deterministic dynamical systems driven by a random input. The goal will be to present a solid introduction to the subject and then to touch upon several more advanced developments in this field.

Learning Outcomes

On successful completion of this module, you will be able to:
- describe the fundamental concepts of random dynamical systems;
- summarize the ergodic theory of random dynamical systems;
- select and critically appraise relevant research papers and chapters of research monographs;
- combine the ideas contained in such papers to provide a written overview of the current state of affairs concerning a particular aspect of random dynamical systems theory;
- thoughtfully engage orally in discussions related to random dynamical systems.

Module Content
Introductory lectures include foundational material on:
- Invariant measures and ergodic theory
- Random (pullback) attractors
- Lyapunov exponents
- Random circle homeomorphisms
Further material is at a more advanced level, touching upon current frontline research. Students select material from research level articles or book chapters.
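As a taste of the Lyapunov-exponent topic (an illustrative sketch only; the random 2x2 matrix model below is invented for the example):

    import numpy as np

    # Estimate the top Lyapunov exponent of a product of i.i.d. random
    # matrices by tracking the growth rate of a vector, renormalising
    # at every step to avoid overflow.
    rng = np.random.default_rng(0)
    v = np.array([1.0, 0.0])
    log_growth, n = 0.0, 100_000
    for _ in range(n):
        M = rng.standard_normal((2, 2))  # the random input at each step
        v = M @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm
    print(log_growth / n)  # approximates the top Lyapunov exponent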

The module covers both the theoretical underpinnings of convex optimisation and its applications to important problems in mathematical finance.

A brief outline of the course reads as follows:

• Fundamental properties of convex sets and convex functions

• The basics of convex optimisation with special emphasis on duality theory

• Markowitz portfolio theory and the CAPM model

• Expected utility maximisation and no arbitrage

• Convexity in continuous time hedging
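As a concrete instance of the second and third bullets (the classical Markowitz mean-variance problem, stated in its standard form): choose portfolio weights w for assets with mean returns \mu and covariance \Sigma by solving the convex program

    \[
    \min_{w\in\mathbb{R}^n}\; w^{\top}\Sigma w
    \quad\text{subject to}\quad
    \mu^{\top}w = r,\qquad \mathbf{1}^{\top}w = 1,
    \]

whose optimality and duality conditions are exactly the kind of objects studied in the duality part of the module.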

The increase in computer power over the last decades has given rise to prices being quoted and stocks being traded at an ever-increasing pace. Since humans are not able to place orders at this speed, algorithms have replaced classical traders to optimise portfolios and investments. In this module, we will study the specificities of this market; in particular, we shall develop the mathematical tools required to build such algorithms in this high-frequency framework. The module will start with a short review of stochastic optimal control, which forms the mathematical background. We shall then move on to study optimal execution, namely how and when to place buy/sell orders in this market, both assuming continuous trading and in the context of limit and market orders. The last part of the module will be dedicated to the concept of market making and statistical arbitrage in high-frequency settings.
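A canonical example of the control problems studied here is an Almgren-Chriss-type execution problem (given purely as an illustration; the module's own notation may differ): liquidate an inventory q_t at trading rate \nu_t by solving

    \[
    \min_{\nu}\; \mathbb{E}\Big[\int_0^T \big(\kappa\,\nu_t^2 + \phi\, q_t^2\big)\,\mathrm{d}t + \alpha\, q_T^2\Big],
    \qquad \mathrm{d}q_t = -\nu_t\,\mathrm{d}t,
    \]

where \kappa penalises temporary price impact, \phi penalises holding inventory, and the terminal term enforces (approximate) liquidation by time T.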

 

Brief Description

This module will introduce a variety of computational approaches for solving partial differential equations, focusing mostly on finite difference methods, but also touching on finite volume and spectral methods. Students will gain experience implementing the methods and writing/modifying short programs in Matlab or another programming language of their choice. Applications will be drawn from problems arising in areas such as Mathematical Biology and Fluid Dynamics.
Learning Outcomes
On successful completion of this module, you will be able to:
- appreciate the physical and mathematical differences between different types of PDEs;
- design suitable finite difference methods to solve each type of PDE;
- outline a theoretical approach to testing the stability of a given algorithm;
- determine the order of convergence of a given algorithm;
- demonstrate familiarity with the implementation and rationale of multigrid methods;
- develop finite-difference based software for use on research level problems;
- communicate your research findings as a poster, in a form suitable for presentation at a scientific conference.
Module Content
The module will cover the following topics:
1) Introduction to Finite Differences
2) Classification of PDEs
3) Explicit and Implicit methods for Parabolic PDEs
4) Iterative Methods for Elliptic PDEs. Jacobi, Gauss-Seidel, Overrelaxation
5) Multigrid Methods
6) Hyperbolic PDEs. Nonlinear Advection/Diffusion systems. Waves and PMLs, as well as various advanced practical topics from Fluid Dynamics, depending on the final project.
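As a minimal illustration of topic 3 (a sketch in Python rather than Matlab; not the module's own code): the explicit FTCS scheme for the 1D heat equation, which is stable only when dt/dx^2 <= 1/2.

    import numpy as np

    n_x = 51
    dx = 1.0 / (n_x - 1)
    r = 0.4                      # r = dt/dx^2, within the stability limit
    dt = r * dx**2
    nt = 1000

    x = np.linspace(0.0, 1.0, n_x)
    u = np.sin(np.pi * x)        # initial condition, u = 0 at both boundaries

    for _ in range(nt):
        # u_t = u_xx discretised with forward time, centred space.
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])

    u_exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * nt * dt)
    print(np.max(np.abs(u - u_exact)))  # small discretisation error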

Brief Description
The main aim of this module is to understand geodesics and curvature and the relationship between them. Using these ideas we will show how local geometric conditions can lead to global topological constraints.
Learning Outcomes
On successful completion of this module, you will be able to:
- Understand the relevant structures required to make sense of differential topological notions, such as derivatives of smooth functions, and geometric notions, such as lengths and angles, on an abstract manifold.
- Define the Lie derivative and covariant derivative of a tensor field.
- Define geodesics and understand their length minimising properties.
- Define and interpret various measures of the curvature of a Riemannian manifold.
- Understand the effect of curvature on neighbouring geodesics.
- Prove the celebrated classical theorems of Bonnet--Myers and Cartan--Hadamard.

Module Content
An indicative list of topics is:
Topological and smooth manifolds, tangent and cotangent spaces, vector bundles, tensor bundles, Lie bracket, Lie derivative, Riemannian metrics, affine connections, the Levi-Civita connection, parallel transport, geodesics, Riemannian distance, the exponential map, completeness and the Hopf--Rinow Theorem, Riemann and Ricci curvature tensors, scalar curvature, sectional curvatures, submanifolds, the second fundamental form and the Gauss equation, Jacobi fields and the second variation of geodesics, the Bonnet--Myers and Cartan--Hadamard Theorems.

Prerequisites:

Geometry of Curves and Surfaces and Manifolds

Finite element methods form a flexible class of techniques for the numerical solution of PDEs that are both accurate and efficient. The finite element method is a core mathematical technique underpinning much of the development of simulation science. Applications are as diverse as the structural mechanics of buildings, weather forecasting, and the pricing of financial instruments. Finite element methods have a powerful mathematical abstraction based on the language of function spaces, inner products, norms and operators.

Learning Outcomes

On successful completion of this module, you will be able to:

  • appreciate the core mathematical principles of the finite element method;
  • employ the finite element method to formulate and analyse numerical solutions to linear elliptic PDEs;
  • implement the finite element method on a computer;
  • compare the application of various software engineering techniques to numerical mathematics;
  • generalize the concept of a directional derivative;
  • appraise and evaluate techniques for solving nonlinear PDEs using the finite element method.

Module Content

This module aims to develop a deep understanding of the finite element method by spanning both its analysis and implementation. In the analysis part of the module, students will employ the mathematical abstractions of the finite element method to analyse the existence, stability and accuracy of numerical solutions to PDEs. At the same time, in the implementation part of the module students will combine these abstractions with modern software engineering tools to create and understand a computer implementation of the finite element method.

This module is composed of the following sections:

  1. Basic concepts: weak formulation of boundary value problems, Ritz-Galerkin approximation, error estimates, piecewise polynomial spaces, local estimates;
  2. Efficient construction of finite element spaces in one dimension: 1D quadrature, global assembly of mass matrix and Laplace matrix;
  3. Construction of a finite element space: Ciarlet’s finite element, various element types, finite element interpolants;
  4. Construction of local bases for finite elements: efficient local assembly;
  5. Sobolev Spaces: generalised derivatives, Sobolev norms and spaces, Sobolev’s inequality;
  6.  Numerical quadrature on simplices: employing the pullback to integrate on a reference element;
  7. Variational formulation of elliptic boundary value problems: Riesz representation theorem, symmetric and nonsymmetric variational problems, Lax-Milgram theorem, finite element approximation estimates;
  8. Computational meshes: meshes as graphs of topological entities, discrete function spaces on meshes, local and global numbering;
  9. Global assembly for Poisson equation: implementation of boundary conditions, general approach for nonlinear elliptic PDEs;
  10. Variational problems: Poisson’s equation, variational approximation of Poisson’s equation, elliptic regularity estimates, general second-order elliptic operators and their variational approximation;
  11. Residual form and the Gâteaux derivative;
  12. Newton solvers and convergence criteria.
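The analysis and implementation strands meet already in the simplest setting (a minimal sketch, not the module's codebase): piecewise-linear elements for -u'' = f on (0, 1) with homogeneous Dirichlet conditions.

    import numpy as np

    n = 32                       # number of elements on a uniform mesh
    h = 1.0 / n
    A = np.zeros((n + 1, n + 1)) # stiffness matrix from a(u, v) = int u'v' dx
    b = np.zeros(n + 1)          # load vector for f = 1

    for e in range(n):           # element-by-element local assembly
        i, j = e, e + 1
        A[np.ix_([i, j], [i, j])] += (1.0 / h) * np.array([[1.0, -1.0],
                                                           [-1.0, 1.0]])
        b[[i, j]] += h / 2.0     # integral of f times each hat function

    interior = slice(1, n)       # impose u(0) = u(1) = 0
    u = np.zeros(n + 1)
    u[interior] = np.linalg.solve(A[interior, interior], b[interior])
    # For f = 1 the exact solution is x(1 - x)/2, matched at the nodes.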


The course is assessed 50% by examination and 50% by coursework (implementation exercise in Python).

The module introduces the latest advances in machine learning. We start with reinforcement learning and demonstrate how it can be combined with neural networks in deep reinforcement learning, which has achieved spectacular results in recent years, such as outplaying the human champion at Go. We also demonstrate how advanced neural networks and tree-based methods, such as decision trees and random forests, can be used for forecasting financial time series and generating alpha. We explain how these advances are related to Bayesian methods, such as particle filtering and Markov chain Monte Carlo. We apply these methods to set up a profitable algorithmic trading venture in cryptocurrencies using Python and kdb+/q (a top technology for electronic trading) along the way.
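The starting point of the reinforcement-learning part can be summarised by the tabular Q-learning update (standard formula, included for orientation):

    \[
    Q(s_t, a_t) \leftarrow Q(s_t, a_t)
    + \alpha\Big(r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t)\Big),
    \]

and deep reinforcement learning replaces the table Q by a neural network trained on the same temporal-difference target.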

Brief Description

This module provides a hands-on introduction to the methods of modern data science. Through interactive lectures, the student will be introduced to data visualisation and analysis as well as the fundamentals of machine learning.

Learning Outcomes

On successful completion of this module, you will be able to:

  • Visualise and explore data using computational tools;
  • Appreciate the fundamental concepts and challenges of learning from data;
  • Analyse some commonly used learning methods;
  • Compare learning methods and determine suitability for a given problem;
  • Describe the principles and differences between supervised and unsupervised learning;
  • Clearly and succinctly communicate the results of a data analysis or learning application;
  • Appraise and evaluate new algorithms and computational methods presented in scientific and mathematical journals;
  • Design and implement newly-developed algorithms and methods.

Module Content

The module is composed of the following sections: 

  1. Introduction to computational tools for data analysis and visualisation;
  2. Introduction to exploratory data analysis;
  3. Mathematical challenges in learning from data: optimisation;
  4. Methods in Machine Learning: supervised and unsupervised; neural networks and deep learning; graph-based data learning;
  5. Machine learning in practice: application of commonly used methods to data science problems. Methods include: regressions, k-nearest neighbours, random forests, support vector machines, neural networks, principal component analysis, k-means, spectral clustering, manifold learning, network statistics, community detection;
  6. Current research questions in data analysis and machine learning and associated numerical methods.
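Two of the methods in item 5 combine in a few lines (an illustrative sketch on synthetic data, not course material):

    from sklearn.datasets import make_blobs
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Reduce 10-dimensional synthetic data to 2 principal components,
    # then cluster the projected points with k-means.
    X, _ = make_blobs(n_samples=300, centers=3, n_features=10, random_state=0)
    X2 = PCA(n_components=2).fit_transform(X)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)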

Term 2

To deal with valuation, hedging and risk management of financial options, we briefly introduce stochastic differential equations using a Riemann-Stieltjes approach to stochastic integration. We introduce no-arbitrage theory in continuous time based on replicating portfolios, self-financing conditions and Ito's formula. We derive prices as risk-neutral expectations. We derive the Black-Scholes model and introduce volatility smile models. We illustrate the valuation of different options and introduce risk measures like Value at Risk and Expected Shortfall, motivating them with the financial crises.
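For orientation, the two central formulas just mentioned are (standard statements): the risk-neutral pricing identity and the Black-Scholes call price,

    \[
    V_0 = e^{-rT}\,\mathbb{E}^{\mathbb{Q}}[\text{payoff}],
    \qquad
    C_0 = S_0\,\Phi(d_1) - K e^{-rT}\,\Phi(d_2),
    \qquad
    d_{1,2} = \frac{\ln(S_0/K) + (r \pm \sigma^2/2)\,T}{\sigma\sqrt{T}},
    \]

where \Phi is the standard normal distribution function.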

Learning Outcomes

 

On successful completion of this module you will be able to

  • work comfortably with stochastic differential equations commonly encountered in finance;
  • explain what is meant by no-arbitrage markets and why no-arbitrage is important operationally;
  • connect no-arbitrage by replication to the existence of a risk neutral measure;
  • price and hedge several types of financial options with several SDE models;
  • calculate risk measures such as Value at Risk and Expected Shortfall;
  • write code to price options according to SDE models covered in the module;
  • independently appraise and evaluate SDE models for financial products;
  • adapt a range of numerical methods and apply them in a coherent manner to unfamiliar and open problems in finance.

Module Content

  1. Recap of key tools from probability theory
  2. Brownian motion
  3. Ito and Stratonovich stochastic integrals
  4. Ito and Stratonovich stochastic differential equations (SDEs)
  5. No-arbitrage through replication
  6. No-arbitrage through the risk neutral measure
  7. Derivation of the Black-Scholes formula
  8. Introduction of a few volatility smile models
  9. Pricing of several types of options
  10. Introduction to crises and risk measures
  11. The Barings collapse and the introduction of Value at Risk (VaR)
  12. Problems of VaR and an alternative: Expected Shortfall (ES)
  13. Numerical examples and problems with risk measures, including software code.

Building AI systems capable of extracting information from complex streams of data and reliably performing inference is an important challenge in many areas of data science: for example, summarising patients’ health records to evaluate the efficacy of a treatment, or extracting information from the stock market to design successful trading strategies.

Dealing with streamed data involves numerous challenges including: irregular sampling and missing data; multimodality (i.e. data from different sources); multiple time-scales; randomness and noise; high dimensionality/number of channels.

Classical time series analysis techniques, such as Fourier or wavelet methods, may provide efficient bottom-up summaries of univariate time series, but because they treat each channel independently, they are not well designed to capture non-linear interactions between different channels of a multivariate stream.

This is where rough path theory, a modern mathematical framework describing complex evolving systems, is incredibly useful. The signature, a centrepiece of the theory, provides a top-down description of a stream, capturing crucial information over an interval of time, such as the order of events happening across different channels, and filtering out superfluous information. Thanks to its numerous analytic and algebraic properties, the signature is an ideal “feature map” for streamed data that can be used to enhance traditional machine learning models for time series prediction, classification and generation.
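Concretely (standard definition, stated here for orientation): for a sufficiently regular path X : [0,T] \to \mathbb{R}^d, the signature is the sequence of iterated integrals

    \[
    S(X) = \Big(1,\;
    \Big\{\textstyle\int_0^T \mathrm{d}X^{i}_{t}\Big\}_{i},\;
    \Big\{\textstyle\int_{0<t_1<t_2<T} \mathrm{d}X^{i}_{t_1}\,\mathrm{d}X^{j}_{t_2}\Big\}_{i,j},\;
    \dots\Big),
    \]

whose higher levels record, among other things, the order in which changes occur across different channels.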

The theoretical footprint of rough path theory has been substantial in the study of random phenomena, notably through its presence in Martin Hairer’s Fields medal-winning work on regularity structures, which develops a rigorous framework to solve certain ill-posed stochastic PDEs.

Over the last decade, rough path theory has seen rapidly increasing interest from the data science and machine learning communities due to its potency and relevance for describing, predicting, summarising and generating complex streams of data. This course will provide an in-depth survey of the field.

Prerequisites in stochastic calculus and in the theory and numerical analysis of ODEs and SDEs are assumed.

Brief Description

The goal of this module is to complement the Core module on Simulation Methods by investigating other techniques that are widespread in the financial industry. We shall investigate two popular techniques, namely PDE methods and Fourier methods.

For each approach, we will start with a theoretical framework, explaining how an option pricing problem can be turned into a dynamic programming problem, a PDE or a Fourier integration. We shall then focus on the numerical methods to solve these problems. Practical implementations on real models/data will be emphasised.
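Schematically, the two routes take the following forms (standard, given here only for orientation). Under Black-Scholes-type dynamics the option price V(t, S) solves the pricing PDE

    \[
    \partial_t V + \tfrac12\sigma^2 S^2\,\partial_{SS} V + r S\,\partial_S V - rV = 0,
    \]

while the Fourier route expresses prices as discounted expectations computed through the characteristic function \varphi of the log-price, for instance via the inversion formula p(x) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{-iux}\varphi(u)\,\mathrm{d}u for the risk-neutral density.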

Brief Description

The goal of the module is to develop a thorough understanding of how prices form, how information is aggregated, and how trades occur in financial markets. The main market types will be described, as well as traders’ main motives for trading. Market manipulation and high-frequency trading strategies have received a lot of attention in the press recently, so the module will illustrate them and examine recent developments in regulations that aim to limit them. Liquidity is a key theme in market microstructure, and the students will learn how to measure it and to recognise the recent increase in liquidity fragmentation and hidden, “dark” liquidity. The Flash Crash of 6 May 2010 will be analysed as a case study of a sudden loss of liquidity.

Brief Description

Many problems in mathematical finance (and in other areas) are essentially optimisation problems subject to random perturbations, in which controls are chosen to optimise a performance criterion. The goal of this module is to bring the main concepts and techniques from dynamic stochastic optimisation and stochastic control theory to the realm of quantitative finance. It will therefore naturally start with a theoretical part focusing on the required elements of stochastic analysis, and with motivation through several examples of control problems in finance. We will then turn to the classical PDE approach of dynamic programming, including controlled diffusion processes, the dynamic programming principle, the Hamilton-Jacobi-Bellman equation and its verification theorem. We will finally see how to derive and solve dynamic programming equations for various financial problems such as the Merton portfolio problem, pricing under transaction costs, super-replication with portfolio constraints, and target reachability problems.
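Schematically (standard form, for orientation): for a controlled diffusion \mathrm{d}X_t = b(X_t,\alpha_t)\,\mathrm{d}t + \sigma(X_t,\alpha_t)\,\mathrm{d}W_t, the dynamic programming principle leads to the Hamilton-Jacobi-Bellman equation for the value function v,

    \[
    \partial_t v + \sup_{a}\Big\{ b(x,a)\cdot\nabla v
    + \tfrac12\,\mathrm{tr}\big(\sigma\sigma^{\top}(x,a)\,D^2 v\big) + f(x,a)\Big\} = 0,
    \qquad v(T,x) = g(x),
    \]

with f a running reward and g a terminal reward; the verification theorem then identifies smooth solutions with the value function.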

The module offers a bespoke introduction to the stochastic calculus required to cover the classical theoretical results of nonlinear filtering. The first part of the module will equip the students with the necessary knowledge (e.g., Ito calculus, stochastic integration by parts, Girsanov’s theorem) and skills (solving linear stochastic differential equations, analysing continuous martingales, etc.) to handle a variety of applications. The focus will be on the use of stochastic calculus in the theory and numerical solution of nonlinear filtering.

Learning Outcomes

On successful completion of this module, you will be able to:

  • Understand the notion of Brownian motion and be able to show that a stochastic process is a Brownian motion;
  • Prove that a process is a martingale via Novikov's condition;
  • Solve linear SDEs;
  • Check whether an SDE is well-posed;
  • Understand the mathematical framework of nonlinear filtering;
  • Deduce the filtering equations;
  • Deduce the evolution equation of the mean and variance of the one-dimensional Kalman-Bucy filter;
  • Show that the innovation process is a Brownian motion;
  • Apply stochastic integration by parts.

Module Content

An indicative list of topics is:

  1. Martingales in Continuous Time (Doob-Meyer decomposition, L_p bounds, Brownian motion, exponential martingales, semi-martingales, local martingales, Novikov’s condition)
  2. Stochastic Calculus (Ito’s isometry, chain rule, integration by parts)
  3. Stochastic Differential Equations (well posedness, linear SDEs, the Ornstein-Uhlenbeck process, Girsanov's Theorem)
  4. Stochastic Filtering (definition, mathematical model for the signal process and the observation process)
  5. The Filtering Equations (well-posedness, the innovation process, the Kalman-Bucy filter)
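As a concrete instance of item 5 (the standard one-dimensional Kalman-Bucy filter): for the linear signal/observation pair

    \[
    \mathrm{d}X_t = a X_t\,\mathrm{d}t + \sigma\,\mathrm{d}V_t,
    \qquad
    \mathrm{d}Y_t = c X_t\,\mathrm{d}t + \mathrm{d}W_t,
    \]

the conditional mean \hat{x}_t and variance p_t satisfy

    \[
    \mathrm{d}\hat{x}_t = a\,\hat{x}_t\,\mathrm{d}t + p_t c\,\big(\mathrm{d}Y_t - c\,\hat{x}_t\,\mathrm{d}t\big),
    \qquad
    \dot{p}_t = 2a\,p_t + \sigma^2 - c^2 p_t^2,
    \]

where \mathrm{d}Y_t - c\,\hat{x}_t\,\mathrm{d}t is the innovation process.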

Prerequisites: Ordinary differential equations, partial differential equations, real analysis, probability theory.