Electives

During terms 2 and 3 of their first year, students are required to undertake three elective courses from a selection provided by Oxford and Imperial.

The suggested courses are listed below, but you may be able to choose other courses running at the Oxford Mathematical Institute, the Oxford Department of Statistics, Imperial or the Taught Course Centre, with the agreement of your supervisor.

List of elective courses (Term 2: Jan-March 2023)

Oxford Mathematical Institute

Course Overview: 

This course will serve as an introduction to optimal transportation theory, its application in the analysis of PDE, and its connections to the macroscopic description of interacting particle systems.

Learning Outcomes: 

Familiarity with the Monge-Kantorovich problem and transport distances; derivation of macroscopic models via the mean-field limit and their analysis based on contractivity of transport distances; dynamic interpretation and geodesic convexity; a brief introduction to gradient flows and examples.
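
For orientation, the central objects here can be stated compactly (a standard formulation, recalled here rather than taken from the official synopsis): the Kantorovich relaxation of the Monge problem induces the quadratic transport (Wasserstein) distance

W_2(\mu,\nu)^2 = \inf_{\pi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} |x-y|^2 \, d\pi(x,y),

where \Pi(\mu,\nu) denotes the set of couplings of \mu and \nu, i.e. probability measures on \mathbb{R}^d \times \mathbb{R}^d with marginals \mu and \nu.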

Course Synopsis: 
  1. Interacting Particle Systems & PDE (2 hours)
    • Granular Flow Models and McKean-Vlasov Equations.
    • Nonlinear Diffusion and Aggregation-Diffusion Equations.
  2. Optimal Transportation: The metric side (4 hours)
    • Functional Analysis tools: weak convergence of measures. Prokhorov’s Theorem. Direct Method of Calculus of Variations. (1 hour)
    • Monge Problem. Kantorovich Duality. (1.5 hours)
    • Transport distances between measures: properties. The real line. Probabilistic Interpretation: couplings.(1.5 hours)
  3. Mean Field Limit & Couplings (4 hours)
    • Dobrushin approach: derivation of the Aggregation Equation. (1.5 hours)
    • Sznitman Coupling Method for the McKean-Vlasov equation. (1.5 hours)
    • Boltzmann Equation for Maxwellian molecules: Tanaka Theorem. (1 hour)
  4. Gradient Flows: Aggregation-Diffusion Equations (6 hours)
    • Brenier’s Theorem and Dynamic Interpretation of optimal transport. Otto’s calculus. (2 hours)
    • McCann’s Displacement Convexity: Internal, Interaction and Confinement Energies. (2 hours)
    • Gradient Flow approach: Minimizing movements for the (McKean-)Vlasov equation. Properties of the variational scheme. Connection to mean-field limits. (2 hours)
Reading List: 
  1. F. Golse, On the Dynamics of Large Particle Systems in the Mean Field Limit, Lecture Notes in Applied Mathematics and Mechanics 3. Springer, 2016.
  2. L. C. Evans, Weak convergence methods for nonlinear partial differential equations. CBMS Regional Conference Series in Mathematics 74, AMS, 1990.
  3. F. Santambrogio, Optimal Transport for Applied Mathematicians: Calculus of Variations, PDEs, and Modeling, Progress in Nonlinear Differential Equations and Their Applications, Birkhauser 2015.
  4. C. Villani, Topics in Optimal Transportation, AMS Graduate Studies in Mathematics, 2003.

    Please note that e-book versions of many books in the reading lists can be found on SOLO.

    Further Reading: 
    1. L. Ambrosio, G. Savare, Handbook of Differential Equations: Evolutionary Equations, Volume 3-1, 2007.
    2. C. Villani, Optimal Transport: Old and New, Springer, 2009.


    General Prerequisites: Basic linear algebra (such as eigenvalues and eigenvectors of real matrices), multivariate real analysis (such as norms, inner products, multivariate linear and quadratic functions, basis) and multivariable calculus (such as Taylor expansions, multivariate differentiation, gradients).

    Course Overview: The solution of optimal decision-making and engineering design problems in which the objective and constraints are nonlinear functions of potentially (very) many variables is required on an everyday basis in the commercial and academic worlds. A closely-related subject is the solution of nonlinear systems of equations, also referred to as least-squares or data fitting problems that occur in almost every instance where observations or measurements are available for modelling a continuous process or phenomenon, such as in weather forecasting. The mathematical analysis of such optimization problems and of classical and modern methods for their solution are fundamental for understanding existing software and for developing new techniques for practical optimization problems at hand.

    More details: https://courses.maths.ox.ac.uk/course/view.php?id=734

    Learning Outcomes:

    Students will learn how some of the various different ensembles of random matrices are defined. They will encounter some examples of the applications these have in Data Science, modelling Complex Quantum Systems, Mathematical Finance, Network Models, Numerical Linear Algebra, and Population Dynamics. They will learn how to analyse eigenvalue statistics, and see connections with other areas of mathematics and physics, including combinatorics, number theory, and statistical mechanics.

    Course Synopsis: 

    Introduction to matrix ensembles, including Wigner and Wishart random matrices, and the Gaussian and Circular Ensembles. Overview of connections with Data Science, Complex Quantum Systems, Mathematical Finance, Network Models, Numerical Linear Algebra, and Population Dynamics. (1 lecture)

    Statement and proof of Wigner’s Semicircle Law; statement of Girko’s Circular Law; applications to Population Dynamics (May’s model). (3 lectures)

    Statement and proof of the Marchenko-Pastur Law using the Stieltjes and R-transforms; applications to Data Science and Mathematical Finance. (3 lectures)

    Derivation of the Joint Eigenvalue Probability Density for the Gaussian and Circular Ensembles; method of orthogonal polynomials; applications to eigenvalue statistics in the large-matrix limit; behaviour in the bulk and at the edge of the spectrum; universality; applications to Numerical Linear Algebra and Complex Quantum Systems. (5 lectures)

    Dyson Brownian Motion (2 lectures)

    Connections to other problems in mathematics, including the longest increasing subsequence problem; distribution of zeros of the Riemann zeta-function; topological genus expansions. (2 lectures)
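
As an empirical companion to the Semicircle Law above, the following is a hedged sketch (illustrative only, not course material) that samples a GOE-type Wigner matrix with numpy and compares its spectrum against the semicircle density:

```python
# Illustrative sketch: empirical check of Wigner's Semicircle Law.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Real symmetric Wigner matrix; this scaling puts the limiting spectrum on [-2, 2].
A = rng.normal(size=(n, n))
H = (A + A.T) / np.sqrt(2 * n)

eigs = np.linalg.eigvalsh(H)

# Compare the empirical spectral density with rho(x) = sqrt(4 - x^2) / (2*pi).
hist, edges = np.histogram(eigs, bins=50, range=(-2, 2), density=True)
centres = (edges[:-1] + edges[1:]) / 2
rho = np.sqrt(np.clip(4 - centres**2, 0, None)) / (2 * np.pi)
print("max deviation from semicircle density:", np.abs(hist - rho).max())
```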

    Reading List: 
    1. M. L. Mehta, Random Matrices (Elsevier, Pure and Applied Mathematics Series)
    2. G. W. Anderson, A. Guionnet, O. Zeitouni, An Introduction to Random Matrices (Cambridge Studies in Advanced Mathematics)
    3. E. S. Meckes, The Random Matrix Theory of the Classical Compact Groups (Cambridge University Press)
    4. G. Akemann, J. Baik & P. Di Francesco, The Oxford Handbook of Random Matrix Theory (Oxford University Press)
    5. G. Livan, M. Novaes & P. Vivo, Introduction to Random Matrices (Springer Briefs in Mathematical Physics)

    Please note that e-book versions of many books in the reading lists can be found on SOLO.

    Further Reading: 
    1. T. Tao, Topics in Random Matrix Theory (AMS Graduate Studies in Mathematics)


    General Prerequisites: Integration and measure theory, martingales in discrete and continuous time, stochastic calculus. Functional analysis is useful but not essential.

    Course Overview: Stochastic analysis and partial differential equations are intricately connected. This is exemplified by the celebrated deep connections between Brownian motion and the classical heat equation, but this is only a very special case of a general phenomenon. We explore some of these connections, illustrating the benefits to both analysis and probability.

    Course Synopsis: Feller processes and semigroups. Resolvents and generators. Hille-Yosida Theorem (without proof). Diffusions and elliptic operators, convergence and approximation. Stochastic differential equations and martingale problems. Duality. Speed and scale for one dimensional diffusions. Green's functions as occupation densities. The Dirichlet and Poisson problems. Feynman-Kac formula.
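
One concrete instance of these connections, recalled here for orientation (a standard form of the Feynman-Kac formula from the synopsis): for suitable f and V \ge 0, the function

u(t,x) = \mathbb{E}_x\!\left[ f(B_t) \exp\!\left( -\int_0^t V(B_s)\, ds \right) \right]

solves \partial_t u = \tfrac{1}{2}\Delta u - Vu with u(0,\cdot) = f, where (B_t) is a Brownian motion started at x.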

    More details: https://courses.maths.ox.ac.uk/course/view.php?id=728

    General Prerequisites: Part B Graph Theory and Part A Probability. C8.3 Combinatorics is not an essential prerequisite for this course, though it is a natural companion for it.

    Course Overview: Probabilistic combinatorics is a very active field of mathematics, with connections to other areas such as computer science and statistical physics. Probabilistic methods are essential for the study of random discrete structures and for the analysis of algorithms, but they can also provide a powerful and beautiful approach for answering deterministic questions. The aim of this course is to introduce some fundamental probabilistic tools and present a few applications.

    Course Synopsis: First-moment method, with applications to Ramsey numbers, and to graphs of high girth and high chromatic number. Second-moment method, threshold functions for random graphs. Lovász Local Lemma, with applications to two-colourings of hypergraphs, and to Ramsey numbers. Chernoff bounds, concentration of measure, Janson's inequality. Branching processes and the phase transition in random graphs. Clique and chromatic numbers of random graphs.
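
A canonical first-moment calculation of the kind listed above, recalled for orientation (Erdős's lower bound for Ramsey numbers): in a uniformly random two-colouring of the edges of K_n, the expected number of monochromatic copies of K_k is

\binom{n}{k} \, 2^{1-\binom{k}{2}},

which is less than 1 whenever n \le 2^{k/2} (for k \ge 3); some colouring therefore contains no monochromatic K_k, and hence R(k,k) > 2^{k/2}.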

    More details: https://courses.maths.ox.ac.uk/course/view.php?id=158

    Optimisation problems occur naturally in portfolio selection, risk management and algorithmic trading. Through a series of example models from mathematical finance, participants of this course learn the basic concepts and techniques for formulating optimisation problems in specific standard forms, and solve those problems using software. The course will also cover classical optimisation algorithms and the duality theorem for general convex optimisation.
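
As a flavour of the modelling-and-software workflow described above, here is a hedged sketch of a Markowitz-style mean-variance problem in convex form (the data and the choice of the cvxpy package are assumptions for illustration, not course requirements):

```python
# Illustrative sketch: minimum-variance portfolio subject to a return target.
import cvxpy as cp
import numpy as np

mu = np.array([0.08, 0.10, 0.12])          # assumed expected returns
Sigma = np.array([[0.10, 0.02, 0.01],      # assumed covariance matrix
                  [0.02, 0.08, 0.03],
                  [0.01, 0.03, 0.09]])

w = cp.Variable(3)                          # portfolio weights
risk = cp.quad_form(w, Sigma)               # portfolio variance
problem = cp.Problem(cp.Minimize(risk),
                     [cp.sum(w) == 1,       # fully invested
                      w >= 0,               # no short selling
                      mu @ w >= 0.10])      # target expected return
problem.solve()
print("optimal weights:", w.value)
```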
     

    Course Synopsis:
    [1] Optimisation terminology, classification of problems, optimality conditions.
    [2] Convex and Conic programming modelling.
    [3] Lift-and-Project idea and examples.
    [4] Newton's method.
    [5] Penalty method and barrier method for convex programming.
    [6] Optimisation with constraints; convex duality.
    [7] Robust portfolio optimisation.

    Literature (TBC)

    • S. J. Wright, “Optimization Algorithms for Data Analysis”, http://www.optimization-online.org/DB_FILE/2016/12/5748.pdf
    • L. Bottou, F. E. Curtis, and J. Nocedal, “Optimization methods for large-scale machine learning”, SIAM Review, 59(1): 65-98, 2017.
    • Z. Allen-Zhu, “Katyusha: The first direct acceleration of stochastic gradient methods”, The Journal of Machine Learning Research, 18(1): 8194-8244, 2017.


    Department of Statistics, University of Oxford


    This course runs Oct-Dec, but it may be possible to follow it via pre-recorded videos and complete an assessment.

    Aims and Objectives: Many data come in the form of networks, for example friendship data and protein-protein interaction data. As the data usually cannot be modelled using simple independence assumptions, their statistical analysis provides many challenges. The course will give an introduction to the main problems and the main statistical techniques used in this field. The techniques are applicable to a wide range of complex problems. The statistical analysis benefits from insights which stem from probabilistic modelling, and the course will combine both aspects.

    Synopsis:

    Exploratory analysis of networks. The need for network summaries. Degree distribution, clustering coefficient, shortest path length. Motifs.

    Probabilistic models: Bernoulli random graphs, geometric random graphs, preferential attachment models, small world networks, inhomogeneous random graphs, exponential random graphs.

    Small subgraphs: Stein’s method for normal and Poisson approximation. Branching process approximations, threshold behaviour, shortest path between two vertices.

    Statistical analysis of networks: Sampling from networks. Parameter estimation for models. Inference from networks: vertex characteristics and missing edges. Nonparametric graph comparison: subgraph counts, subsampling schemes, MCMC methods. A brief look at community detection.
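
To make the summaries above concrete, here is a minimal sketch (an assumed illustration, not course code) computing degree, clustering and shortest-path summaries for a Bernoulli random graph with the networkx package:

```python
# Illustrative sketch: network summaries for an Erdos-Renyi random graph.
import networkx as nx

G = nx.erdos_renyi_graph(n=500, p=0.02, seed=1)

degrees = [d for _, d in G.degree()]           # degree distribution
clustering = nx.average_clustering(G)          # clustering coefficient

# Shortest path lengths are only defined within a connected component.
giant = G.subgraph(max(nx.connected_components(G), key=len))
path_len = nx.average_shortest_path_length(giant)

print("mean degree:", sum(degrees) / len(degrees))
print("average clustering:", clustering)
print("average shortest path (giant component):", path_len)
```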

    More details:

    Reading: R. Durrett, Random Graph Dynamics, Cambridge University Press, 2007.

    E. D. Kolaczyk and G. Csárdi, Statistical Analysis of Network Data with R, Springer, 2014.

    M. Newman, Networks: An Introduction, Oxford University Press, 2010.

    Recommended Prerequisites: The course requires a good level of mathematical maturity. Students are expected to be familiar with core concepts in statistics (regression models, bias-variance tradeoff, Bayesian inference), probability (multivariate distributions, conditioning) and linear algebra (matrix-vector operations, eigenvalues and eigenvectors). Previous exposure to machine learning (empirical risk minimisation, dimensionality reduction, overfitting, regularisation) is highly recommended. Students would also benefit from being familiar with the material covered in the following courses offered in the Statistics department: SB2.1 (formerly SB2a) Foundations of Statistical Inference and in SB2.2 (formerly SB2b) Statistical Machine Learning.

    Aims and Objectives: Machine learning is widely used across many scientific and engineering disciplines to construct methods for finding interesting patterns and for predicting accurately in large datasets. This course introduces several widely used machine learning techniques and describes their underpinning statistical principles and properties. The course studies both unsupervised and supervised learning, and several advanced topics are covered in detail, including some state-of-the-art machine learning techniques. The course will also cover computational considerations of machine learning algorithms and how they can scale to large datasets.

     

    More details: SC4 Advanced Topics in Statistical Machine Learning

    Synopsis: Empirical risk minimisation. Loss functions. Generalization. Over- and under-fitting. Bias and variance. Regularisation.

    Support vector machines.

    Kernel methods and reproducing kernel Hilbert spaces. Representer theorem. Representation of probabilities in RKHS.

    Deep learning: Neural networks. Computation graphs. Automatic differentiation. Stochastic gradient descent.

    Probabilistic and Bayesian machine learning: Fundamentals of the Bayesian approach. EM algorithm. Variational inference. Latent variable models.

    Deep generative models. Variational auto-encoders.

    Gaussian processes. Bayesian optimisation.
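
As a minimal illustration of the empirical risk minimisation and regularisation themes in the synopsis (a sketch on assumed synthetic data, not course material), regularised least squares can be fitted in closed form:

```python
# Illustrative sketch: ridge regression as regularised empirical risk minimisation.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)   # assumed noisy linear model

lam = 1.0                                   # regularisation strength
# Minimiser of (1/n)||y - Xw||^2 + lam ||w||^2, in closed form:
w_hat = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
print("training risk:", np.mean((y - X @ w_hat) ** 2))
```

Increasing lam shrinks the fitted weights towards zero, trading higher bias for lower variance, which is exactly the bias-variance compromise named in the synopsis.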

    Software: Knowledge of Python is not required for this course, but some examples may be done in Python. Students interested in learning Python are referred to the following free University IT online course, which should ideally be taken before the beginning of this course: https://skills.it.ox.ac.uk/whats-on#/course/LY046

    Reading: C. Bishop, Pattern Recognition and Machine Learning, Springer, 2007

    K. Murphy, Machine Learning: a Probabilistic Perspective, MIT Press, 2012

    Further Reading: T. Hastie, R. Tibshirani, J. Friedman, Elements of Statistical Learning, Springer, 2009

    Scikit-learn: Machine Learning in Python, Pedregosa et al., JMLR 12, pp. 2825-2830, 2011, http://scikit-learn.org/stable/tutorial/

    Imperial College London


    Brief Description
    This module provides a hands-on introduction to the methods of modern data science. Through interactive lectures, the student will be introduced to data visualisation and analysis as well as the fundamentals of machine learning.

    Learning Outcomes
    On successful completion of this module, you will be able to:
    - Visualise and explore data using computational tools;
    - Appreciate the fundamental concepts and challenges of learning from data;
    - Analyse some commonly used learning methods;
    - Compare learning methods and determine suitability for a given problem;
    - Describe the principles and differences between supervised and unsupervised learning;
    - Clearly and succinctly communicate the results of a data analysis or learning application;
    - Appraise and evaluate new algorithms and computational methods presented in scientific and mathematical journals;
    - Design and implement newly-developed algorithms and methods.

    Module Content
    The module is composed of the following sections:
    - Introduction to computational tools for data analysis and visualisation;
    - Introduction to exploratory data analysis;
    - Mathematical challenges in learning from data: optimisation;
    - Methods in Machine Learning: supervised and unsupervised; neural networks and deep learning; graph-based data learning;
    - Machine learning in practice: application of commonly used methods to data science problems.
    Methods include: regressions, k-nearest neighbours, random forests, support vector machines, neural networks, principal component analysis, k-means, spectral clustering, manifold learning, network statistics, community detection;
    - Current research questions in data analysis and machine learning and associated numerical methods.

    Brief Description
    This is a course on the theory and applications of random dynamical systems and ergodic theory.
    Random dynamical systems are deterministic dynamical systems driven by a random input. The goal will be to present a solid introduction to the subject and then to touch upon several more advanced developments in this field.

    Learning Outcomes

    On successful completion of this module, you will be able to:
    - describe the fundamental concepts of random dynamical systems;
    - summarize the ergodic theory of random dynamical systems;
    - select and critically appraise relevant research papers and chapters of research monographs;

    - combine the ideas contained in such papers to provide a written overview of the current state of affairs concerning a particular aspect of random dynamical systems theory;
    - thoughtfully engage orally in discussions related to random dynamical systems.

    Module Content
    Introductory lectures include foundational material on:
    - Invariant measures and ergodic theory
    - Random (pullback) attractors
    - Lyapunov exponents
    - Random circle homeomorphisms
    Further material is at a more advanced level, touching upon current frontline research. Students select material from research level articles or book chapters.

    Brief Description
    This module introduces students to the analysis and implementation of efficient algorithms used to solve mathematical and computational problems connected to a broad range of scientific topics. Mathematical tools and concepts from linear algebra, calculus, numerical analysis, and statistics will be utilised to develop and analyse computational solutions to mathematical and scientific problems. The objectives are that by the end of the module all students should have a good familiarity with the essential elements of the Python programming language and be able to undertake programming tasks in a range of areas.
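
As a small taste of the sorting-and-searching material, here is a hedged sketch (illustrative, not module code) of binary search, which locates an item in a sorted list using O(log n) comparisons:

```python
# Illustrative sketch: binary search on a sorted list.
def binary_search(a, target):
    """Return an index i with a[i] == target, or -1 if target is absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1        # discard the left half
        else:
            hi = mid - 1        # discard the right half
    return -1

print(binary_search([1, 3, 4, 7, 9, 11], 7))   # prints 3
```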
    Learning Outcomes
    On successful completion of this module you will be able to:
    - analyse the performance of simple sorting and searching algorithms and implement them in Python;
    - computationally analyse complex networks and dynamical processes of complex systems;
    - effectively utilise important tools for data analysis such as discrete Fourier transforms;
    - evaluate and implement numerical methods for mathematical optimisation and the solution of differential equations;
    - assess the correctness and efficiency of simple data structures and algorithms on graphs and implement them in Python;

    - independently appraise and evaluate a range of state-of-the art algorithms and computational methods;
    - adapt a range of computational methods and apply them in a coherent manner to an open scientific problem.
    Module Content
    The module will cover the following topics:
    1) Sorting and searching with scientific applications from fields such as bioinformatics;
    2) Algorithms on graphs and basic data structures such as queues and hash tables;
    3) Methods for data analysis using tools such as discrete Fourier transforms;
    4) Analysis and use of common optimisation methods such as Simulated Annealing;
    5) Numerical solution of differential equations arising in multiscale problems;
    6) Computational analysis of complex systems.

    The module covers both the theoretical underpinnings of convex optimisation and its applications to important problems in mathematical finance.

    A brief outline of the course reads as follows:

    • Fundamental properties of convex sets and convex functions

    • The basics of convex optimisation with special emphasis on duality theory

    • Markowitz portfolio theory and the CAPM model

    • Expected utility maximisation and no arbitrage

    • Convexity in continuous time hedging

    The goal of the module is to develop a thorough understanding of how prices form, how information is aggregated, and how trades occur in financial markets. The main market types will be described, as well as traders’ main motives for trading. Market manipulation and high-frequency trading strategies have received a lot of attention in the press recently, so the module will illustrate them and examine recent developments in the regulations that aim to limit them. Liquidity is a key theme in market microstructure, and students will learn how to measure it and to recognise the recent increase in liquidity fragmentation and hidden, “dark” liquidity. The Flash Crash of 6 May 2010 will be analysed as a case study of a sudden loss of liquidity.

    The increase in computer power over the last decades has given rise to prices being quoted and stocks being traded at an ever-increasing pace. Since humans are not able to place orders at this speed, algorithms have replaced classical traders in optimising portfolios and investments. In this module we will study the specificities of this market and, in particular, develop the mathematical tools required to build such algorithms in this high-frequency framework. The module will start with a short review of stochastic optimal control, which forms the mathematical background. We shall then move on to study optimal execution, namely how and when to place buy/sell orders in this market, both assuming continuous trading and in the context of limit and market orders. The last part of the module will be dedicated to the concepts of market making and statistical arbitrage in high-frequency settings.

     

    Brief Description
    This module will introduce a variety of computational approaches for solving partial differential equations, focusing mostly on finite difference methods, but also touching on finite volume and spectral methods. Students will gain experience implementing the methods and writing/modifying short programs in Matlab or another programming language of their choice. Applications will be drawn from problems arising in areas such as Mathematical Biology and Fluid Dynamics.
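
To give a flavour of the methods involved, here is a hedged sketch (illustrative, not module code) of an explicit finite difference scheme for the 1D heat equation u_t = u_xx, with the time step chosen inside the classical stability bound dt <= dx^2/2:

```python
# Illustrative sketch: explicit (FTCS) scheme for the 1D heat equation
# with homogeneous Dirichlet boundary conditions.
import numpy as np

n_x, n_t = 101, 2000
dx = 1.0 / (n_x - 1)
dt = 0.4 * dx**2                     # within the stability limit dt <= dx^2/2
x = np.linspace(0.0, 1.0, n_x)
u = np.sin(np.pi * x)                # assumed initial condition

for _ in range(n_t):
    u[1:-1] += dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# Exact solution for this initial condition: exp(-pi^2 t) sin(pi x).
t = n_t * dt
print("max error:", np.abs(u - np.exp(-np.pi**2 * t) * np.sin(np.pi * x)).max())
```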
    Learning Outcomes
    On successful completion of this module, you will be able to:
    - appreciate the physical and mathematical differences between different types of PDEs;
    - design suitable finite difference methods to solve each type of PDE;
    - outline a theoretical approach to testing the stability of a given algorithm;

    - determine the order of convergence of a given algorithm;
    - demonstrate familiarity with the implementation and rationale of multigrid methods;
    - develop finite-difference based software for use on research level problems;
    - communicate your research findings as a poster, in a form suitable for presentation at a scientific conference.
    Module Content
    The module will cover the following topics:
    1) Introduction to Finite Differences
    2) Classification of PDEs
    3) Explicit and Implicit methods for Parabolic PDEs
    4) Iterative Methods for Elliptic PDEs. Jacobi, Gauss-Seidel, Overrelaxation
    5) Multigrid Methods
    6) Hyperbolic PDEs. Nonlinear Advection/Diffusion systems. Waves and PMLs as well as various advanced practical topics from Fluid Dynamics, which will depend on the final project.

    Brief Description
    The main aim of this module is to understand geodesics and curvature and the relationship between them. Using these ideas we will show how local geometric conditions can lead to global topological constraints.
    Learning Outcomes
    On successful completion of this module, you will be able to:
    - Understand the relevant structures required to make sense of differential topological notions, such as derivatives of smooth functions, and geometric notions, such as lengths and angles, on an abstract manifold.
    - Define the Lie derivative and covariant derivative of a tensor field.
    - Define geodesics and understand their length minimising properties.
    - Define and interpret various measures of the curvature of a Riemannian manifold.
    - Understand the effect of curvature on neighbouring geodesics.
    - Prove the celebrated classical theorems of Bonnet--Myers and Cartan--Hadamard.

    Module Content
    An indicative list of topics is:
    Topological and smooth manifolds, tangent and cotangent spaces, vector bundles, tensor bundles, Lie bracket, Lie derivative, Riemannian metrics, affine connections, the Levi-Civita connection, parallel transport, geodesics, Riemannian distance, the exponential map, completeness and the Hopf--Rinow Theorem, Riemann and Ricci curvature tensors, scalar curvature, sectional curvatures, submanifolds, the second fundamental form and the Gauss equation, Jacobi fields and the second variation of geodesics, the Bonnet--Myers and Cartan--Hadamard Theorems.
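
To indicate the flavour of these topics (standard coordinate formulas, recalled for orientation rather than taken from the module text): in local coordinates a geodesic x(t) satisfies

\ddot{x}^k + \Gamma^k_{ij}\, \dot{x}^i \dot{x}^j = 0,

where \Gamma^k_{ij} are the Christoffel symbols of the Levi-Civita connection, while the effect of curvature on neighbouring geodesics is captured by the Jacobi equation \tfrac{D^2 J}{dt^2} + R(J,\dot{\gamma})\dot{\gamma} = 0 along a geodesic \gamma.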

    Prerequisites:

    Geometry of Curves and Surfaces and Manifolds

    Finite element methods form a flexible class of techniques for numerical solution of PDEs that are both accurate and efficient.

    The finite element method is a core mathematical technique underpinning much of the development of simulation science. Applications are as diverse as the structural mechanics of buildings, the weather forecast, and pricing financial instruments. Finite element methods have a powerful mathematical abstraction based on the language of function spaces, inner products, norms and operators.

    This module aims to develop a deep understanding of the finite element method by spanning both its analysis and implementation. In the analysis part of the module you will employ the mathematical abstractions of the finite element method to analyse the existence, stability, and accuracy of numerical solutions to PDEs. At the same time, in the implementation part of the module you will combine these abstractions with modern software engineering tools to create and understand a computer implementation of the finite element method.

    Syllabus:

    • Basic concepts: Weak formulation of boundary value problems, Ritz-Galerkin approximation, error estimates, piecewise polynomial spaces, local estimates.

    • Efficient construction of finite element spaces in one dimension, 1D quadrature, global assembly of mass matrix and Laplace matrix (a minimal assembly sketch follows the syllabus).

    • Construction of a finite element space: Ciarlet’s finite element, various element types, finite element interpolants.

    • Construction of local bases for finite elements, efficient local assembly.

    • Sobolev Spaces: generalised derivatives, Sobolev norms and spaces, Sobolev’s inequality.

    • Numerical quadrature on simplices. Employing the pullback to integrate on a reference element.

    • Variational formulation of elliptic boundary value problems: Riesz representation theorem, symmetric and nonsymmetric variational problems, Lax-Milgram theorem, finite element approximation estimates.

    • Computational meshes: meshes as graphs of topological entities. Discrete function spaces on meshes, local and global numbering.

    • Global assembly for Poisson equation, implementation of boundary conditions. General approach for nonlinear elliptic PDEs.

    • Variational problems: Poisson’s equation, variational approximation of Poisson’s equation, elliptic regularity estimates, general second-order elliptic operators and their variational approximation.

    • Residual form, the Gâteaux derivative and techniques for nonlinear problems.
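
As referenced in the syllabus item on global assembly, here is a minimal sketch (an assumed illustration, not the module's codebase) of assembling the 1D Laplace (stiffness) matrix for piecewise linear elements on a uniform mesh:

```python
# Illustrative sketch: global assembly of the 1D stiffness matrix
# for piecewise linear finite elements on a uniform mesh of [0, 1].
import numpy as np

n_el = 10                                    # number of elements
h = 1.0 / n_el
K = np.zeros((n_el + 1, n_el + 1))

# Local stiffness matrix of a linear element: (1/h) [[1, -1], [-1, 1]].
k_loc = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h

for e in range(n_el):                        # loop over elements
    dofs = [e, e + 1]                        # local-to-global numbering
    K[np.ix_(dofs, dofs)] += k_loc           # scatter into the global matrix

print(K[:3, :3])
```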

    The course is assessed 50% by examination and 50% by coursework (implementation exercise in Python).

    Brief Description
    The module offers a bespoke introduction to the stochastic calculus required to cover the classical theoretical results of nonlinear filtering. The first part of the module will equip the students with the necessary knowledge (e.g., Ito Calculus, Stochastic Integration by Parts, Girsanov’s theorem) and skills (solving linear stochastic differential equations, analysing continuous martingales, etc.) to handle a variety of applications. The focus will be on the application of stochastic calculus to the theory and numerical solution of nonlinear filtering.
    Learning Outcomes
    On successful completion of this module, you will be able to:

    a. understand the notion of Brownian motion and be able to show that a stochastic process is a Brownian motion;
    b. prove that a process is a martingale via Novikov's condition;
    c. solve linear SDEs;
    d. check whether an SDE is well posed;
    e. understand the mathematical framework of nonlinear filtering;
    f. deduce the filtering equations;
    g. deduce the evolution equation of the mean and variance of the one-dimensional Kalman-Bucy filter (these equations are recalled after this list);
    h. show that the innovation process is a Brownian motion;
    i. apply stochastic integration by parts.
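
The equations referred to in outcome g, in the one-dimensional linear-Gaussian case (a standard statement, recalled here for orientation; unit observation noise is assumed): for a signal dX_t = a X_t\, dt + \sigma\, dW_t observed through dY_t = c X_t\, dt + dV_t, the conditional mean m_t and conditional variance P_t satisfy

dm_t = a\, m_t\, dt + c\, P_t\, (dY_t - c\, m_t\, dt), \qquad \frac{dP_t}{dt} = 2a P_t + \sigma^2 - c^2 P_t^2.

The increment dI_t = dY_t - c\, m_t\, dt defines the innovation process, which outcome h asks you to show is a Brownian motion.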

    Module Content
    An indicative list of topics is:
    1. Martingales in Continuous Time (Doob-Meyer decomposition, L_p bounds, Brownian motion, exponential martingales, semi-martingales, local martingales, Novikov’s condition)
    2. Stochastic Calculus (Ito’s isometry, chain rule, integration by parts)
    3. Stochastic Differential Equations (well-posedness, linear SDEs, the Ornstein-Uhlenbeck process, Girsanov's Theorem)
    4. Stochastic Filtering (definition, mathematical model for the signal process and the observation process)
    5. The Filtering Equations (well-posedness, the innovation process, the Kalman-Bucy filter)

    Prerequisites: Ordinary differential equations, partial differential equations, real analysis, probability theory.

    Rough path theory was developed in the 1990s in order to understand the response of a nonlinear system to a highly oscillatory input signal. A key element of this theory is the so-called signature transform, which gives an economical way to represent and extract information from high-dimensional ordered data, such as a complex financial time series. Over the last decade it has been used to achieve state-of-the-art outcomes in several data science challenges. This short module will give an overview of the mathematical properties of the signature and explain how it can be used as a feature set in machine learning applications, with a particular emphasis on problems inspired by finance. Topics covered will include the following (the signature transform itself is defined after the list):

    • Key mathematical properties of the signature transform
    • The use of the signature as a feature set in machine learning. Two examples will be developed in detail to illustrate this: (a) learning a solution to a stochastic differential equation, and (b) learning a high-frequency trading strategy. Computational methods. Other examples will be explored in the coursework and as time permits.
    • Recovering information about a data stream from the signature, the asymptotic analysis of the signature. 
    • Signatures and kernel methods. Using signatures in neural networks. 
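
For reference, the transform named above (a standard definition, not specific to this module): the signature of a bounded-variation path X : [0,T] \to \mathbb{R}^d is the collection of iterated integrals

S(X)^{(i_1,\dots,i_k)} = \int_{0 < u_1 < \cdots < u_k < T} dX^{i_1}_{u_1} \cdots dX^{i_k}_{u_k}, \qquad k \ge 1,\; i_1,\dots,i_k \in \{1,\dots,d\},

and it is this collection that serves as the feature set in the machine learning applications mentioned in the list.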

    References

    [1] D. Bakry, I. Gentil, and M. Ledoux. Analysis and geometry of Markov diffusion operators, volume 348 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer, Cham, 2014.

    [2] L. Bertini, G. Giacomin, and K. Pakdaman. Dynamical aspects of mean field plane rotators and the Kuramoto model. J. Stat. Phys., 138(1-3):270–290, 2010.

    [3] P.-H. Chavanis. The Brownian mean field model. Eur. Phys. J. B, 87(5):Art. 120, 33, 2014.

    [4] B. Fernandez and S. Meleard. A Hilbertian approach for fluctuations on the McKean-Vlasov model. Stochastic Processes and their Applications, 71(1):33–53, 1997.

    [5] B. Helffer. Semiclassical analysis, Witten Laplacians, and statistical mechanics, volume 1 of Series in Partial Differential Equations and Applications. World Scientific Publishing Co., Inc., River Edge, NJ, 2002.

    [6] M. Ledoux. Logarithmic Sobolev inequalities for unbounded spin systems revisited. In Seminaire de Probabilites, XXXV, volume 1755 of Lecture Notes in Math., pages 167–194. Springer, Berlin, 2001.

    [7] K. Oelschlager. A martingale approach to the law of large numbers for weakly interacting stochastic processes. Ann. Probab., 12(2):458–479, 1984.

    [8] G. A. Pavliotis. Stochastic processes and applications, volume 60 of Texts in Applied Mathematics. Springer, New York, 2014. Diffusion processes, the Fokker-Planck and Langevin equations.

    [9] A.-S. Sznitman. Topics in propagation of chaos. In Ecole d’Ete de Probabilites de Saint-Flour XIX—1989, volume 1464 of Lecture Notes in Math., pages 165–251. Springer, Berlin, 1991.

    The module introduces the latest advances in machine learning. We start with reinforcement learning and demonstrate how it can be combined with neural networks in deep reinforcement learning, which has achieved spectacular results in recent years, such as outplaying the human champion at Go. We also demonstrate how advanced neural networks and tree-based methods, such as decision trees and random forests, can be used for forecasting financial time series and generating alpha. We explain how these advances are related to Bayesian methods, such as particle filtering and Markov chain Monte Carlo. We apply these methods to set up a profitable algorithmic trading venture in cryptocurrencies using Python and kdb+/q (a top technology for electronic trading) along the way.

    Taught Course Centre

    The Taught Course Centre (TCC) is a collaboration between the Mathematics Departments at the Universities of Bath, Bristol, Imperial, Oxford, Warwick and Swansea. The TCC also collaborates with other departments through seminars and study groups. 

    The Centre offers graduate level courses over the academic year.

    All TCC lectures will be presented via MS Teams. In order to sign up for any of the lectures, students need to email tcc@maths.ox.ac.uk with their MS Teams email address. See the detailed instructions on the TCC web page.


    This is an introductory course on one of the most dynamically developing areas of modern probability theory, linked through many channels to various other parts of mathematics and physics. Remarkable results achieved in this area have recently been recognised with three Fields Medals (Werner 2006, Smirnov 2010, Duminil-Copin 2022). The course will cover a wide spectrum of the theory. I will cover the following, providing full mathematical treatment of all main results:

    • Phenomenology, geometry of random graphs, phase transition.
    • Elementary tools: stochastic ordering and basic correlation inequalities (a representative statement is recalled after this list).
    • Supercritical behaviour: uniqueness of percolating cluster and regularity.
    • Subcritical behaviour: exponential decay of connectivity and sharpness of the phase transition.
    • Planar percolation: topological duality and its consequences.
    • Conformal invariance of critical planar percolation.
    • Outlook.
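
A representative example of the basic correlation inequalities mentioned in the list (the Harris-FKG inequality, recalled for orientation): if A and B are increasing events for Bernoulli percolation with parameter p, then

P_p(A \cap B) \ge P_p(A)\, P_p(B);

for instance, the events that two given pairs of vertices are joined by open paths are positively correlated.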

    Prerequisites

    • Basic probability theory (including stochastic independence, Markov property, laws of large numbers, 0-1 laws, ergodicity)
    • Basic analysis (including uniform convergence, Arzelà-Ascoli theorem)
    • Basic complex function theory (including Riemann’s conformal mapping theorem)

    However, prospective attendees who are missing some of these elements should not be deterred: the necessary background knowledge can be picked up along the way, given sufficient basic mathematical interest and an open mind.

     

    I kindly ask potential attendees of this course to send me an informal email at balint.toth@bristol.ac.uk expressing their interest in this unit.
