Oxford-Berlin Summer School in Mathematics of Random Systems

The 2024 Oxford-Berlin Summer School in Stochastic Analysis is jointly organised by the Berlin-Oxford IRTG group and the EPSRC CDT in Mathematics of Random Systems. The Summer School will be held at St Hilda's College and the Mathematical Institute in Oxford. In addition to the lecture courses, there will be additional invited talks by guest lecturers and presentations by selected PhD students.

The lecturers for the programme will be Professor Huyen Pham (Paris) and Professor Ellen Powell (Durham).

Lecture 1 by Professor Huyen Pham

Machine learning and stochastic control   

This course will present some recent developments on the interplay between stochastic control and machine learning. More precisely, we shall address the following topics:

Part I:  Neural networks-based algorithms  for PDEs and stochastic control. 

Deep learning, based on the approximation capability of neural networks and the efficiency of gradient descent optimizers, has shown remarkable success in recent years for solving high-dimensional partial differential equations (PDEs) arising notably in stochastic optimal control. We present the different methods that have been developed in the literature, relying on either deterministic or probabilistic approaches: Deep Galerkin and physics-informed neural networks on the one hand, and Deep BSDE and deep backward dynamic programming methods on the other.
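As a rough illustration of the deterministic (Deep Galerkin / physics-informed) approach, the sketch below trains a small network u(t, x) by gradient descent so that the residual of a one-dimensional heat equation, together with its terminal condition, is small at randomly sampled collocation points. The equation, network size and all parameter choices are hypothetical and only meant to convey the idea; this is not code from the course.

# Minimal Deep Galerkin / PINN-style sketch (illustrative assumptions only):
# train u(t, x) so that u_t + 0.5 * u_xx = 0 on [0, T] x [-2, 2], u(T, x) = sin(x).
import torch
import torch.nn as nn

T = 1.0
g = lambda x: torch.sin(x)                          # hypothetical terminal condition

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # interior collocation points (t, x), with gradients enabled for autodiff
    t = torch.empty(256, 1).uniform_(0.0, T).requires_grad_()
    x = torch.empty(256, 1).uniform_(-2.0, 2.0).requires_grad_()
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    residual = u_t + 0.5 * u_xx                     # PDE residual at interior points

    # terminal condition enforced on a separate batch of space points
    x_T = torch.empty(256, 1).uniform_(-2.0, 2.0)
    u_T = net(torch.cat([torch.full_like(x_T, T), x_T], dim=1))

    loss = (residual ** 2).mean() + ((u_T - g(x_T)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()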

Part II: Deep reinforcement learning. 

The second part of the lecture is concerned with the resolution of stochastic control problems in a model-free setting, i.e. when the environment and model coefficients are unknown, and optimal strategies are learnt by trial and error from sampled observations of states and rewards. This is the principle of reinforcement learning (RL), a classical topic in machine learning which has attracted increasing interest in the stochastic analysis/control community. We shall review the basics of RL theory and present the latest developments on policy gradient, actor/critic and q-learning methods in continuous time.
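The following is a minimal policy-gradient (REINFORCE) sketch of this model-free principle: the agent only observes sampled states and rewards from a toy linear-quadratic environment whose coefficients are hidden inside step_env, and improves a Gaussian policy by stochastic gradient steps on the sampled returns. The environment, horizon and learning rates are hypothetical illustrations, not the examples treated in the lectures.

# Minimal REINFORCE sketch in a model-free setting (toy, hypothetical example).
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # Gaussian policy mean
log_std = torch.zeros(1, requires_grad=True)                           # learnable log std
opt = torch.optim.Adam(list(policy.parameters()) + [log_std], lr=1e-2)

def step_env(x, a):
    # unknown to the agent: noisy linear dynamics with a quadratic running reward
    x_next = x + 0.1 * a + 0.1 * torch.randn_like(x)
    reward = -(x ** 2 + 0.1 * a ** 2)
    return x_next, reward

for episode in range(500):
    x = torch.randn(1)
    log_probs, rewards = [], []
    for t in range(20):
        mean = policy(x)
        dist = torch.distributions.Normal(mean, log_std.exp())
        a = dist.sample()
        log_probs.append(dist.log_prob(a).sum())
        x, r = step_env(x, a)
        rewards.append(r.sum())
    # REINFORCE: weight each log-probability by the return accumulated after it
    returns = torch.flip(torch.cumsum(torch.flip(torch.stack(rewards), [0]), 0), [0])
    loss = -(torch.stack(log_probs) * returns.detach()).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()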

Part III: Generative modeling for time series via optimal transport approach. 

We present novel generative models, based on diffusion processes and an optimal transport approach, for simulating new samples from time series data distributions.
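As a generic illustration of diffusion-based generative modelling for time series (and not the optimal transport construction of the lectures), the sketch below trains a denoising network on toy sinusoidal series: each sample is progressively noised according to a fixed schedule and the network learns to predict the added noise. Data, noise schedule and architecture are all hypothetical choices.

# Minimal denoising-diffusion training sketch on toy time series (illustrative only).
import math
import torch
import torch.nn as nn

L = 16                                    # length of each series
betas = torch.linspace(1e-4, 0.02, 100)   # hypothetical noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# network takes a noised series plus the normalised diffusion time, predicts the noise
eps_net = nn.Sequential(nn.Linear(L + 1, 128), nn.ReLU(), nn.Linear(128, L))
opt = torch.optim.Adam(eps_net.parameters(), lr=1e-3)

def sample_data(n):
    # toy data: sinusoids with random phase, standing in for real time series
    t = torch.linspace(0, 1, L)
    phase = 2 * math.pi * torch.rand(n, 1)
    return torch.sin(2 * math.pi * t + phase)

for step in range(2000):
    x0 = sample_data(128)
    k = torch.randint(0, len(betas), (128,))
    a_bar = alphas_bar[k].unsqueeze(1)
    eps = torch.randn_like(x0)
    x_k = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps          # forward noising
    inp = torch.cat([x_k, k.float().unsqueeze(1) / len(betas)], dim=1)
    loss = ((eps_net(inp) - eps) ** 2).mean()                   # predict the added noise
    opt.zero_grad()
    loss.backward()
    opt.step()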

Lecture 2 by Professor Ellen Powell

Notes based on "Lecture Notes on the Gaussian Free Field" by Wendelin Werner and Ellen Powell

One simple way to think of the Gaussian Free Field (GFF) is that it is the most natural and tractable model for a random function defined either on a discrete graph (each vertex of the graph is assigned a random real-valued height, and the distribution favours configurations where neighbouring vertices have similar heights) or on a subdomain of Euclidean space. The goal of these lectures is to give an elementary, self-contained introduction to both of these models, and to highlight some of their main properties. We will start with a gentle introduction to the discrete GFF, and discuss its various resampling properties and decompositions. We will then move on to the continuum GFF, which can be obtained as an appropriate limit of the discrete GFF when it is defined on a sequence of increasingly fine graphs. We will explain what sort of random object (i.e., a generalised function) it actually is, and how to make sense of various properties that generalise those of the discrete GFF.
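To make the discrete picture concrete, here is a minimal sketch of sampling a discrete GFF on an n x n box with zero boundary conditions: the interior heights form a centred Gaussian vector whose covariance is the inverse of the graph Laplacian, so a sample can be produced by applying an inverse Cholesky factor to white noise. The normalisation below (density proportional to exp(-(1/2) h^T L h)) is one common convention and may differ by a constant from the one used in the lecture notes.

# Minimal sketch: sample a discrete GFF on an n x n box with zero boundary.
import numpy as np

n = 30                                    # interior vertices per side
N = n * n

def idx(i, j):
    return i * n + j

# graph Laplacian of the n x n grid with Dirichlet (zero) boundary conditions
L = np.zeros((N, N))
for i in range(n):
    for j in range(n):
        L[idx(i, j), idx(i, j)] = 4.0     # degree in Z^2; the boundary absorbs missing neighbours
        for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            if 0 <= i + di < n and 0 <= j + dj < n:
                L[idx(i, j), idx(i + di, j + dj)] = -1.0

# density proportional to exp(-(1/2) h^T L h), so the covariance is L^{-1};
# sample by applying the inverse (transposed) Cholesky factor to white noise
C = np.linalg.cholesky(L)
h = np.linalg.solve(C.T, np.random.randn(N)).reshape(n, n)
print(h.shape, h.std())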

Reference:

Wendelin Werner and Ellen Powell, Lecture Notes on the Gaussian Free Field.

https://arxiv.org/abs/2004.04720

https://bookstore.ams.org/COSP/28