Berlin-Oxford Summer School in Mathematics of Random Systems 2024

The Berlin-Oxford Summer School in Mathematics of Random Systems 2024 is jointly organised by the Berlin-Oxford IRTG group and the EPSRC CDT in Mathematics of Random Systems. The Summer School will be held at St Hilda's College and the Mathematical Institute in Oxford. In addition to the lecture courses, there will be additional invited talks by guest lecturers and presentations by selected PhD students.

Event Timetable

The full Summer School timetable is available here.

Monday, 9th September

Registration and Lectures from 10:00 to 17:20 @ Rooftop Garden Suite, St Hilda’s College

Tuesday, 10th September

Lectures from 09:30 to 16:40 @ 14-16 Norham Gardens, OX2 6QB (previously the Cherwell Centre)

Wednesday 11th September

Lectures from 09:30 to 16:20 @ Rooftop Garden Suite, St Hilda’s College

Local walk from 16:20 for approximately an hour

Conference dinner at 19:00 @ St Hilda's Dining Hall

Thursday 12th September

Lectures from 09:30 to 15:00 @ Riverside Pavilion, St Hilda’s College

Punting Trip/Botanic Garden visit at 16:00 @ Magdalen Bridge Boathouse

Friday 13th September

Lectures and closing talk from 09:30 to 12:50 @ L4, Mathematical Institute, Andrew Wiles Building, Oxford

St Giles' Fair

The annual St Giles' Fair takes place on Monday and Tuesday, 9-10 September 2024. You can find the notice from Oxford City Council here, and further details about the fair here.

Invited Lecturers

Professor Huyen Pham (Université Paris Cité) 

Machine learning and stochastic control


Professor Ellen Powell (University of Durham)

The Gaussian Free Field


Organising Committee

Peter Bank (Technische Universitaet Berlin)
Ben Hambly (University of Oxford)

Venues: St Hilda's College and Mathematical Institute, Oxford


Lecture Courses

Professor Huyen Pham (Paris) - Machine learning and stochastic control

This course will present some recent developments on the interplay between stochastic control and machine learning. More precisely, we shall address the following topics:

Part I: Neural network-based algorithms for PDEs and stochastic control.

Deep learning, building on the approximation capability of neural networks and the efficiency of gradient descent optimizers, has shown remarkable success in recent years for solving high-dimensional partial differential equations (PDEs), arising notably in stochastic optimal control. We present the different methods developed in the literature, relying on either deterministic or probabilistic approaches: Deep Galerkin and physics-informed neural networks; deep BSDEs and deep backward dynamic programming.
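
To give a flavour of the residual-minimisation idea behind these methods, here is a minimal Deep Galerkin-style sketch in Python. It is illustrative only, not the lecturers' implementation: the PDE, network size, and training choices are assumptions, the second derivative is taken by finite differences, and the parameter gradients are written out by hand so the example is dependency-free (real implementations use automatic differentiation).

```python
import numpy as np

# Illustrative Deep Galerkin-style sketch (not the course's implementation):
# train a one-hidden-layer network to solve the 1D Poisson problem
# u''(x) = -pi^2 sin(pi x) on (0,1) with u(0) = u(1) = 0, whose exact
# solution is sin(pi x). The PDE residual is minimised at random collocation
# points; the second derivative is approximated by finite differences and
# gradients are computed by hand instead of automatic differentiation.

rng = np.random.default_rng(0)
m, h, lr = 16, 1e-2, 1e-4
theta = [rng.normal(size=m), rng.normal(size=m),       # w1, b1
         0.1 * rng.normal(size=m), np.zeros(1)]        # w2, b2

def u(x):
    """Trial solution x(1-x)N(x); the prefactor enforces the boundary values."""
    w1, b1, w2, b2 = theta
    return x * (1 - x) * (np.tanh(np.outer(x, w1) + b1) @ w2 + b2[0])

def du_dtheta(x):
    """Derivatives of the trial solution w.r.t. w1, b1, w2, b2."""
    w1, b1, w2, b2 = theta
    t = np.tanh(np.outer(x, w1) + b1)
    s = 1 - t ** 2                                     # sech^2
    g = (x * (1 - x))[:, None]
    return [g * s * w2 * x[:, None], g * s * w2, g * t, g]

f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)
xg = np.linspace(h, 1 - h, 101)                        # fixed evaluation grid

def residual_loss(x):
    r = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2 - f(x)
    return np.mean(r ** 2)

loss_before = residual_loss(xg)
for step in range(5000):
    x = rng.uniform(h, 1 - h, size=64)                 # collocation points
    us = [u(x + d) for d in (-h, 0.0, h)]
    gs = [du_dtheta(x + d) for d in (-h, 0.0, h)]
    r = (us[2] - 2 * us[1] + us[0]) / h ** 2 - f(x)    # PDE residual
    for k in range(4):                                 # plain gradient descent
        d2g = (gs[2][k] - 2 * gs[1][k] + gs[0][k]) / h ** 2
        theta[k] -= lr * 2 * np.mean(r[:, None] * d2g, axis=0)
loss_after = residual_loss(xg)
print(loss_before, loss_after)
```

The same residual-loss structure carries over to high dimensions, where the collocation points are sampled in a domain of R^d and the finite-difference stencil is replaced by automatic differentiation.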

Part II: Deep reinforcement learning. 

The second part of the lecture is concerned with the resolution of stochastic control in a model-free setting, i.e. when the environment and model coefficients are unknown, and optimal strategies are learnt by trial and error from sample observations of states and rewards. This is the principle of reinforcement learning (RL), a classical topic in machine learning which has attracted increasing interest in the stochastic analysis/control community. We shall review the basics of RL theory and present the latest developments on policy gradient, actor/critic and q-learning methods in continuous time.
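
The trial-and-error principle can be seen in its simplest classical form, tabular q-learning on a toy discrete-time MDP. This is a hedged illustration of the basics only (the MDP, hyperparameters, and episode count are assumptions of this sketch); the lectures themselves treat the continuous-time theory.

```python
import numpy as np

# Minimal tabular Q-learning on a toy chain MDP (illustration of the basics;
# the lecture course treats continuous-time RL). States 0..4 on a line;
# action 0 moves left, action 1 moves right; reaching state 4 pays reward 1
# and ends the episode. The agent never sees the transition rule directly,
# only sampled (state, reward) pairs: the model-free setting.

n_states, n_actions, goal = 5, 2, 4
gamma, alpha, eps = 0.9, 0.5, 0.2
rng = np.random.default_rng(1)
Q = np.zeros((n_states, n_actions))

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), goal)
    return s2, float(s2 == goal), s2 == goal       # next state, reward, done

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration: mostly exploit, sometimes try at random
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        target = r if done else r + gamma * Q[s2].max()   # Q-learning update
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)   # greedy policy learnt from samples
print(policy[:goal])        # moves right from every non-goal state
```

The continuous-time q-learning methods discussed in the lectures replace this tabular update with function approximation and a suitably rescaled notion of the q-function.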

Part III: Generative modeling for time series via an optimal transport approach.

We present novel generative models based on diffusion processes and an optimal transport approach for simulating new samples from time series data distributions.
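
As a toy illustration of the optimal transport viewpoint (an assumption of this note, not the lecturers' model): in one dimension, the optimal transport map between two distributions is the monotone rearrangement, the target quantile function composed with the source CDF. Pushing Gaussian noise through the empirical quantile function of the data therefore generates new samples with approximately the data's distribution.

```python
import numpy as np
from math import erf, sqrt

# Toy 1D generative sketch via optimal transport (not the course's model):
# the 1D OT map between distributions is F_target^{-1} o F_source.
# We push standard Gaussian noise through the empirical quantile function
# of the "data" to produce fresh samples with the data's distribution.

rng = np.random.default_rng(2)
data = rng.exponential(scale=1.0, size=20000)      # stand-in "data" sample

def gaussian_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))        # CDF of the noise source

noise = rng.standard_normal(10000)                 # latent Gaussian samples
u = np.vectorize(gaussian_cdf)(noise)              # uniforms via source CDF
samples = np.quantile(data, u)                     # empirical quantile = OT map

print(samples.mean(), data.mean())                 # means agree closely
```

For time series one transports path distributions rather than scalar ones, which is where the diffusion-process machinery of the lectures enters.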

Professor Ellen Powell (Durham) - The Gaussian Free Field

One simple way to think of the Gaussian Free Field (GFF) is that it is the most natural and tractable model for a random function defined on either a discrete graph (each vertex of the graph is assigned a random real-valued height, and the distribution favours configurations where neighbouring vertices have similar heights) or on a subdomain of Euclidean space. The goal of these lectures is to give an elementary, self-contained introduction to both of these models, and highlight some of their main properties. We will start with a gentle introduction to the discrete GFF, and discuss its various resampling properties and decompositions. We will then move on to the continuum GFF, which can be obtained as an appropriate limit of the discrete GFF when it is defined on a sequence of increasingly fine graphs. We will explain what sort of random object (i.e. a generalised function) it actually is, and how to make sense of various properties that generalise those of the discrete GFF.
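
As a concrete illustration (added here as a sketch, not part of the course abstract): on a graph, the zero-boundary discrete GFF is a centred Gaussian vector whose covariance is the Green's function of the discrete Laplacian with Dirichlet boundary conditions, so it can be sampled exactly by a Cholesky factorisation. The path graph below is the simplest example.

```python
import numpy as np

# Sampling a zero-boundary discrete GFF on the path graph {0, 1, ..., n}
# (a minimal illustration). The field on the interior vertices is a centred
# Gaussian vector whose covariance is the Green's function of the discrete
# Laplacian with Dirichlet boundary conditions at 0 and n.

n = 10
# Dirichlet Laplacian on the n-1 interior vertices: tridiagonal (-1, 2, -1).
L = 2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)
G = np.linalg.inv(L)                      # Green's function = covariance

# For the path graph, G[i, j] = min(i, j) * (n - max(i, j)) / n in the
# 1-indexed labelling of interior vertices; e.g. the variance at vertex k
# is k(n - k)/n, maximal in the middle, echoing a Brownian bridge.

rng = np.random.default_rng(3)
h = np.linalg.cholesky(G) @ rng.standard_normal(n - 1)   # one exact GFF sample
field = np.concatenate(([0.0], h, [0.0]))                # zero boundary values
print(field.round(2))
```

Neighbouring heights are positively correlated, which is the "neighbours have similar heights" preference described above; on increasingly fine graphs this construction approximates the continuum GFF.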

Reference:

Wendelin Werner and Ellen Powell, Lecture notes on the Gaussian Free Field

https://arxiv.org/abs/2004.04720

https://bookstore.ams.org/COSP/28