June 2023 CDT Mathematics of Random Systems Workshop



Milena Vuletic, CDT Student, University of Oxford

Simulation of Arbitrage-Free Implied Volatility Surfaces

We present a computationally tractable method for simulating arbitrage-free implied volatility surfaces. We illustrate how our method may be combined with a factor model based on historical SPX implied volatility data to generate dynamic scenarios for arbitrage-free implied volatility surfaces. Our approach reconciles static arbitrage constraints with a realistic representation of the statistical properties of implied volatility co-movements.
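The static arbitrage constraints mentioned in the abstract are commonly expressed as two conditions on a discrete surface: total implied variance non-decreasing in maturity (no calendar arbitrage) and call prices convex in strike (no butterfly arbitrage). A minimal illustrative sketch of these standard checks (not the authors' simulation method; grid sizes and names are hypothetical):

```python
# Sketch of two standard static no-arbitrage checks on a discrete
# implied-volatility grid; not the authors' simulation method.
import numpy as np

def total_variance(iv, maturities):
    """Total implied variance w(k, T) = sigma^2(k, T) * T."""
    return iv**2 * maturities[:, None]

def is_calendar_arbitrage_free(iv, maturities):
    # Calendar condition: total variance non-decreasing in maturity.
    w = total_variance(iv, maturities)
    return bool(np.all(np.diff(w, axis=0) >= 0))

def is_butterfly_arbitrage_free(call_prices):
    # Butterfly condition: call prices convex in strike at each maturity,
    # i.e. non-negative second differences along the strike axis.
    return bool(np.all(np.diff(call_prices, n=2, axis=1) >= 0))

# Toy grid: 3 maturities x 5 strikes, flat 20% volatility.
maturities = np.array([0.25, 0.5, 1.0])
iv = np.full((3, 5), 0.20)
print(is_calendar_arbitrage_free(iv, maturities))  # True for a flat surface
```

A simulated surface passing both checks admits no static arbitrage on the grid; the talk's contribution is generating such surfaces with realistic dynamics.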


Nicola Muca Cirone, CDT Student, Imperial College London

Neural Signature Kernels

Motivated by the paradigm of reservoir computing, we consider randomly initialized controlled ResNets defined as Euler-discretizations of neural controlled differential equations (Neural CDEs), a unified architecture which encompasses both RNNs and ResNets. We show that in the infinite-width-depth limit and under proper scaling, these architectures converge weakly to Gaussian processes indexed on some spaces of continuous paths and with kernels satisfying certain partial differential equations (PDEs) varying according to the choice of activation function, extending the results of Hayou (2022); Hayou & Yang (2023) to the controlled and homogeneous case. In the special homogeneous case where the activation is the identity, we show that the equation reduces to a linear PDE and the limiting kernel agrees with the signature kernel of Salvi et al. (2021a). We name this new family of limiting kernels neural signature kernels. Finally, we show that in the infinite-depth regime, finite-width controlled ResNets converge in distribution to Neural CDEs with random vector fields which, depending on whether the weights are shared across layers, are either time-independent and Gaussian or behave like a matrix-valued Brownian motion.
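A controlled ResNet of the kind described above can be sketched as an Euler discretization of a neural CDE dX = phi(W X) dZ driven by the increments of a control path. The sketch below is illustrative only: the 1/sqrt(width) scaling, dimensions, and initialization are plausible assumptions, not the paper's exact specification. The shared_weights flag mirrors the abstract's distinction between weights shared across layers and layer-wise independent weights.

```python
# Hedged sketch of a randomly initialized controlled ResNet, i.e. an
# Euler discretization of a neural CDE: X_{k+1} = X_k + f(X_k) dZ_k.
# Scalings and initializations are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def controlled_resnet(path, width=64, shared_weights=True, activation=np.tanh):
    """
    path: array of shape (L+1, d) -- the driving control Z sampled at L+1 points.
    Euler step: X_{k+1} = X_k + (1/sqrt(width)) * activation(X_k @ W_k) @ dZ_k,
    with i.i.d. Gaussian weights W_k; shared_weights=True reuses one W across
    layers (the time-independent regime mentioned in the abstract).
    """
    L, d = path.shape[0] - 1, path.shape[1]
    X = rng.standard_normal(width) / np.sqrt(width)   # random initial state
    W_shared = rng.standard_normal((width, width, d))
    for k in range(L):
        dZ = path[k + 1] - path[k]                    # control increment
        W = W_shared if shared_weights else rng.standard_normal((width, width, d))
        # Vector field f(X): a (width, d)-valued map, contracted against dZ.
        drift = activation(np.einsum('i,ijc->jc', X, W)) / np.sqrt(width)
        X = X + drift @ dZ                            # Euler step driven by dZ
    return X
```

Sending width and the number of layers L to infinity (with appropriate scaling) is the regime in which the talk identifies the Gaussian-process limits and their neural signature kernels.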

2:30   Break

Renyuan Xu, Assistant Professor, University of Southern California

Reversible and Irreversible Decisions under Costly Information Acquisition

Many real-world analytics problems involve two significant challenges: estimation and optimization. Due to the typically complex nature of each challenge, the standard paradigm is estimate-then-optimize. By and large, machine learning or human learning tools are intended to minimize estimation error and do not account for how the estimates will be used in the downstream optimization problem (such as decision-making problems). In contrast, there is a line of literature in economics focusing on exploring the optimal way to acquire information and learn dynamically to facilitate decision-making. However, most of the decision-making problems considered in this line of work are static (i.e., one-shot) problems, which oversimplify the structure of many real-world problems that require dynamic or sequential decisions.

As a preliminary attempt to introduce more complex downstream decision-making problems after learning and to investigate how downstream tasks affect the learning behavior, we consider a simple example where a decision maker (DM) chooses between two products: an established product A with known return and a newly introduced product B with an unknown return. The DM will make an initial choice between A and B after learning about product B for some time. Importantly, our framework allows the DM to switch to product A later on at a cost if product B is selected as the initial choice. We establish the general theory and investigate the analytical structure of the problem through the lens of the Hamilton-Jacobi-Bellman equation and viscosity solutions. We then discuss how model parameters and the opportunity to reverse affect the learning behavior of the DM.
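The learning phase described above, in which the DM observes product B over time before committing, can be illustrated with a standard Kalman-Bucy filter: noisy observations dY = theta dt + dW of an unknown Gaussian return theta yield a posterior whose variance shrinks deterministically as information accrues. All parameters below are hypothetical and the sketch is not the paper's model, only the familiar filtering building block behind such costly-information setups.

```python
# Illustrative Kalman-Bucy filtering sketch (hypothetical parameters,
# not the paper's model): a DM observes dY = theta dt + noise dW of
# product B's unknown return theta ~ N(m0, v0) and updates a Gaussian
# posterior (m_t, v_t) over a learning horizon T.
import numpy as np

def learn_posterior(theta_true, m0=0.0, v0=1.0, noise=1.0, dt=0.01, T=2.0, seed=1):
    rng = np.random.default_rng(seed)
    m, v = m0, v0
    for _ in range(int(T / dt)):
        # Noisy observation increment of the unknown return.
        dY = theta_true * dt + noise * np.sqrt(dt) * rng.standard_normal()
        gain = v / noise**2
        m += gain * (dY - m * dt)        # posterior mean update
        v -= gain**2 * noise**2 * dt     # variance decays: dv/dt = -v^2/noise^2
    return m, v
```

After such a learning phase, the DM's initial choice and the option to later switch to product A at a cost turn the problem into the dynamic stopping-and-switching control problem the talk analyzes via the HJB equation.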

This is based on joint work with Thaleia Zariphopoulou and Luhao Zhang from UT Austin.