June 2023 CDT Mathematics of Random Systems Workshop
1:30 
Milena Vuletic, CDT Student, University of Oxford
Simulation of Arbitrage-Free Implied Volatility Surfaces
We present a computationally tractable method for simulating arbitrage-free implied volatility surfaces. We illustrate how our method may be combined with a factor model based on historical SPX implied volatility data to generate dynamic scenarios for arbitrage-free implied volatility surfaces. Our approach reconciles static arbitrage constraints with a realistic representation of the statistical properties of implied volatility co-movements.
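For background (not part of the abstract): the static arbitrage constraints referred to here are the standard shape conditions on the call price surface C(K, T), stated below in a simplified setting with zero rates and dividends.

```latex
% Static no-arbitrage conditions on a call price surface C(K,T),
% with zero interest rates and dividends assumed for simplicity:
\partial_T C(K,T) \ge 0
  \quad \text{(calendar-spread condition: prices increase in maturity)}
\partial_{KK} C(K,T) \ge 0
  \quad \text{(butterfly condition: convexity in strike)}
-1 \le \partial_K C(K,T) \le 0
  \quad \text{(vertical-spread condition: monotonicity in strike)}
```

Translating these conditions into the implied volatility parametrization is what makes simulating arbitrage-free surfaces nontrivial, since the constraints become nonlinear in the volatility.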

2:00 
Nicola Muca Cirone, CDT Student, Imperial College London
Neural Signature Kernels
Motivated by the paradigm of reservoir computing, we consider randomly initialized controlled ResNets, defined as Euler discretizations of neural controlled differential equations (Neural CDEs), a unified architecture which encompasses both RNNs and ResNets. We show that in the infinite-width-depth limit and under proper scaling, these architectures converge weakly to Gaussian processes indexed on some spaces of continuous paths and with kernels satisfying certain partial differential equations (PDEs) varying according to the choice of activation function, extending the results of Hayou (2022) and Hayou & Yang (2023) to the controlled and homogeneous case. In the special homogeneous case where the activation is the identity, we show that the equation reduces to a linear PDE and the limiting kernel agrees with the signature kernel of Salvi et al. (2021a). We name this new family of limiting kernels neural signature kernels. Finally, we show that in the infinite-depth regime, finite-width controlled ResNets converge in distribution to Neural CDEs with random vector fields which, depending on whether the weights are shared across layers, are either time-independent and Gaussian or behave like a matrix-valued Brownian motion.
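For reference, the signature kernel of Salvi et al. (2021a) mentioned in the abstract is characterized by a linear hyperbolic PDE of Goursat type: for two differentiable paths x and y, the kernel k(s, t) = ⟨S(x)ₛ, S(y)ₜ⟩ of their truncated-signature inner products solves

```latex
% Goursat PDE characterizing the signature kernel (Salvi et al., 2021a)
% for differentiable paths x on [0,S] and y on [0,T]:
\frac{\partial^2 k}{\partial s \, \partial t}(s,t)
  = \langle \dot{x}_s, \dot{y}_t \rangle \, k(s,t),
\qquad k(s,0) = k(0,t) = 1.
```

It is this linear equation that the limiting kernel of identity-activation controlled ResNets is shown to recover, motivating the name neural signature kernels for the nonlinear-activation analogues.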

2:30  Break  
2:50-3:50
Renyuan Xu, Assistant Professor, University of Southern California
Reversible and Irreversible Decisions under Costly Information Acquisition
Many real-world analytics problems involve two significant challenges: estimation and optimization. Due to the typically complex nature of each challenge, the standard paradigm is estimate-then-optimize. By and large, machine learning or human learning tools are intended to minimize estimation error and do not account for how the estimates will be used in the downstream optimization problem (such as decision-making problems). In contrast, there is a line of literature in economics focusing on exploring the optimal way to acquire information and learn dynamically to facilitate decision-making. However, most of the decision-making problems considered in this line of work are static (i.e., one-shot) problems, which oversimplify the structures of many real-world problems that require dynamic or sequential decisions. As a preliminary attempt to introduce more complex downstream decision-making problems after learning and to investigate how downstream tasks affect learning behavior, we consider a simple example where a decision maker (DM) chooses between two products: an established Product A with known return and a newly introduced Product B with an unknown return. The DM makes an initial choice between A and B after learning about Product B for some time. Importantly, our framework allows the DM to switch to Product A later on at a cost if Product B is selected as the initial choice. We establish the general theory and investigate the analytical structure of the problem through the lens of the Hamilton-Jacobi-Bellman equation and viscosity solutions. We then discuss how model parameters and the opportunity to reverse affect the learning behavior of the DM. This is based on joint work with Thaleia Zariphopoulou and Luhao Zhang from UT Austin.
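One way to picture the structure of the reversible case (an illustrative sketch under assumed notation, not the authors' exact formulation): once Product B is chosen, the DM's value function V couples a continuation region, where the DM keeps holding B and updating the belief about its return, with the option to switch to A at cost c, yielding a variational inequality of the form

```latex
% Stylized variational inequality for the reversible decision
% (illustrative only; V^A, c, and \mathcal{L} are assumed notation):
% V   = value of holding Product B while learning about its return,
% V^A = known value of committing to Product A,
% c   = cost of switching from B to A,
% \mathcal{L} = generator of the belief (filtering) process for B's return.
\min\Big\{ -\partial_t V - \mathcal{L} V \,,\; V - \big(V^A - c\big) \Big\} = 0.
```

The first argument is the HJB operator governing continued learning; the second encodes that V can never fall below the value of switching, and equality there identifies the switching boundary studied via viscosity solutions.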