Dynamics of market making algorithms in dealer markets

Xiong W

In the rapidly evolving landscape of electronic over-the-counter markets, the deployment of advanced market making algorithms has led to unprecedented efficiencies, but it has also raised regulatory concerns about unintended consequences of autonomous trading and market making algorithms, such as ‘algorithmic collusion’ and market instability. These concerns call for a better understanding of the impact of such algorithms on market dynamics. In this thesis, we propose a mathematical framework for studying the dynamics of algorithmic markets, focusing on the impact of competition, learning, and heterogeneity in dealer markets. We model strategic interactions and competition among dealers using game theory: discrete-time repeated games, stochastic differential games, and mean field games. The impact of learning by agents is introduced into this setting through reinforcement learning and implemented using decentralized deep reinforcement learning algorithms.

We initiate our investigation by constructing a game-theoretic model in which multiple market makers compete for market share, adjusting their quoted spreads in response to evolving market conditions according to strategies learned autonomously from market data, without direct communication. The learning dynamics are captured through a decentralized multi-agent reinforcement learning approach, which reveals a propensity for these algorithms to independently converge to pricing strategies that, while not explicitly collusive, mirror the outcomes of tacit collusion by maintaining supra-competitive price levels.
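
To make the mechanism concrete, the following is a minimal sketch in Python, not the specification used in the thesis, of how such decentralized learning of spreads can be set up: each of two dealers runs an independent, stateless Q-learning update over a small grid of admissible spreads, and client order flow is allocated by a stylized logit rule that favours tighter quotes. The spread grid, fill model, and all parameter values are hypothetical and chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)
spreads = np.array([0.01, 0.02, 0.03, 0.04, 0.05])   # admissible quoted spreads (hypothetical grid)
n_agents, n_actions = 2, len(spreads)
Q = np.zeros((n_agents, n_actions))                   # one action-value vector per dealer
alpha, eps, k, n_rounds = 0.05, 0.1, 80.0, 100_000    # learning rate, exploration, demand sensitivity

for t in range(n_rounds):
    # Decentralized epsilon-greedy choice: no information is exchanged between dealers.
    acts = [rng.integers(n_actions) if rng.random() < eps else int(Q[i].argmax())
            for i in range(n_agents)]
    quoted = spreads[acts]
    # Stylized client behaviour: a dealer's share of order flow decreases with its spread.
    weights = np.exp(-k * quoted)
    share = weights / weights.sum()
    for i, a in enumerate(acts):
        reward = quoted[i] * share[i]                 # expected margin earned this round
        Q[i, a] += alpha * (reward - Q[i, a])         # stateless (bandit-style) Q-update

print("learned spreads:", spreads[Q.argmax(axis=1)])

In this stylized setup each dealer observes only its own actions and rewards; any coordination that emerges in the learned spreads arises purely from repeated interaction through the shared order flow.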

We extend the analysis to a continuous-time setting, where the interactions of market makers are modelled as a stochastic differential game of intensity control under partial information. Competition among dealers corresponds to a Nash equilibrium, whereas collusion is described in terms of Pareto optima. This analytical exploration is complemented by a decentralized multi-agent deep reinforcement learning algorithm, which reveals how learning by market making algorithms can inadvertently lead to tacit collusion, pushing spreads significantly above competitive equilibrium levels.
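
As an illustration of the type of intensity-control problem involved (a standard stylization in the spirit of Avellaneda–Stoikov type models, with notation of our own choosing rather than the exact formulation of the thesis), each dealer $i$ controls bid and ask quotes and maximizes expected terminal wealth net of an inventory penalty:
\[
\sup_{\delta^{b,i},\,\delta^{a,i}} \ \mathbb{E}\!\left[\, X^i_T + q^i_T S_T - \phi \int_0^T (q^i_t)^2 \, dt \,\right],
\qquad
dX^i_t = (S_t + \delta^{a,i}_t)\, dN^{a,i}_t - (S_t - \delta^{b,i}_t)\, dN^{b,i}_t,
\qquad
q^i_t = N^{b,i}_t - N^{a,i}_t,
\]
where $S$ is a reference price, $\phi \geq 0$ an inventory-penalty parameter, and $N^{b,i}$, $N^{a,i}$ are point processes whose fill intensities decrease in dealer $i$'s quoted spreads relative to those of the other dealers, which is how competition enters. In a Nash equilibrium each dealer's quotes are optimal given the quotes of the others, whereas a Pareto optimum maximizes a weighted sum of the dealers' objectives and is associated with wider, collusive spreads.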

The final chapter extends these results to a large population of dealers, whose interactions are modelled as a mean field game where a representative dealer interacts with the quotes of other dealers. The benchmark situation representing competition among dealers corresponds to a mean field Nash equilibrium, for which we give conditions for existence and uniqueness. We investigate the influence of learning dynamics in this setting using mean field deep reinforcement learning. We show that, in a homogeneous population of dealers, learning can lead to supra-competitive quoting strategies, while the introduction of heterogeneity mitigates this effect.
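
Schematically (the notation here is ours, for illustration only), a mean field Nash equilibrium can be described as a fixed point between the representative dealer's best response and the population distribution of quotes:
\[
\delta^{*} \in \arg\max_{\delta}\; J(\delta;\,\mu),
\qquad
\mu = \mathrm{Law}\!\left(\delta^{*}\right),
\]
i.e. the quoting strategy that is optimal against the population distribution $\mu$ of quotes must itself reproduce $\mu$. The existence and uniqueness conditions mentioned above concern this fixed-point problem, and mean field deep reinforcement learning replaces the exact best response with a learned approximation.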

Our theoretical results and detailed numerical experiments provide new perspectives on market dynamics in the age of algorithmic trading and offer insights for market participants, risk managers, regulators, and policy makers on how autonomous algorithmic strategies affect market behavior in electronic over-the-counter markets. Market participants may consider using autonomous learning algorithms to generate market making quoting strategies, but they should remain cautious about algorithmic risks that could affect the competitiveness of the market. For risk managers, these market making algorithms are shown to manage inventory risk effectively, as they learn to adjust spreads based on inventory positions; however, risk managers must be aware of the regulatory risks associated with potential tacit collusion arising from the interactions of the learning algorithms. We suggest that regulators and policy makers revisit the existing rules for best execution, enforce mandatory audits of the algorithms, and implement market frameworks that ensure transparency in learning algorithms and encourage competition.