Mean Field Games for Jump Non-Linear Markov Process
Abstract: Mean-field game theory is the study of strategic decision making in very large populations of weakly interacting individuals. Mean-field games have been an active area of research over the last decade owing to their significance in many scientific fields; the foundations of mean-field theory go back to statistical and quantum physics. One may describe a mean-field game as a type of stochastic differential game in which the interaction between the players is of mean-field type, i.e., the players are coupled through their empirical measure. The theory was proposed by Lasry and Lions and, independently, by Huang, Malhamé, and Caines. Since then, mean-field games have become a rapidly growing area of research studied by many authors, although most of this work has been devoted to diffusion-type games. The main purpose of this thesis is to extend the theory of mean-field games to the jump case, in both discrete and continuous state spaces. Jump processes are an important tool in many areas of application, specifically when modeling abrupt events arising in real life: financial modeling (option pricing and risk management), networks (electricity grids and banking), and statistics (modeling and analyzing spatial data).

The thesis consists of two papers and one technical report which will be submitted soon.

In the first paper, we study a mean-field game in a finite state space, where the dynamics of the indistinguishable agents are governed by a controlled continuous-time Markov chain. We study the control problem for a representative agent in the linear-quadratic setting; a dynamic programming approach is used to derive the Hamilton-Jacobi-Bellman equation, from which the optimal strategy is obtained.
The main result shows that the individual optimal strategies for the mean-field game system constitute a 1/N-Nash equilibrium for the approximating system of N agents.

In the second paper, we generalize the previous results to agents driven by non-linear pure jump Markov processes in Euclidean space. Mathematically, this means working with linear operators in Banach spaces adapted to the integro-differential operators of jump type, and with non-linear partial differential equations, instead of the linear transformations in Euclidean spaces used in the first paper. As a by-product, a generalization of the Koopman operator is presented. In this setting, we study the control problem in a more general sense, i.e., the cost function is not necessarily of linear-quadratic form. We show that the resulting unique optimal control is of Lipschitz type, and a fixed-point argument is used to construct the approximate Nash equilibrium. In addition, we show that the rate of convergence is of a special order as a consequence of utilizing a non-linear pure jump Markov process.

In the third paper, we develop our approach to treat a case that is more realistic from a modeling perspective: all players are subject to an additional common noise of Brownian type. In particular, we study the well-posedness and regularity of a jump version of the stochastic kinetic equation. Finally, we show that the solution of the master equation, a type of second-order partial differential equation on the space of probability measures, provides an approximate Nash equilibrium. This paper is not yet finished and remains in preprint form; we have therefore decided not to include it in the thesis, but an outlook on the paper is given.
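To give a flavour of the finite-state setting of the first paper, the sketch below iterates the backward Hamilton-Jacobi-Bellman recursion and the forward Kolmogorov equation to a mean-field fixed point for a toy two-state controlled Markov chain. This is an illustrative construction only, not the model of the thesis: the two-state chain, the quadratic control cost a²/2, the congestion cost m[x], and all parameters are assumptions chosen for the example.

```python
# Toy two-state mean-field game (illustrative only, not the thesis model).
# Each agent controls its jump rate a >= 0 to the other state, paying a
# running cost a**2 / 2 + m[x], where m is the population distribution.

T, NT = 1.0, 50          # time horizon and number of time steps
DT = T / NT

def backward_hjb(m_flow):
    """Value function of a representative agent, given the population flow.
    The pointwise minimizer of a**2/2 + a*(u[y]-u[x]) is a = max(0, u[x]-u[y])."""
    u = [[0.0, 0.0] for _ in range(NT + 1)]   # zero terminal cost
    for t in range(NT - 1, -1, -1):
        for x in (0, 1):
            y = 1 - x
            du = u[t + 1][y] - u[t + 1][x]
            a = max(0.0, -du)                 # optimal jump rate x -> y
            u[t][x] = u[t + 1][x] + DT * (0.5 * a * a + m_flow[t][x] + a * du)
    return u

def forward_kolmogorov(u, m0):
    """Population distribution induced by the optimal jump rates."""
    m = [list(m0)] + [[0.0, 0.0] for _ in range(NT)]
    for t in range(NT):
        a = [max(0.0, u[t + 1][x] - u[t + 1][1 - x]) for x in (0, 1)]
        for x in (0, 1):
            y = 1 - x
            m[t + 1][x] = m[t][x] + DT * (m[t][y] * a[y] - m[t][x] * a[x])
    return m

m0 = [0.9, 0.1]                               # initially crowded state 0
m_flow = [list(m0) for _ in range(NT + 1)]    # initial guess: frozen flow
for _ in range(200):                          # damped fixed-point iteration
    m_new = forward_kolmogorov(backward_hjb(m_flow), m0)
    m_flow = [[0.5 * (a + b) for a, b in zip(r1, r2)]
              for r1, r2 in zip(m_flow, m_new)]

print(m_flow[-1])  # congestion drives mass out of the crowded state
```

At the fixed point, each agent's strategy is optimal against the population flow that those very strategies generate; the thesis shows that, used by N agents, such strategies form a 1/N-Nash equilibrium.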