A Probabilistic Approach to Non-Markovian Impulse Control

Abstract: This thesis treats mathematical questions that arise in connection with certain stochastic optimal control problems, in particular of switching and impulse type. Both are extensions of the well-known optimal stopping problem, which amounts to finding the optimal time to stop a payoff that evolves in a random manner. There the control is merely a stopping time, making optimal stopping one of the most primitive stochastic control problems.

In the extensions above, the control takes the form of a double sequence (τ_n, β_n), where the τ_n are stopping times and the β_n are random variables. In optimal switching, we switch "mode" at each stopping time τ_n according to the discrete random variable β_n, while in impulse control these variables take values in a compact set and represent impulses with which we hit the system, causing it to "jump". As in optimal stopping, the goal is to find a control that maximizes a pre-defined performance measure. Generally speaking, breaking a control problem down into smaller ones is known as the Bellman principle, which we establish to be applicable in our settings.

Paper 1 considers an impulse control problem where, on the one hand, the control enters the volatility and, on the other, the underlying system is non-Markovian. Paper 2 explores a Feynman-Kac type formula for the problem in Paper 1: in short, we establish the classic correspondence between conditional expectations and partial differential equations, the conditional expectation in question being the expected profit of the impulse problem. Paper 3 treats a particular non-Markovian switching problem with signed costs.
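The Bellman principle mentioned above can be illustrated in its simplest setting, the optimal stopping of a discrete random walk. The following is a minimal sketch, not taken from the thesis: the payoff function, horizon, and binomial dynamics are illustrative assumptions, chosen to show how the value of stopping is computed by backward induction (the discrete Snell envelope).

```python
def snell_envelope(g, T, p=0.5):
    """Value of optimally stopping a walk X_{t+1} = X_t +/- 1 by time T.

    Returns value[(t, x)] = sup over stopping times tau >= t of
    E[g(X_tau) | X_t = x], computed backwards in time via the
    Bellman recursion: value = max(stop now, expected continuation).
    Illustrative toy model only, not the non-Markovian setting of the thesis.
    """
    value = {}
    for x in range(-T, T + 1):           # terminal condition: must stop at T
        value[(T, x)] = g(x)
    for t in range(T - 1, -1, -1):       # step backwards in time
        for x in range(-t, t + 1):       # reachable states at time t from 0
            cont = p * value[(t + 1, x + 1)] + (1 - p) * value[(t + 1, x - 1)]
            value[(t, x)] = max(g(x), cont)   # stop now vs. continue
    return value

# Example: payoff g(x) = max(x, 0), start at 0, horizon 4.
v = snell_envelope(lambda x: max(x, 0), T=4)
print(v[(0, 0)])  # → 0.75
```

The recursion makes the "breaking into smaller problems" concrete: the value at time t is expressed entirely through the value at time t + 1. In the switching and impulse settings of the thesis, the same principle applies but the control at each stage is a pair (stopping time, mode or impulse) rather than a single stopping decision.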
