Inference and Online Learning in Structured Stochastic Systems

Abstract: This thesis contributes to the field of stochastic online learning through a collection of six papers, each addressing a distinct online learning or inference problem under a specific structure. The first four papers focus on exploration and inference problems, uncovering fundamental information-theoretic limits and devising efficient algorithms under various structures. The last two papers focus on maximizing rewards by efficiently exploiting these structures.

The first paper addresses the problem of learning to cluster items based on binary user feedback to multiple questions. It establishes information-theoretic lower bounds on the clustering error for both uniform and adaptive selection strategies under a fixed budget of rounds or users, and proposes an adaptive algorithm that efficiently allocates this budget.

The second paper tackles the challenge of recovering hidden communities in the Labeled Stochastic Block Model from a single-shot observation of the labels. It introduces a computationally efficient algorithm, Instance-Adaptive Clustering, the first to match instance-specific lower bounds on the expected number of misclassified items.

The third paper studies the best-arm identification (or simple regret minimization) problem in a Bayesian setting. It considers a prior distribution over bandit instances and defines the Bayesian simple regret as the expectation of the simple regret under this prior (one standard way of writing this quantity is displayed after the abstract). Assuming certain continuity conditions on the prior, it characterizes the rate of the Bayesian simple regret, revealing that its leading term stems from instances where the gap between the optimal and suboptimal actions is less than √(log(T)/T), where T is the budget of rounds.

The fourth paper contributes to the fixed-budget best-arm identification problem for two-armed bandits with Bernoulli rewards. It demonstrates the optimality of uniform sampling, which samples the two arms evenly: no algorithm that performs at least as well as uniform sampling on every bandit instance can strictly outperform it on some instance.

The fifth paper revisits the regret minimization problem in sparse stochastic contextual linear bandits. It introduces a new algorithm, the Thresholded Lasso Bandit, which estimates the linear reward function together with its sparse support, and then selects an arm based on these estimates (a schematic sketch of this loop is given below). The algorithm enjoys tighter regret upper bounds than previous algorithms and outperforms them numerically.

The sixth and final paper provides a theoretical analysis of recommendation systems in an online setting where the user-item preference probabilities are unknown but structured. It derives regret lower bounds under various structural assumptions and designs algorithms whose regret matches these bounds. The analysis reveals the relative weights of the different components of regret, yielding valuable insights into the design of efficient online recommendation algorithms.

Overall, this thesis addresses the technical challenges of structured stochastic online learning, providing new insights into the power and limitations of adaptivity in these problems.
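
For concreteness, the quantity studied in the third paper can be written out as follows. This is a minimal formalization in our own notation (prior Π over instances θ, mean rewards μ_a(θ), and recommended arm â_T after the budget of T rounds); the thesis may use different symbols.

    % Bayesian simple regret after T rounds: the (frequentist) simple regret,
    % averaged over instances theta drawn from the prior Pi.
    % All symbols here are illustrative, not taken verbatim from the paper.
    \[
      \mathrm{BSR}(T)
        \;=\; \mathbb{E}_{\theta \sim \Pi}\!\Big[
                \mathbb{E}\big[\, \max_{a} \mu_a(\theta) \;-\; \mu_{\hat a_T}(\theta) \,\big]
              \Big]
    \]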

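The estimate-threshold-select loop attributed to the Thresholded Lasso Bandit in the fifth paper can be sketched in a few lines. The sketch below is our own minimal illustration: the function select_arm, the scikit-learn Lasso solver, the fixed values of lam and threshold, and the least-squares refit are assumptions made for exposition; the paper specifies its own (in particular, time-dependent) choices for the regularization and threshold.

    # A minimal sketch of one round of a thresholded-Lasso bandit.
    # Hypothetical names and constants; not the paper's exact algorithm.
    import numpy as np
    from sklearn.linear_model import Lasso

    def select_arm(X_hist, r_hist, arm_contexts, lam=0.1, threshold=0.05):
        # Step 1: Lasso estimate of the linear reward parameter.
        lasso = Lasso(alpha=lam, fit_intercept=False)
        lasso.fit(X_hist, r_hist)
        theta_hat = lasso.coef_

        # Step 2: threshold the coefficients to estimate the sparse support.
        support = np.abs(theta_hat) > threshold

        # Step 3: refit by least squares on the estimated support only.
        theta_refit = np.zeros_like(theta_hat)
        if support.any():
            theta_refit[support] = np.linalg.lstsq(
                X_hist[:, support], r_hist, rcond=None)[0]

        # Step 4: play the arm with the highest predicted reward.
        return int(np.argmax(arm_contexts @ theta_refit))

    # Toy usage: a 3-sparse parameter in dimension 50.
    rng = np.random.default_rng(0)
    theta_true = np.zeros(50)
    theta_true[:3] = [1.0, -0.5, 0.8]
    X_hist = rng.normal(size=(200, 50))
    r_hist = X_hist @ theta_true + 0.1 * rng.normal(size=200)
    arm_contexts = rng.normal(size=(10, 50))
    print(select_arm(X_hist, r_hist, arm_contexts))

The point of the thresholding step is that it turns a dense Lasso estimate into an explicit support estimate, so that subsequent estimation and arm selection can be carried out in the much smaller subspace of relevant features.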