Solution of linear programming and non-linear regression problems using linear M-estimation methods

Abstract: This thesis is devoted to algorithms for solving two optimization problems using linear M-estimation methods, and to their implementation. First, an algorithm for the non-linear M-estimation problem is considered. The main idea of the algorithm is to linearize the residual function in each iteration and thus compute the iteration step by solving a linear M-estimation problem. A 2-norm bound on the variables restricts the step size to guarantee convergence. The second algorithm solves the dual linear programming problem by making a "smooth" approximation of edges and inequality constraints using quadratic functions, which makes it possible to use Newton's method to find the optimal solution. The quadratic approximation of the inequality constraints makes it a penalty function algorithm. The implementation uses sparse matrix techniques. Since it is an active set method, the old factor can be reused when calculating the new step, by up- and downdating it. Only occasionally, when the downdating fails, does the factor instead have to be recomputed with a sparse multifrontal LQ-factorization.
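As a rough illustration of the first algorithm (a sketch only; the symbols $r$, $J_k$, $\rho$, and $\Delta_k$ are generic notation, not taken from the thesis), the linearized subproblem solved at each iterate $x_k$ can be written as

\[
  \min_{p} \; \sum_{i=1}^{m} \rho\bigl( r_i(x_k) + (J_k p)_i \bigr)
  \quad \text{subject to} \quad \|p\|_2 \le \Delta_k ,
\]

where $r$ is the residual function, $J_k$ its Jacobian at $x_k$, $\rho$ the M-estimation loss function, and $\Delta_k$ the 2-norm bound on the step; the next iterate is then $x_{k+1} = x_k + p$.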
