  • This event has passed

Xavier Fontaine : Sequential learning and stochastic optimization of convex functions

11 December 2020 @ 14:00 - 17:00

Abstract: Stochastic optimization algorithms are a central tool in machine learning. They are typically used to minimize a loss function, learn hyperparameters and derive optimal strategies. In this thesis we study several machine learning problems that all involve the minimization of a noisy function, which is often convex. Inspired by real-life applications, we focus on sequential learning problems, i.e., situations where the data has to be processed "on the fly", in an online manner. The first part of the thesis is therefore devoted to three different sequential learning problems, all of which face the classical "exploration vs. exploitation" trade-off. In each of these problems a decision maker has to take actions in order to maximize a reward or to estimate a parameter under uncertainty, meaning that the rewards or the feedback of the possible actions are unknown and noisy. The optimization task therefore has to be carried out while estimating the unknown parameters of the feedback functions, which makes these problems both difficult and interesting. As in many sequential learning problems, we aim to minimize the regret of the algorithms we propose, i.e., the difference between the reward actually achieved and the best possible reward achievable with full knowledge of the feedback functions. We show that all of these problems can be cast as stochastic convex optimization problems, and we propose and analyze algorithms to solve them. We derive minimax convergence rates for these algorithms, using techniques from both the stochastic convex optimization and bandit learning literatures.

In the second part of the thesis we focus on the analysis of the Stochastic Gradient Descent (SGD) algorithm, probably one of the most widely used stochastic optimization algorithms in machine learning. We provide an exhaustive analysis in the convex setting and in some non-convex situations by studying the associated continuous-time model. The new analysis we propose consists in choosing an appropriate energy function, deriving convergence results for the continuous-time model using stochastic calculus, and then transposing this analysis to the discrete case with a similar discrete energy function. The insights gained from the continuous case help to design the proof in the discrete setting, which is generally more intricate. This analysis yields simpler proofs than existing methods, and leads to new optimal convergence results in the convex setting without averaging, as well as new convergence results in the weakly quasi-convex setting. Our method emphasizes the links between the continuous and discrete models by presenting theorems with similar statements and proofs with the same structure.
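The abstract describes regret and the continuous-time view of SGD only in words. As a rough sketch in standard notation (the symbols below, including f, x_t, \gamma_t, \sigma and W_t, are generic choices and are not taken from the thesis), the quantities it refers to can be written as:

    % Regret after T rounds: gap between the best fixed action in hindsight
    % and the reward actually collected by the algorithm.
    R_T = \max_{x \in \mathcal{X}} \sum_{t=1}^{T} r_t(x) - \sum_{t=1}^{T} r_t(x_t)

    % SGD iteration with step sizes \gamma_t and a noisy gradient g_t,
    % an unbiased estimate of \nabla f(x_t):
    x_{t+1} = x_t - \gamma_t g_t, \qquad \mathbb{E}[g_t \mid x_t] = \nabla f(x_t)

    % Associated continuous-time model: a gradient flow perturbed by Brownian
    % noise, analyzed through an energy (Lyapunov) function:
    \mathrm{d}X_t = -\nabla f(X_t)\,\mathrm{d}t + \sigma(t)\,\mathrm{d}W_t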
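To make the second part concrete, here is a minimal, hypothetical Python sketch (not code from the thesis): SGD with decreasing step sizes on a toy convex quadratic, tracking the squared distance to the minimizer as a simple discrete energy function.

    import numpy as np

    # Toy illustration only: SGD on f(x) = 0.5 * ||x||^2 with Gaussian gradient
    # noise, monitoring a discrete "energy" E_t = ||x_t - x*||^2.
    rng = np.random.default_rng(0)
    dim = 10
    x = rng.normal(size=dim)       # initial iterate
    x_star = np.zeros(dim)         # minimizer of f
    noise_std = 0.5

    for t in range(1, 10_001):
        grad = x                                               # exact gradient of f at x
        noisy_grad = grad + noise_std * rng.normal(size=dim)   # unbiased noisy gradient
        step = 1.0 / t                                         # decreasing step size gamma_t = 1/t
        x = x - step * noisy_grad                              # SGD update
        if t % 2000 == 0:
            energy = np.sum((x - x_star) ** 2)                 # discrete energy function
            print(f"t={t:5d}  energy={energy:.4f}")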

Supervised by Vianney Perchet

Title: Sequential learning and stochastic optimization of convex functions
