BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Eurandom - ECPv5.10.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Eurandom
X-ORIGINAL-URL:https://www.eurandom.tue.nl
X-WR-CALDESC:Events for Eurandom
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20140101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20140326
DTEND;VALUE=DATE:20140329
DTSTAMP:20211205T010853Z
CREATED:20190715T103039Z
LAST-MODIFIED:20190715T103346Z
UID:2803-1395792000-1396051199@www.eurandom.tue.nl
SUMMARY:YES VII: "Stochastic Optimization and Online Learning"
DESCRIPTION:Summary\nStochastic optimization embodies a collection of methodologies and theory that aim at devising optimal solutions to countless real-world inference problems\, particularly when these involve uncertain and missing data. At the heart of stochastic optimization is the idea that many deterministic optimization problems can be addressed in a more powerful and convenient way by introducing intrinsic randomness in the optimization algorithms. Furthermore\, this generalization gives rise to a set of techniques that are well suited for settings involving uncertain\, incomplete\, or missing data. Online (machine) learning is concerned with learning and prediction in a sequential and online fashion. In many settings the goal of online learning is the optimal prediction of a sequence of instances\, possibly given a sequence of side information. For example\, the instances might correspond to the daily value of a financial asset or the daily meteorological conditions\, and one wants to predict tomorrow’s value of the asset or weather conditions. Interestingly\, it is possible to devise very powerful online learning algorithms able to cope with adversarial settings\, meaning that a powerful adversary can generate a sequence of instances that attempts to “break” the algorithm’s strategy. However\, one can show that these algorithms must necessarily incorporate a randomization of the predictions\, and can often be cast as stochastic optimization algorithms. One of the goals of this workshop is to make such connections between online learning and stochastic optimization more transparent.\nParticularly important in the present workshop is the quantification of the performance of a given stochastic optimization/online learning procedure. This challenging question requires several ingredients to be answered adequately; in particular\, one needs to develop a proper optimality framework.
 Here parallels with modern statistical theory emerge\, as notions such as consistency\, convergence rates and minimax bounds\, common in statistical theory\, all have counterparts in stochastic optimization and online learning. There is therefore plenty of room for cross-fertilization between these fields\, which is the main motivation for this workshop.\nThe aim of the workshop “Stochastic Optimization and Online Learning” is to introduce these broad fields to young researchers\, in particular Ph.D. students\, postdoctoral fellows and junior researchers\, who are eager to tackle new challenges in stochastic optimization and online learning.\nSponsors\n\nOrganizers\nRui Castro (TU Eindhoven)\nEduard Belitser (VU Amsterdam)\n\nTutorial Speakers\nNicolò Cesa-Bianchi (Università degli Studi di Milano)\nFrancis Bach (INRIA - Laboratoire d'Informatique de l'École Normale Supérieure\, Paris)\nAnatoli Juditsky (Laboratoire Jean Kuntzmann\, Université Joseph Fourier\, Grenoble)\n\nProgramme\n\nWednesday (March 26th)\n09:20 - 09:50 Coffee and Registration\n09:50 - 10:00 Opening Remarks\n10:00 - 11:00 Nicolò Cesa-Bianchi\n11:00 - 11:30 Coffee Break\n11:30 - 12:30 Nicolò Cesa-Bianchi\n12:30 - 14:00 Lunch\n14:00 - 15:00 Francis Bach\n15:00 - 15:30 Coffee Break\n15:30 - 16:10 Gábor Bartók - Contributed talk\n18:30 - Workshop Dinner\n\nThursday (March 27th)\n10:00 - 11:00 Nicolò Cesa-Bianchi\n11:00 - 11:30 Coffee Break\n11:30 - 12:30 Francis Bach\n12:30 - 14:00 Lunch\n14:00 - 15:00 Francis Bach\n15:00 - 15:30 Coffee Break\n15:30 - 16:10 Talat Nazir - Contributed talk\n\nFriday (March 28th)\n09:00 - 10:00 Anatoli Juditsky\n10:00 - 10:30 Coffee Break\n10:30 - 11:30 Anatoli Juditsky\n11:30 - 12:00 Coffee Break\n12:00 - 13:00 Anatoli Juditsky\n13:00 - 14:30 Closing of the workshop and lunch\n\nAbstracts\nFrancis Bach
 \nBeyond stochastic gradient descent for large-scale machine learning\nMany machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data\, where there are many observations ("large n") and each of these is large ("large p"). In this setting\, online algorithms such as stochastic gradient descent\, which pass over the data only once\, are usually preferred over batch algorithms\, which require multiple passes over the data. Given n observations/iterations\, the optimal convergence rates of these algorithms are O(1/√n) for general convex functions and reach O(1/n) for strongly convex functions.\nIn this tutorial\, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior\, both in theory and practice: in the ideal infinite-data setting\, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of O(1/n) without strong convexity assumptions\, while in the practical finite-data setting\, an appropriate combination of batch and online algorithms leads to unexpected behaviors\, such as a linear convergence rate for strongly convex problems\, with an iteration cost similar to stochastic gradient descent. (Joint work with Nicolas Le Roux\, Eric Moulines and Mark Schmidt.)\nPresentation\n\nAnatoli Juditsky\nDeterministic and stochastic first order algorithms of large-scale convex optimization\nSyllabus:\n1. First Order Methods of proximal type. Nonsmooth Black Box setting: deterministic and stochastic Mirror Descent algorithm (MD). Convex-Concave Saddle Point problems via MD.\n2. Utilizing the problem's structure: Mirror Prox algorithm (MP). Smooth/Bilinear Saddle Point reformulations of convex problems: calculus and examples.
 The Mirror Prox algorithm. Favorable geometry domains and good proximal setups.\n3. Conditional Gradient type First Order Methods for problems with difficult geometry: smooth minimization\, norm-regularized smooth minimization\, and nonsmooth minimization.\nPresentation
URL:https://www.eurandom.tue.nl/event/yes-vii-stochastic-optimization-and-online-learning/
LOCATION:MF 11-12 (4th floor MetaForum Building\, TU/e)
END:VEVENT
END:VCALENDAR