01.10.2020 − 30.09.2022
Classical stochastic approximation methods, such as SGD in reproducing kernel Hilbert spaces, cannot adapt to regions of differing regularity of the target function, which dramatically slows local convergence. To overcome this drawback, we propose to analyze localized SGD approaches.
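As a toy illustration of the classical baseline (not the localized method studied in this project), one pass of SGD for least-squares regression in the RKHS induced by a Gaussian kernel can be sketched as follows; the kernel choice, step size `eta`, and bandwidth `sigma` are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=0.5):
    # Gaussian (RBF) kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def kernel_sgd(X, Y, eta=0.5, sigma=0.5):
    """One pass of SGD for least-squares regression in an RKHS.

    The iterate f_t lives in the RKHS and is stored through its
    expansion coefficients over the data points visited so far:
        f_t(x) = sum_i alpha_i * k(x_i, x).
    """
    coeffs = []  # alpha_i, one per processed sample
    for x, y in zip(X, Y):
        # evaluate the current iterate f_{t-1}(x_t)
        pred = sum(a * gaussian_kernel(xi, x, sigma)
                   for a, xi in zip(coeffs, X))
        # stochastic gradient step on the squared loss: each step
        # appends one new expansion term with coefficient -eta * residual
        coeffs.append(-eta * (pred - y))

    def f(x):
        return sum(a * gaussian_kernel(xi, x, sigma)
                   for a, xi in zip(coeffs, X))
    return f
```

The single global step size `eta` here is exactly what a localized approach would refine: it applies the same update everywhere, regardless of how regular the target function is near `x_t`.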
 Nicole Mücke, Enrico Reiss, Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping, arXiv:2006.10840
 Nicole Mücke, Gergely Neu, Lorenzo Rosasco, Beating SGD Saturation with Tail-Averaging and Minibatching, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
 Nicole Mücke, Reducing training time by efficient localized kernel regression, Proceedings of Machine Learning Research, PMLR 89:2603-2610, 2019.
 Nicole Mücke, Stochastic Gradient Descent Meets Distribution Regression, Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS) 2021, San Diego, California, USA. PMLR: Volume 130.
 Nicole Mücke, Enrico Reiss, Jonas Rungenhagen, Markus Klein, Data splitting improves statistical performance in overparametrized regimes, arXiv:2110.10956