Project Heads
Péter Koltai
Project Members
Mattes Mollenhauer
Project Duration
01.01.2022 – 31.03.2023
Located at
FU Berlin
We proposed an approach to spectral regularization algorithms for kernel-based supervised learning with infinite-dimensional response variables. Recent research shows that this scenario enjoys widespread practical use. However, virtually no theoretical results existed, owing to the mathematical complexity of this setting compared to the finite-dimensional learning settings investigated so far.
A powerful approach to deriving convergence rates for supervised learning problems can be formulated in terms of spectral regularization techniques for inverse problems (Caponnetto and De Vito, 2007). However, virtually all available literature on this topic requires the underlying linear operator describing the inverse problem to be compact. Recent applications motivate the investigation of infinite-dimensional supervised learning problems (Singh et al., 2019; Mollenhauer and Koltai, 2020). We have shown that in this case the underlying linear operator is typically not compact, which can have a drastic impact on the behaviour of the regularization techniques in use.
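As an illustration of the spectral regularization approach, the following minimal sketch implements Tikhonov (ridge) regularization for kernel regression with vector-valued responses, using curves sampled on a grid as a finite-dimensional stand-in for infinite-dimensional outputs. The Gaussian kernel, data, and parameter choices are illustrative assumptions, not taken from the publications discussed here.

```python
# Sketch: Tikhonov (ridge) spectral regularization for kernel regression
# with vector-valued responses. A discretized grid of output values serves
# as a finite-dimensional surrogate for function-valued responses.
# All data and parameters below are illustrative assumptions.
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Gaussian kernel Gram matrix between two sets of inputs."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def fit_tikhonov(X, Y, lam=1e-3, sigma=1.0):
    """Solve the regularized normal equations (K + n*lam*I) A = Y."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), Y)

def predict(X_train, A, X_test, sigma=1.0):
    """Evaluate the estimator at new inputs: f(x) = sum_i k(x, x_i) a_i."""
    return gaussian_kernel(X_test, X_train, sigma) @ A

# Illustrative example: each response is a noisy curve on a 50-point grid.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
grid = np.linspace(0, 1, 50)          # discretization of the output space
Y = np.sin(X + grid[None, :]) + 0.1 * rng.standard_normal((100, 50))
A = fit_tikhonov(X, Y, lam=1e-2)
Y_hat = predict(X, A, X)
```

Tikhonov regularization is only one instance of the general spectral regularization framework; other filter functions (e.g. spectral cut-off or iterative methods) fit the same template.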
Project outcomes
The goal of this project was to derive convergence results that apply to the infinite-dimensional scenario by combining techniques from spectral perturbation theory, statistical inverse problems, and concentration of measure in Hilbert and Banach spaces. In the course of the project, Mollenhauer, Mücke and Sullivan (2022) obtained general convergence rates for a variety of learning algorithms in an abstract regression setting that includes kernel regression. Convergence rates for the special setting of the so-called kernel conditional mean embedding were obtained by Li, Meunier, Mollenhauer and Gretton (2022). The latter work includes convergence results for the case that the model is misspecified and provides a complementary lower bound on the rates, proving that the obtained rates are indeed optimal.
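The following sketch illustrates the regularized empirical conditional mean embedding estimator of the type analyzed by Li, Meunier, Mollenhauer and Gretton (2022). The Gaussian kernels, data, and regularization parameter are illustrative assumptions only, not the setup of that paper.

```python
# Sketch: empirical conditional mean embedding (CME) via regularized
# least squares. The weights w(x) = (K_X + n*lam*I)^{-1} k_X(x) define
# mu_hat(x) = sum_i w_i(x) k_Y(., y_i), and conditional expectations are
# estimated as E[f(Y) | X = x] ~ sum_i w_i(x) f(y_i).
# All choices below are illustrative assumptions.
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    sq = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-sq / (2.0 * sigma**2))

def cme_weights(X_train, x_test, lam=1e-2, sigma=1.0):
    """Weights defining the empirical CME at the test inputs."""
    n = X_train.shape[0]
    K = gaussian_gram(X_train, X_train, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n),
                           gaussian_gram(X_train, x_test, sigma))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1))
Y = X**2 + 0.1 * rng.normal(size=(200, 1))   # Y depends nonlinearly on X
x0 = np.array([[0.5]])
W = cme_weights(X, x0)                       # shape (200, 1)
est = Y[:, 0] @ W                            # estimates E[Y | X = 0.5] ~ 0.25
```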
Project Webpages
Selected Publications
Blanchard, G. and Mücke, N. (2018). Optimal rates for regularization of statistical inverse learning problems. Foundations of Computational Mathematics, 18:971–1013.
Caponnetto, A. and De Vito, E. (2007). Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368.
Li, Z., Meunier, D., Mollenhauer, M., and Gretton, A. (2022). Optimal rates for regularized conditional mean embedding learning. In Advances in Neural Information Processing Systems 35.
Mollenhauer, M., Mücke, N., and Sullivan, T. J. (2022). Learning linear operators: Infinite-dimensional regression as a well-behaved non-compact inverse problem. arXiv preprint arXiv:2211.08875.
Mollenhauer, M. and Koltai, P. (2020). Nonparametric approximation of conditional expectation operators. arXiv preprint arXiv:2012.12917.
Park, J. and Muandet, K. (2020). A measure-theoretic approach to kernel conditional mean embeddings. In Advances in Neural Information Processing Systems 33.
Singh, R., Sahani, M., and Gretton, A. (2019). Kernel instrumental variable regression. In Advances in Neural Information Processing Systems 32.
Selected Pictures