Project Heads
Claudia Schillings
Project Members
Philipp Wacker (until 09/22), Mattes Mollenhauer (02/23 until 02/24), Dana Wrischnig (from 05/24), Ilja Klebanov (from 05/24)
Project Duration
01.03.2022 − 31.12.2024
Located at
FU Berlin
The project aims to combine innovative machine learning techniques with Kalman-based filtering approaches for inverse problems. We will explore subsampling strategies and surrogate-enhanced variants to improve performance for high-dimensional data and complex forward models. In addition, we will develop strategies for incorporating constraints on the parameters by linking them to the Bayesian approach to inverse problems [5].
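For orientation, the following is a minimal sketch of one way a basic ensemble Kalman inversion step can look, assuming a linear forward map for concreteness; the map G, the noise covariance Gamma, and all dimensions are illustrative assumptions, not the project's actual setup.

    import numpy as np

    rng = np.random.default_rng(0)

    d, k, J = 5, 10, 50                  # parameter dim, data dim, ensemble size
    G = rng.standard_normal((k, d))      # hypothetical linear forward map
    Gamma = 0.1 * np.eye(k)              # observational noise covariance
    u_true = rng.standard_normal(d)
    y = G @ u_true + rng.multivariate_normal(np.zeros(k), Gamma)

    U = rng.standard_normal((d, J))      # initial ensemble, one particle per column
    for _ in range(20):
        P = G @ U                        # forward-mapped ensemble
        Um = U.mean(axis=1, keepdims=True)
        Pm = P.mean(axis=1, keepdims=True)
        Cup = (U - Um) @ (P - Pm).T / J  # cross-covariance of parameters and outputs
        Cpp = (P - Pm) @ (P - Pm).T / J  # output covariance
        # Kalman-type update with perturbed observations
        Y = y[:, None] + rng.multivariate_normal(np.zeros(k), Gamma, size=J).T
        U = U + Cup @ np.linalg.solve(Cpp + Gamma, Y - P)

    print(np.linalg.norm(U.mean(axis=1) - u_true))  # reconstruction error of the ensemble mean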
This project will follow two major workstreams:
(1) Theoretical Foundations of Infinite-Dimensional Inference: We address the theory of (sub-)Gaussian measures on Hilbert and Banach spaces [2], seeking to combine these with recent advancements in infinite-dimensional estimation [1, MATH+ IN-8]. Our goal is to lay the groundwork for generalized results that complement ideas from the second workstream.
(2) Investigation of Specific Numerical Schemes: We will examine data-driven methodologies in Bayesian analysis, inverse problems, and numerical analysis to understand their behavior in high-dimensional settings [3, 4].
We investigate the learning of linear operators between Hilbert spaces, treating it as a high-dimensional least squares regression problem [1]. Although the forward operator of this inverse problem is non-compact, we establish an equivalence with a known compact inverse problem in scalar response regression. This enables us to derive dimension-free learning rates using techniques from kernel regression and concentration of measure. These results provide the theoretical foundation for workstream (1) above.
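A minimal finite-dimensional sketch of this regression setting (a truncation purely for illustration; the analysis in [1] works directly in infinite dimensions): given samples x_i with noisy responses y_i = A x_i + noise, a ridge-regularized least squares estimate of the operator A can be formed as below. All dimensions and the regularization parameter lam are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    p, q, n = 30, 20, 200                          # input dim, output dim, sample size
    A = rng.standard_normal((q, p)) / np.sqrt(p)   # unknown operator (finite truncation)
    X = rng.standard_normal((p, n))                # inputs, one sample per column
    Y = A @ X + 0.05 * rng.standard_normal((q, n)) # noisy responses

    lam = 1e-2                                     # ridge regularization parameter
    # Regularized least squares: A_hat = Y X^T (X X^T + n*lam*I)^{-1}
    A_hat = Y @ X.T @ np.linalg.inv(X @ X.T + n * lam * np.eye(p))

    print(np.linalg.norm(A_hat - A) / np.linalg.norm(A))  # relative estimation error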
We examine the tail behaviour of the norm of subgaussian vectors in Hilbert spaces, using a trace-class operator to control moments [2]. This leads to new bounds analogous to Hoeffding-type inequalities, as well as deviation bounds for positive random quadratic forms. We apply these results to establish variance bounds for the regularization of statistical inverse problems, contributing to workstream (1) above.
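To illustrate the flavour of such bounds, consider the prototype Gaussian case, a standard consequence of the Borell–TIS inequality (an illustration only, not the sharper subgaussian statement proved in [2]): for a centered Gaussian vector $X$ in a Hilbert space with trace-class covariance operator $\Sigma$,

    \mathbb{P}\big( \|X\| \ge \sqrt{\operatorname{tr}\Sigma} + t \big) \le \exp\!\Big( -\frac{t^2}{2\|\Sigma\|} \Big), \qquad t \ge 0,

where $\|\Sigma\|$ denotes the operator norm: the trace controls the typical size of the norm, while the operator norm governs the deviations above it.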
We introduce ensemble-based gradient inference (EGI), a method for extracting higher-order differential information from particle ensembles [3]. This enhances optimization and sampling methods such as consensus-based optimization and Langevin samplers. Our numerical studies show improved performance in exploring complex, multimodal settings, in line with the project's aim of developing efficient numerical schemes for high-dimensional problems within workstream (2) above. The code for the numerical examples can be found in this GitHub repository.
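A minimal sketch of one way first-order information can be read off an ensemble (a plain least squares fit of differences; the weighting and higher-order corrections of the EGI scheme in [3] are omitted, and the target function f is an illustrative assumption):

    import numpy as np

    rng = np.random.default_rng(2)

    def f(x):                          # hypothetical target: a simple quadratic
        return 0.5 * np.sum(x**2)

    d, J = 2, 30
    X = rng.standard_normal((J, d))    # particle ensemble, one particle per row
    F = np.array([f(x) for x in X])    # function evaluations at the particles

    def ensemble_gradient(i):
        # Least squares fit of f(x_j) - f(x_i) by g . (x_j - x_i) over the ensemble
        D = X - X[i]
        b = F - F[i]
        g, *_ = np.linalg.lstsq(D, b, rcond=None)
        return g

    # One explicit descent step using the inferred gradients
    step = 0.1
    X_new = X - step * np.array([ensemble_gradient(i) for i in range(J)])
    # ensemble contracts toward the minimizer on average
    print(np.linalg.norm(X, axis=1).mean(), np.linalg.norm(X_new, axis=1).mean())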
Publications within project
[1] Mollenhauer, M., Mücke, N., & Sullivan, T. J. (2022). Learning linear operators: Infinite-dimensional regression as a well-behaved non-compact inverse problem. arXiv preprint arXiv:2211.08875.
[2] Mollenhauer, M., & Schillings, C. (2023). On the concentration of subgaussian vectors and positive quadratic forms in Hilbert spaces. arXiv preprint arXiv:2306.11404.
[3] Schillings, C., Totzeck, C., & Wacker, P. (2022). Ensemble-based gradient inference for particle methods in optimization and sampling. arXiv preprint arXiv:2209.15420.
Related Publications
[4] Blömker, D., Schillings, C., Wacker, P., & Weissmann, S. (2022). Continuous time limit of the stochastic ensemble Kalman inversion: Strong convergence analysis. SIAM Journal on Numerical Analysis, 60(6), 3181-3215.
[5] Dashti, M., & Stuart, A. M. (2017). The Bayesian approach to inverse problems. In Handbook of Uncertainty Quantification. Springer.