Project Heads
Martin Eigel, Claudia Schillings, Gabriele Steidl
Project Members
Robert Gruhlke
Project Duration
01.01.2023 − 31.12.2024
Located at
FU Berlin
Generalised Wasserstein gradient flows connect measure transport and interacting particle systems. The project combines the analysis of efficient numerical methods for gradient flows, associated SDEs, and compressed functional approximations, with applications to Bayesian inversion with parametric PDEs and image reconstruction tasks.
Related Media
Density trajectory from a unimodal standard Gaussian to a non-symmetric multimodal density that is not of Gaussian-mixture type. The trajectory is defined through an Ornstein-Uhlenbeck process and its time-reversed counterpart. The drift term in the reverse process is defined through the score, which is obtained by solving a Hamilton-Jacobi-Bellman equation; the latter results from a Hopf-Cole transformation of the Fokker-Planck equation associated with the forward Ornstein-Uhlenbeck process.
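For concreteness, the transformation chain referenced in the caption can be sketched as follows (our own notation for a standard Ornstein-Uhlenbeck forward process; not taken from the project page):

% Forward OU process and its Fokker-Planck equation:
%   dX_t = -X_t \, dt + \sqrt{2} \, dB_t, \qquad
%   \partial_t p = \nabla \cdot (x \, p) + \Delta p .
% The Hopf-Cole transformation v = -\log p yields (d the spatial dimension)
%   \partial_t v = \Delta v - |\nabla v|^2 + x \cdot \nabla v - d,
% a Hamilton-Jacobi-Bellman equation with quadratic Hamiltonian.
% The score \nabla \log p = -\nabla v then enters the drift of the
% time-reversed process
%   dY_t = \big( Y_t + 2 \, \nabla \log p_{T-t}(Y_t) \big) \, dt
%          + \sqrt{2} \, d\bar{B}_t .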
Many objective functions in minimization problems can be expressed as the expectation of a loss. A common solution strategy is to minimize a corresponding empirical mean estimate. Unfortunately, the deviation between the exact and the empirical minimizer then depends on the sample size. As an alternative, we empirically project the gradient of the exact objective function onto the tangent space. Descent is ensured by optimal weighted least-squares approximation within an alternating minimization scheme.
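A minimal sketch of such an empirical projection by weighted least squares (our own toy setup; functions, weights, and the example basis are illustrative, not the project's code):

import jax
import jax.numpy as jnp

def weighted_lsq_projection(g, basis, x, w):
    """Coefficients c with sum_j c_j b_j ~ g in L2(mu), from samples x
    with importance weights w_i = (dmu/dq)(x_i) for sampling density q."""
    B = jnp.stack([b(x) for b in basis], axis=1)   # (n, m) design matrix
    sw = jnp.sqrt(w)
    # solve min_c sum_i w_i (g(x_i) - (B c)_i)^2
    c, *_ = jnp.linalg.lstsq(sw[:, None] * B, sw * g(x))
    return c

# Toy usage: project g(x) = x**3 onto span{1, x} under mu = N(0, 1).
# Sampling directly from mu gives unit weights; optimal sampling would
# replace this with a tailored density q and nontrivial weights.
x = jax.random.normal(jax.random.PRNGKey(0), (2000,))
c = weighted_lsq_projection(lambda t: t**3,
                            [jnp.ones_like, lambda t: t],
                            x, jnp.ones_like(x))
print(c)   # approx [0, 3], since E[x^3 * x] / E[x^2] = 3 under N(0, 1)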
The main goal of this project is the minimization of objective functions defined on high-dimensional Euclidean tensor spaces. In the case that the objective function allows for cheap evaluations on the Riemannian manifold of tensors given in hierarchical tree-based low-rank format, we construct a cheap approach for obtaining Riemannian gradients based on automatic differentiation. Examples of this type include (empirical) regression and completion problems.
This approach in turn overcomes the curse of dimensionality that arises when computing Riemannian gradients as projections of (intractable) Euclidean gradients onto the tangent space.
Low-rank tensor formats define a non-linear approximation class of tensors with a multilinear parametrization. They form a subclass of tensor networks: multigraphs with edge identifications, whose additional dangling edges represent the indices of the full tensor.
This topology allows for the efficient (sub-)contractions required to define the local projections that determine the degrees of freedom of the Riemannian gradient, as illustrated in the sketch below.
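A minimal sketch of the autodiff construction in the simplest case, the manifold of fixed-rank matrices X = U S V^T (our own toy; the project works with hierarchical tree-based tensor formats instead). The derivatives with respect to the factors encode the tangent-space components of the Riemannian gradient, so the Euclidean gradient G = df/dX is never formed and projected explicitly:

import jax
import jax.numpy as jnp
from jax import grad

def f_factored(U, S, V, A):
    # cheap evaluation on the manifold: 0.5 * ||U S V^T - A||_F^2
    return 0.5 * jnp.sum((U @ S @ V.T - A) ** 2)

def riemannian_gradient(U, S, V, A):
    gU, gS, gV = grad(f_factored, argnums=(0, 1, 2))(U, S, V, A)
    # factor derivatives satisfy: gU = G V S^T, gS = U^T G V, gV = G^T U S
    M  = gS                                           # core component
    Up = (gU - U @ (U.T @ gU)) @ jnp.linalg.inv(S).T  # = (I - UU^T) G V
    Vp = (gV - V @ (V.T @ gV)) @ jnp.linalg.inv(S)    # = (I - VV^T) G^T U
    # Riemannian gradient as tangent vector xi = U M V^T + Up V^T + U Vp^T
    return M, Up, Vp

# toy usage on a rank-3 point of the 20 x 15 matrix manifold
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
U, _ = jnp.linalg.qr(jax.random.normal(k1, (20, 3)))
V, _ = jnp.linalg.qr(jax.random.normal(k2, (15, 3)))
S = jnp.diag(jnp.array([3.0, 2.0, 1.0]))
A = jax.random.normal(k3, (20, 15))
M, Up, Vp = riemannian_gradient(U, S, V, A)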
Quasi-Monte Carlo meets kernel cubature
The main goal of this project is to develop a kernel cubature technique based on the concept of optimal sampling. Optimal sampling can be used to define empirical projections onto linear spaces with error bounds that exceed the best-approximation error only by a constant factor. While for general L2 functions such bounds hold in expectation, for functions belonging to a reproducing kernel Hilbert space (RKHS) they hold almost surely.
The latter case applies to the analysis of higher-order quasi-Monte Carlo methods, where the kernel of the RKHS is known explicitly. We therefore refine the best-approximation analysis in these RKHSs and derive an almost surely convergent quadrature that attains optimal rates.
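As a toy illustration of the kernel cubature ingredient (our own example; kernel, measure, and nodes are illustrative assumptions, not the project's construction): given nodes, the weights minimizing the worst-case integration error in the RKHS solve a linear system with the kernel matrix and the kernel mean embedding as right-hand side.

import jax.numpy as jnp

def kernel_cubature_weights(x):
    # w = K^{-1} z with K_ij = k(x_i, x_j), z_i = int k(x, x_i) dmu(x);
    # here mu is uniform on [0,1] and k(x,y) = 1 + min(x,y), for which
    # the mean embedding has the closed form z(y) = 1 + y - y^2 / 2.
    K = 1.0 + jnp.minimum(x[:, None], x[None, :])
    z = 1.0 + x - 0.5 * x ** 2
    return jnp.linalg.solve(K, z)

# Toy usage with midpoint nodes; the project would instead draw nodes
# by optimal sampling / higher-order QMC constructions.
x = (jnp.arange(16) + 0.5) / 16.0
w = kernel_cubature_weights(x)
f = lambda t: jnp.exp(t)
print(jnp.dot(w, f(x)), jnp.exp(1.0) - 1.0)   # cubature vs exact integral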
Neural JKO Sampling with Importance Correction
The main goal is the numerical approximation of Wasserstein gradient flows using the formalism of generalized minimizing movements, i.e. the Jordan-Kinderlehrer-Otto (JKO) scheme. We first discretize the JKO scheme and then use continuous normalizing flows to approximate the proximal mapping with respect to the previously obtained distribution. However, Wasserstein gradient flows are known to behave poorly for multimodal distributions. Hence, the resulting composition of transport maps is enriched by layers of rejection and resampling steps based on importance-weighted rejection.
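A simplified sketch of such an importance-based rejection and resampling layer (our own reduction; the acceptance constant, densities, and looping strategy are illustrative, and the project's actual correction layers may differ):

import jax
import jax.numpy as jnp

def rejection_layer(key, sample_q, log_q, log_p_tilde, n, c_quantile=0.95):
    """Resample from proposal q (the current flow output, with tractable
    density) toward the unnormalized target p_tilde; rejected samples
    are redrawn from q until n samples are accepted."""
    k0, key = jax.random.split(key)
    pilot = sample_q(k0, n)
    log_r = log_p_tilde(pilot) - log_q(pilot)
    log_c = jnp.quantile(log_r, c_quantile)   # pilot estimate of log c
    out = []
    while sum(a.shape[0] for a in out) < n:
        k1, k2, key = jax.random.split(key, 3)
        x = sample_q(k1, n)
        log_acc = jnp.minimum(log_p_tilde(x) - log_q(x) - log_c, 0.0)
        keep = jnp.log(jax.random.uniform(k2, (n,))) < log_acc
        out.append(x[keep])
    return jnp.concatenate(out)[:n]

# Toy usage: wide Gaussian proposal, bimodal unnormalized target.
log_q = lambda x: -0.5 * (x / 2.0) ** 2
log_p = lambda x: jnp.logaddexp(-0.5 * (x - 1.5) ** 2,
                                -0.5 * (x + 1.5) ** 2)
sample_q = lambda k, n: 2.0 * jax.random.normal(k, (n,))
samples = rejection_layer(jax.random.PRNGKey(1), sample_q, log_q, log_p, 4000)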
One-shot manifold learning for design of experiments
Common approaches to design-of-experiments tasks suffer from poor sample complexity due to an underlying nested sampling mechanism. In this project we avoid this drawback by solving the associated optimization task with a one-shot method: we introduce a model class to approximate the underlying physical model and solve for the optimal design by updating the model-class coefficients and the design parameters in parallel.
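A minimal sketch of the one-shot idea (entirely hypothetical objective and model class, for illustration only): the surrogate fit and a design criterion evaluated on the surrogate are combined into one loss, and coefficients and design are updated simultaneously rather than in nested loops.

import jax
import jax.numpy as jnp

def surrogate(theta, x):                # hypothetical model class: quadratic
    return jnp.polyval(theta, x)

def loss(params, x_data, y_data):
    theta, d = params
    fit = jnp.mean((surrogate(theta, x_data) - y_data) ** 2)
    crit = surrogate(theta, d)          # design criterion on the surrogate
    return fit + crit

grad_loss = jax.grad(loss)
x_data = jnp.linspace(-2.0, 2.0, 50)
y_data = x_data ** 2 - x_data           # stand-in for the physical model
params = (jnp.zeros(3), jnp.array(0.5))
for _ in range(500):                    # parallel (simultaneous) updates
    g = grad_loss(params, x_data, y_data)
    params = jax.tree_util.tree_map(lambda p, gp: p - 0.05 * gp, params, g)
theta, d = params
print(d)   # d settles near the minimizer of the fitted surrogate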
Diffusion models with multiplicative noise and rotationally invariant distributions
In this project we extend the view of diffusion models for sample generation to SDEs with multiplicative noise. Here the multiplicative noise is built from linear combinations of Brownian motions and skew-symmetric operators. SDEs of this type arise from the spatial discretization of SPDEs appearing in the modeling of fluid dynamics. We analyze properties of the associated Fokker-Planck equations and show that any invariant measure must be rotationally invariant, in accordance with the underlying physical behavior.
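A toy simulation illustrating the role of skew-symmetry (our own example with a single two-dimensional generator, not the project's discretized SPDE): skew-symmetric multiplicative noise acts as a random rotation, so the norm of the state, and hence any rotationally invariant statistic, is preserved along trajectories.

import jax
import jax.numpy as jnp

A = jnp.array([[0.0, -1.0], [1.0, 0.0]])      # skew-symmetric generator

def euler_heun(key, x0, n_steps=1000, dt=1e-3):
    """Euler-Heun scheme for dX_t = A X_t o dB_t (Stratonovich)."""
    x = x0
    for k in jax.random.split(key, n_steps):
        db = jnp.sqrt(dt) * jax.random.normal(k)
        pred = x + (A @ x) * db                # predictor step
        x = x + 0.5 * (A @ x + A @ pred) * db  # Stratonovich correction
    return x

x0 = jnp.array([1.0, 0.0])
xT = euler_heun(jax.random.PRNGKey(2), x0)
print(jnp.linalg.norm(x0), jnp.linalg.norm(xT))   # norms agree up to O(dt)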
Pathwise tensor train approximation of Hamilton-Jacobi-Bellman equations
In this project we define forward and reverse diffusion processes with drift terms given by an unknown control function. Using techniques from stochastic optimal control, we aim to find the optimal control, which is known a priori to be the score function but is intractable. To this end, we propose to learn the control via policy iteration on path space by solving a backward stochastic differential equation (BSDE). In particular, we use manifold optimization over low-rank tensor formats to represent the control function on sample trajectories; each policy iteration step then reduces to minimizing a regression problem.
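A heavily simplified sketch of one such pathwise regression step (our own notation; the tensor-product polynomial features below merely stand in for the project's low-rank tensor-train parametrization, and the regression target is a placeholder rather than an actual BSDE residual):

import jax
import jax.numpy as jnp

def features(x):
    # tensor-product polynomial features in two variables (degree <= 2)
    f1 = jnp.stack([jnp.ones_like(x[:, 0]), x[:, 0], x[:, 0] ** 2], axis=1)
    f2 = jnp.stack([jnp.ones_like(x[:, 1]), x[:, 1], x[:, 1] ** 2], axis=1)
    return (f1[:, :, None] * f2[:, None, :]).reshape(x.shape[0], -1)

def regression_step(x, y):
    """Least-squares fit of the pathwise target y at trajectory states x."""
    F = features(x)
    c, *_ = jnp.linalg.lstsq(F, y)
    return c

# Toy usage: states sampled along forward trajectories, synthetic target.
key = jax.random.PRNGKey(3)
x = jax.random.normal(key, (5000, 2))          # sampled trajectory states
y = x[:, 0] ** 2 + x[:, 0] * x[:, 1]           # placeholder BSDE target
c = regression_step(x, y)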