AA5 – Variational Problems in Data-Driven Applications

Project

AA5-8

Convolutional Brenier Generative Networks

Project Heads

Hanno Gottschalk, Gabriele Steidl

Project Members

Ségolène Martin

Project Duration

01.01.2024 – 31.12.2025

Located at

TU Berlin

Description

Generative learning has emerged as a significant field in machine learning, aiming to create new samples, such as images, that follow a given target distribution. This involves learning a transformation from a source distribution, typically Gaussian noise, to the target data distribution. By Brenier’s theorem, if the source measure is absolutely continuous, the optimal transport map minimizing the Wasserstein-2 distance exists, is unique, and is characterized as the gradient of a convex function, the Brenier potential. While most generative networks focus on approximating this optimal transport map, recent approaches have proposed learning the Brenier potential directly. So far, however, this approach relies on Input Convex Neural Networks (ICNNs), which are convex in their input by construction and dense in the set of continuous convex functions. Due to their fully connected architecture, ICNNs are computationally inefficient, which limits the broader application of Brenier networks.
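For orientation, the statement of Brenier’s theorem used above can be written as follows (a standard formulation; here μ denotes the absolutely continuous source measure and ν the target measure, both with finite second moments):

\[
W_2^2(\mu,\nu) \;=\; \min_{T\colon T_{\#}\mu=\nu} \int \lVert x - T(x)\rVert^2 \,\mathrm{d}\mu(x),
\qquad T^{\star} = \nabla \varphi \quad \text{with } \varphi \text{ convex (the Brenier potential)}.
\]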

Research Objectives

This project aims to design, mathematically analyze, and apply deep convolutional neural networks for approximating Brenier-like maps. In particular, it will develop a framework in the context of Generative Adversarial Networks (GANs) for learning convex potential functions with deep neural networks whose activation functions ensure the required regularity and convexity.
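As an illustration only, and not the project’s actual architecture, the following PyTorch-style sketch (all class and function names are hypothetical, and the MNIST-sized shapes are assumptions) shows one standard way to parametrize a potential that is convex in its input using convolutional layers, namely non-negative weights on hidden-to-hidden connections combined with convex, non-decreasing activations, and to recover the generator as the gradient of this potential via automatic differentiation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexConvPotential(nn.Module):
    """Scalar potential phi(x) that is convex in the input x by construction."""

    def __init__(self, channels=1, hidden=32):
        super().__init__()
        # "Skip" convolutions act directly on the input x; no sign constraint is needed here.
        self.skip1 = nn.Conv2d(channels, hidden, 3, padding=1)
        self.skip2 = nn.Conv2d(channels, hidden, 3, padding=1)
        # Hidden-to-hidden convolution: its weights are reparametrized to be non-negative below.
        self.hid = nn.Conv2d(hidden, hidden, 3, padding=1, bias=False)
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, x):
        # softplus is convex and non-decreasing, which preserves convexity in x.
        z = F.softplus(self.skip1(x))
        z = F.softplus(F.conv2d(z, F.softplus(self.hid.weight), padding=1) + self.skip2(x))
        z = z.mean(dim=(2, 3))                              # global average pooling
        return F.linear(z, F.softplus(self.out.weight))     # non-negative read-out weights

def transport_map(potential, x):
    """Brenier-type generator T(x) = grad phi(x), obtained by automatic differentiation."""
    x = x.clone().requires_grad_(True)
    phi = potential(x).sum()
    (grad,) = torch.autograd.grad(phi, x, create_graph=True)
    return grad

# Usage: push Gaussian noise through the gradient of the (to-be-trained) potential.
potential = ConvexConvPotential()
noise = torch.randn(8, 1, 28, 28)
samples = transport_map(potential, noise)                   # shape (8, 1, 28, 28)

Training such a potential, for instance within a GAN or a dual optimal-transport objective, is not shown; the sketch only illustrates the convex parametrization and the gradient map prescribed by Brenier’s theorem.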

Expected Outcomes

  • Establish theoretical approximation results for convolutional generative models, particularly GANs and Brenier GANs with invariant potentials, and extend the concept of equivariance to the disintegration of transport plans.
  • Improve the efficiency of contemporary deep learning methods based on these mathematical findings.
  • Apply Brenier GANs to produce negative examples or counterfactuals for out-of-distribution detection, with the goal of enhancing the safety of AI-driven perception in automated driving.

Figure: Example of three digits generated by a Brenier GAN trained on MNIST

External Website

https://segolenemartin.github.io/

Selected Publications