Emerging Field 1 “Extracting Dynamical Laws from Complex Data”
This Emerging Field aims to develop novel methods, combining machine learning with mathematical process simulation, that can derive effective dynamical laws from data. We anticipate obtaining models that generate understanding and physical insight and can be simulated efficiently, yielding unprecedented speed-ups over often intractable classical direct simulation methods, such as those for the time-dependent Schrödinger equation in quantum processes.
This program faces several challenges. First, current machine learning methods are often black-box approaches, which are insufficient for scientific simulation and measurement data. Second, current machine learning methods mostly study static, stationary, and complete data, whereas scientific data is often dynamic, nonstationary, incomplete, multimodal, and multiscale.
Consequently, the projects within this Emerging Field either focus on developing a theory of machine learning, in particular deep learning, or approach this complex of problems from the application side. Moreover, the projects are typically characterized by a high degree of interdisciplinarity and by combining numerous mathematical areas.
Scientists in Charge: Klaus-Robert Müller, Sebastian Pokutta, Markus Reiß
- EF1-10: Kernel Ensemble Kalman Filter and Inference
Péter Koltai, Nicolas Perkowski
- EF1-11: Quantum Advantages in Machine Learning
Klaus-Robert Müller, Jens Eisert
- EF1-12: Learning Extremal Structures in Combinatorics
Sebastian Pokutta, Tibor Szabó
- EF1-14: Sparsity and Sample-Size Efficiency in Structured Learning
Sebastian Pokutta
- EF1-15: Robust Multilevel Training of Artificial Neural Networks
Michael Hintermüller, Carsten Gräser
- EF1-16: Quiver Representations in Big Data and Machine Learning
Alexander Schmitt
- EF1-17: Data-Driven Robust Model Predictive Control under Distribution Shift
Jia-Jie Zhu, Michael Hintermüller
- EF1-18: Manifold-Valued Graph Neural Networks
Christoph von Tycowicz, Gabriele Steidl
- EF1-19: Machine Learning Enhanced Filtering Methods for Inverse Problems
Claudia Schillings
- EF1-20: Uncertainty Quantification and Design of Experiment for Data-Driven Control
Claudia Schillings
- EF1-21: Scaling up Flag Algebras in Combinatorics
Sebastian Pokutta, Christoph Spiegel
- EF1-22: Bayesian Optimization and Inference for Deep Networks
Claudia Schillings, Vladimir Spokoiny
- EF1-23: On a Frank-Wolfe Approach for Abs-Smooth Optimization
Sebastian Pokutta, Andrea Walther, Zev Woodstock
- EF1-24: Expanding Merlin-Arthur Classifiers: Interpretable Neural Networks through Interactive Proof Systems
Sebastian Pokutta, Stephan Wäldchen
- EF1-25: Wasserstein Gradient Flows for Generalised Transport in Bayesian Inversion
Martin Eigel, Claudia Schillings, Gabriele Steidl
Successfully completed projects:
- EF1-1: Quantifying Uncertainties in Explainable AI
Gitta Kutyniok, Klaus-Robert Müller, Wojciech Samek
- EF1-2: Quantum Kinetics
Klaus-Robert Müller, Frank Noé
- EF1-3: Approximate Convex Hulls With Bounded Complexity
Michael Joswig, Klaus-Robert Müller
- EF1-4: Extracting Dynamical Laws by Deep Neural Networks: A Theoretical Perspective
Jens Eisert, Frank Noé, Barbara Zwicknagl
- EF1-5: On Robustness of Deep Neural Networks
Christian Bayer, Peter Karl Friz
- EF1-6: Graph Embedding for Analyzing the Microbiome
Tim Conrad, Stefan Klus, Grégoire Montavon
- EF1-7: Quantum Machine Learning
Jens Eisert, Klaus-Robert Müller
- EF1-8: Incorporating Locality into Fast(er) Learning
Nicole Mücke
- EF1-9: Adaptive Algorithms through Machine Learning: Exploiting Interactions in Integer Programming
Ambros Gleixner, Sebastian Pokutta
- EF1-13: Stochastic and Rough Aspects in Deep Neural Networks
Christian Bayer, Peter Karl Friz