**Project Heads**

*Carsten Gräser, Ralf Kornhuber, Christof Schütte*

**Project Members**

Prem Anand Alathur Srinivasan (FU)

**Project Duration**

12.02.2019 – 11.02.2022

**Located at**

FU Berlin

This methodologically oriented project is devoted to the development and numerical analysis of learning algorithms for (typically large) data sets that exploit a priori knowledge in terms of an auxiliary physical model formulated as a partial differential equation.

The supervised learning problem of learning a function from given data is one of the most fundamental problems in machine learning. In recent years huge progress has been made in using deep neural networks as ansatz functions for such learning problems. However, the mathematical theory of learning with neural networks is still far from complete, and there are few rigorous results on convergence, let alone error estimates. Furthermore, learning a function from, e.g., pure point data neglects potential knowledge about underlying physical processes. On the other hand, the big success of neural networks in supervised learning has also stimulated their use as an ansatz "space" for the solution of partial differential equations (PDEs), e.g. in the so-called deep Ritz method. Again, little is known about convergence or error estimates for this method.
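To illustrate the deep Ritz idea of minimizing a PDE energy over a network-style ansatz, the following is a minimal, self-contained toy sketch (our own construction, not the project's code): the model problem, the random tanh features, and the cutoff `x(1-x)` enforcing the boundary conditions are all illustrative assumptions. Only the output weights are fitted, so minimizing the Ritz energy reduces to a quadratic problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model problem: -u'' = f on (0,1) with u(0) = u(1) = 0 and
# f(x) = pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)

# Network-style ansatz (our simplification of a neural network): random
# tanh features multiplied by the cutoff x(1-x) to enforce the boundary
# conditions; only the output weights a_k are fitted.
K = 30
w = 4.0 * rng.normal(size=K)
b = rng.normal(size=K)

def features(x):
    x = np.atleast_1d(x)[:, None]
    t = np.tanh(w * x + b)
    phi = x * (1.0 - x) * t                                        # basis functions
    dphi = (1.0 - 2.0 * x) * t + x * (1.0 - x) * w * (1.0 - t**2)  # x-derivatives
    return phi, dphi

# Ritz energy E(v) = int_0^1 (1/2 v'^2 - f v) dx, discretized by quadrature.
xq = np.linspace(0.0, 1.0, 2001)
dx = xq[1] - xq[0]
phi, dphi = features(xq)
A = dphi.T @ (dphi * dx)      # "stiffness" matrix: int phi_j' phi_k' dx
rhs = phi.T @ (f(xq) * dx)    # load vector: int f phi_j dx
a = np.linalg.lstsq(A, rhs, rcond=None)[0]  # minimizer of the quadratic energy

def u(x):
    return features(x)[0] @ a

energy = 0.5 * a @ A @ a - a @ rhs  # exact minimum would be -pi^2/4
```

In a genuine deep Ritz method all network parameters are trained, so the energy is non-convex and is minimized iteratively (e.g. by stochastic gradient descent) rather than by a single linear solve.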

The project aims are threefold. First, it tries to combine deep learning approaches with physical background knowledge to improve the accuracy and reliability of the learning procedure. Second, it aims at contributing to the theoretical foundation of deep learning by investigating convergence and error estimates. Third, it uses techniques developed for the numerical treatment of PDEs to develop new, efficient learning algorithms.

The project considers learning problems with physical background knowledge given in terms of a PDE. The PDE serves as a model for the underlying physical process and may very well be inexact. By augmenting a loss functional for the given data with such an auxiliary PDE model, given in terms of an energy functional, we consider hybrid learning problems involving both data and a PDE. In this setting the project investigates the proper coupling of the two terms as well as error analysis in terms of the given data, the inexactness of the PDE model, the discretization using neural networks, and the algebraic solution strategy. The basis for this are variational and stochastic techniques. For the actual learning problem, the project investigates how multilevel techniques developed for classical Galerkin discretizations of PDEs can be extended to neural network discretizations.
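A minimal sketch of such a hybrid loss, combining a mean-squared data misfit with the Ritz energy of an auxiliary PDE, might look as follows. This is illustrative only: the function names, the weighting parameter `lam`, and the quadrature discretization are our assumptions, not the project's exact formulation.

```python
import numpy as np

# Hybrid loss: data misfit plus lam times the discretized Ritz energy
# E(u) = int (1/2 u'^2 - f u) dx of the model problem -u'' = f (a sketch,
# not the project's exact functional).
def hybrid_loss(u, du, x_data, y_data, lam, f, xq):
    data_term = np.mean((u(x_data) - y_data) ** 2)   # loss on point data
    dx = xq[1] - xq[0]
    pde_term = np.sum(0.5 * du(xq) ** 2 - f(xq) * u(xq)) * dx  # quadrature
    return data_term + lam * pde_term

# Example: for the exact solution u(x) = sin(pi x) of -u'' = pi^2 sin(pi x)
# and noise-free data, the data term vanishes and the loss reduces to
# lam * E(u), with E(u) = -pi^2/4 for this u.
u = lambda x: np.sin(np.pi * x)
du = lambda x: np.pi * np.cos(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
x_data = np.linspace(0.1, 0.9, 9)
xq = np.linspace(0.0, 1.0, 2001)
loss = hybrid_loss(u, du, x_data, u(x_data), 0.1, f, xq)
```

The coupling parameter `lam` balances trust in the data against trust in the (possibly inexact) PDE model; choosing it properly is exactly the coupling question the project studies.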

The project developed a variational approach for combining a data-based loss functional with an auxiliary PDE model. For the resulting problem we derived a multiplicative error estimate which shows that accuracy can be improved by providing more information about the problem, either by adding more data or by providing a more exact PDE model. In particular, the result shows that incorporating a priori knowledge in terms of a PDE can be beneficial. On a more fundamental level, one can see that incorporating a PDE regularizes the otherwise ill-posed learning problem. By proving a nonlinear Céa lemma, we can also bound the discretization error by the best approximation error, for classical as well as for neural network discretizations. While this is only a partial result (because it neglects the algebraic error from inexact or local minimization), it still provides valuable insight into the properties of such discretizations.
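Schematically, and in our own notation rather than the paper's, a Céa-type bound of this kind states that a discrete minimizer $u_N$ over an ansatz set $V_N$ (e.g. a set of neural network functions, which need not be a linear space) is quasi-optimal:

```latex
\| u - u_N \| \;\le\; C \, \inf_{v \in V_N} \| u - v \|,
```

where $u$ is the exact solution and $C$ is a constant independent of $V_N$. The discretization error is thus controlled by how well the ansatz set can approximate $u$ at all; the algebraic error from inexact or local minimization is not covered by such a bound.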

**Selected Publications**

Carsten Gräser and Prem Anand Alathur Srinivasan (2020) *Error bounds for PDE-regularized learning.* arXiv:2003.06524, pp. 1–20. (Submitted)

Hanna Wulkow (2020) *Regularization of Elliptic Partial Differential Equations Using Neural Networks.* Master thesis, FU Berlin

Hanna Wulkow, Tim Conrad, Natasa Djurdjevac Conrad, Sebastian A. Müller, Kai Nagel and Christof Schütte (2021) *Prediction of Covid-19 spreading and optimal coordination of counter-measures: From microscopic to macroscopic models to Pareto fronts.* PLOS ONE 16 (4), DOI 10.1371/journal.pone.0249676

Margarita Kostre, Christof Schütte, Frank Noé and Mauricio del Razo Sarmina (2021) *Coupling Particle-Based Reaction-Diffusion Simulations with Reservoirs Mediated by Reaction-Diffusion PDEs.* Multiscale Modeling & Simulation 19 (4), pp. 1659–1683, DOI 10.1137/20M1352739.

**Selected Pictures**

Error of a neural network discretization of a PDE-regularized learning problem, plotted over the amount of data: computational results and theoretical prediction.
