Tobias Breiten, Carsten Hartmann
01.01.2021 − 31.12.2022
Using sampling methods to analyse and improve the parametrisation of neural networks is a relatively new idea. This project is devoted to the systematic development of stochastic differential equation (SDE) approximations of momentum-enriched stochastic gradient schemes for deep neural networks, together with the corresponding numerical algorithms. Specifically, we want to study the underdamped Langevin model, which can be understood as the SDE counterpart of momentum-enriched optimisation schemes. The underdamped Langevin model has a long tradition in statistical mechanics and, from a control perspective, it offers more flexibility in altering the dynamics while preserving the invariant measure than the overdamped Langevin model, which has been a standard tool in computational statistics and statistical learning for more than 20 years. The starting point for our analysis will be a controlled underdamped Langevin equation in which the control is adapted to the filtration generated by the Brownian motion. The role of the control is twofold: it should accelerate the convergence to equilibrium at low temperature while preserving the stationary distribution of the dynamics.
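As a minimal numerical sketch of the correspondence alluded to above (not part of the project text, and with all parameter choices hypothetical), the following simulates an Euler-type discretisation of the underdamped Langevin equation dq = p dt, dp = (-grad V(q) - gamma p) dt + sqrt(2 gamma / beta) dW for the quadratic potential V(q) = q^2/2; with the noise term switched off, one step reduces to a heavy-ball (momentum) gradient update.

```python
import numpy as np

def underdamped_langevin(grad_V, q0, p0, gamma=1.0, beta=1.0,
                         dt=1e-2, n_steps=100_000, seed=0):
    """Simulate one trajectory of the underdamped Langevin equation.

    Semi-implicit Euler scheme: the momentum p is updated first with the
    friction, force, and noise terms, then the position q is advanced with
    the new momentum. Returns the array of visited positions.
    """
    rng = np.random.default_rng(seed)
    q, p = q0, p0
    qs = np.empty(n_steps)
    for k in range(n_steps):
        # momentum update: gradient force, friction -gamma*p, thermal noise
        p += (-grad_V(q) - gamma * p) * dt \
             + np.sqrt(2.0 * gamma / beta * dt) * rng.standard_normal()
        # position update with the freshly updated momentum
        q += p * dt
        qs[k] = q
    return qs

qs = underdamped_langevin(lambda q: q, q0=0.0, p0=0.0)
# For V(q) = q^2/2 and beta = 1, the invariant marginal in q is standard
# normal, so the long-run sample variance should be close to 1 (up to
# discretisation and sampling error).
print(np.var(qs[10_000:]))
```

The scheme is only first-order accurate; the project's control-theoretic perspective concerns how the drift can be modified to speed up the relaxation to this invariant measure without changing it.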
Stochastic modified equations with momentum
P. Benner, T. Breiten, C. Hartmann and B. Schmidt. Model reduction of controlled Fokker-Planck and Liouville-von Neumann equations. Journal of Computational Dynamics 7(1): 1-33, 2020.
T. Breiten, C. Hartmann, L. Neureither and U. Sharma. Stochastic gradient descent and fast relaxation to thermodynamic equilibrium: a stochastic control approach. In preparation.
T. Breiten, K. Kunisch and L. Pfeiffer. Control strategies for the Fokker-Planck equation. ESAIM: Control, Optimisation and Calculus of Variations 24(2): 741-763, 2018.
C. Hartmann, L. Neureither and U. Sharma. Coarse graining of nonreversible stochastic differential equations: Quantitative results and connections to averaging. SIAM Journal on Mathematical Analysis 52: 2689-2733, 2020.