AA5 – Variational Problems in Data-Driven Applications


AA5-3 (was EF1-18)

Manifold-Valued Graph Neural Networks

Project Heads

Christoph von Tycowicz, Gabriele Steidl

Project Members

Martin Hanik

Project Duration

01.01.2022 − 31.12.2023

Located at

FU Berlin


Geometry-aware, data-analytic approaches improve the understanding and assessment of pathophysiological processes. In this project, we derive a new theoretical framework for deep neural networks that can cope with geometric data and apply it to the classification of musculoskeletal diseases from both shape and movement patterns. As part of the process, we develop the necessary software in Python and make it accessible through the Morphomatics library. The source code is freely available on GitHub, and there are tutorials that can be run live on Binder.
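The Riemannian building blocks behind such geometric networks are exponential and logarithm maps on a manifold. The following is a minimal, self-contained NumPy sketch of these maps on the unit sphere, meant only to illustrate the kind of operations involved; it is not the Morphomatics API.

```python
import numpy as np

def sphere_exp(p, v):
    """Exponential map on the unit sphere: follow tangent vector v from base point p."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return p.copy()
    return np.cos(t) * p + np.sin(t) * v / t

def sphere_log(p, q):
    """Logarithm map: tangent vector at p pointing towards q, with length dist(p, q)."""
    w = q - np.dot(p, q) * p                     # project q onto the tangent space at p
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * w / t

# exp and log are inverse to each other along a geodesic
p = np.array([0.0, 0.0, 1.0])
q = np.array([1.0, 0.0, 0.0])
v = sphere_log(p, q)
assert np.allclose(sphere_exp(p, v), q)
```

The log map lifts a point into a flat tangent space where linear operations are valid, and exp maps the result back onto the manifold; this lift-compute-project pattern recurs in all the layers below.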


In an application of graph neural networks (GNNs) to manifold-valued data, we studied how well such networks can predict cognitive measures from brain connectomes. Connectomes are graphs whose edge weights are determined by correlations in brain activity; they can be encoded as symmetric positive-definite matrices, which constitute a cone-like manifold. In our work, we showed that (a) graph neural networks outperform the state of the art in the prediction of two common cognitive scores, and (b) these results can sometimes even be improved by training only on a learned, representative subset of the samples.
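To make the manifold structure of symmetric positive-definite (SPD) matrices concrete, here is a short NumPy sketch of the affine-invariant Riemannian distance on the SPD cone. This is one standard choice of metric on SPD matrices, shown for illustration; it is not necessarily the exact metric used in the project.

```python
import numpy as np

def _spd_pow(A, p):
    """Matrix power of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.T

def _spd_logm(A):
    """Matrix logarithm of an SPD matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def spd_dist(A, B):
    """Affine-invariant distance d(A, B) = || logm(A^(-1/2) B A^(-1/2)) ||_F.

    Invariant under congruence A -> G A G^T for any invertible G, which is
    why it respects the cone geometry of SPD matrices.
    """
    A_is = _spd_pow(A, -0.5)
    return np.linalg.norm(_spd_logm(A_is @ B @ A_is))
```

Unlike the Euclidean (Frobenius) distance, this metric keeps matrices inside the cone: geodesics between SPD matrices never leave the set of SPD matrices.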


To generalize layers to manifold-valued features, we investigated diffusion processes, since they allow for information transfer in embedded graphs. The figures below show a graph embedded in the two-dimensional sphere and the flow of the vertices of another graph under a diffusion-like process.
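A toy version of such a diffusion process can be written in a few lines: each vertex averages the log maps to its graph neighbours and takes a small exponential step in that direction, the manifold analogue of Laplacian smoothing. The following NumPy sketch, for sphere-valued vertex features, is an illustrative simplification and not the implementation used in the project.

```python
import numpy as np

def sphere_exp(p, v):
    t = np.linalg.norm(v)
    return p.copy() if t < 1e-12 else np.cos(t) * p + np.sin(t) * v / t

def sphere_log(p, q):
    w = q - np.dot(p, q) * p
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * w / t

def diffusion_step(X, adj, tau=0.2):
    """One explicit Euler step of graph diffusion for sphere-valued features.

    Each vertex moves along the average of the log maps to its neighbours,
    i.e., towards their (approximate) mean on the manifold.
    """
    X_new = np.empty_like(X)
    for i in range(len(X)):
        nbrs = np.flatnonzero(adj[i])
        v = sum(sphere_log(X[i], X[j]) for j in nbrs)
        X_new[i] = sphere_exp(X[i], tau * v / max(len(nbrs), 1))
    return X_new
```

Iterating the step contracts the vertex features towards a common point while keeping them on the manifold, which is exactly the smoothing behaviour that makes diffusion useful for information transfer.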


In “Predicting Shape Development: A Riemannian Method,” we investigated how the future shape of an anatomical object can be predicted using generative adversarial networks.


Based on this diffusion, we furthermore developed a convolutional GNN layer that is equivariant under permutations of the nodes and under isometries of the manifold. Our novel generalized multilayer perceptron possesses the same properties. Both layers are elementary building blocks for GNNs with strong inductive biases, which have been shown to be of fundamental importance in many learning tasks. We introduced both layers in our article “Manifold GCN: Diffusion-based Convolutional Neural Network for Manifold-valued Graphs.” Networks built from these layers showed strong performance in classifying abstract graphs in hyperbolic space and in detecting Alzheimer's disease from hippocampus meshes.
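The two equivariance properties can be checked numerically on a toy diffusion aggregation (the same simplified sphere construction as above, not the article's layer): relabelling the nodes permutes the output in the same way, and rotating the input features rotates the output features.

```python
import numpy as np

def sphere_exp(p, v):
    t = np.linalg.norm(v)
    return p.copy() if t < 1e-12 else np.cos(t) * p + np.sin(t) * v / t

def sphere_log(p, q):
    w = q - np.dot(p, q) * p
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * w / t

def diffusion_layer(X, adj, tau=0.2):
    """Toy diffusion aggregation for sphere-valued vertex features."""
    out = np.empty_like(X)
    for i in range(len(X)):
        nbrs = np.flatnonzero(adj[i])
        v = sum(sphere_log(X[i], X[j]) for j in nbrs)
        out[i] = sphere_exp(X[i], tau * v / max(len(nbrs), 1))
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)       # random points on S^2
adj = (rng.random((6, 6)) < 0.5).astype(float)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 0.0)

Y = diffusion_layer(X, adj)

# permutation equivariance: relabelling nodes commutes with the layer
P = rng.permutation(6)
assert np.allclose(diffusion_layer(X[P], adj[np.ix_(P, P)]), Y[P])

# isometry equivariance: rotating the inputs rotates the outputs
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))        # random orthogonal matrix
assert np.allclose(diffusion_layer(X @ Q.T, adj), Y @ Q.T)
```

Both checks pass because log and exp on the sphere commute with orthogonal transformations, and the aggregation treats all neighbours symmetrically; these are the symmetries a layer must respect to provide the inductive biases mentioned above.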

External Website

Related Publications