EF3 – Model-Based Imaging

Project

EF3-2

Model-Based 4D Reconstruction of Subcellular Structures

Project Heads

Peter Hiesinger, Max von Kleist, Steffen Prohaska, Martin Weiser

Project Members

David Knötel (ZIB, until 03/21)

Project Duration

01.01.2019 – 31.03.2021

Located at

ZIB

Description

The brain development of flies (Drosophila) is observed through 4D (3D + time) 2-photon microscopy. The reconstruction of subcellular structures, here filopodia sprouting from a growth cone, is a challenging task currently performed semi-automatically with substantial manual effort [1]. We aim to reduce the required human labour by using quantitative models of growth cone and filopodia geometry and dynamics [2] for a more robust and consistent algorithmic identification of subcellular structures. The methods used include Bayesian inference, optimization, segmentation with convolutional neural networks, and stochastic modelling.

Data description

  • Brain development of flies (Drosophila) observed through 4D (3D + time) 2-photon microscopy
  • 60 timesteps over 1 h, including multiple growth cones
  • Filopodia are attached to a growth cone

Goal

A dramatic increase in image analysis throughput through model-based filopodia segmentation and tracking, compared to the semi-automatic reconstruction algorithm previously developed at ZIB.

Combining microscopy data and growth dynamics models into a model-based filopodia reconstruction loop.

 

Main framework

The goal is to use Bayesian inference methods for filopodia reconstruction and tracking that can respect user-provided constraints. Here, the prior compares a candidate filopodium model to previously known filopodial length dynamics, while the likelihood term compares the model to the dataset at hand.
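As a rough sketch of how such a posterior score could be assembled, the snippet below combines the two terms; the function name log_prior_fn, the Gaussian image-residual likelihood, and the parameter sigma are illustrative assumptions, not the project's actual implementation:

```python
def log_posterior(candidate_length, prev_length, image_residual,
                  log_prior_fn, sigma=1.0):
    """Posterior score for one candidate filopodium model (sketch).

    log_prior_fn encodes the known filopodial length dynamics; the
    likelihood term measures how well the model matches the image data.
    The Gaussian residual model and sigma are assumptions.
    """
    log_prior = log_prior_fn(candidate_length, prev_length)
    log_likelihood = -0.5 * (image_residual / sigma) ** 2
    return log_prior + log_likelihood
```

Candidate models can then be ranked by this score, with user-provided constraints enforced by excluding candidates that violate them.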

Stochastic models based on reconstructed data (obtained with the semi-automatic algorithm) are available for filopodial growth and for synapse formation. In this project, only the growth dynamics are relevant; the conditional length distribution is modeled as a sum of Laplace distributions with an exponential vanishing probability (dependent on the length in the previous timestep). For the likelihood computation, we first compute a filopodia probability field using a U-Net-based deep-CNN semantic segmentation method. The training data is created using a region-growing method that transforms the reconstructions from the semi-automatic algorithm (given as piecewise linear curves in 3D space for one timestep) into voxelized data. In a subsequent step, we fit filopodia curves into the probability fields by optimizing an active-contour-based energy functional.
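As a minimal sketch of the conditional length prior described above, the density below combines a mixture of Laplace distributions centered at the previous length with an exponential vanishing probability; all parameter values, and the assumed direction of the length dependence of the vanishing probability, are hypothetical placeholders:

```python
import numpy as np

def length_log_prior(length, prev_length,
                     weights=(0.6, 0.4), scales=(0.3, 1.0),
                     vanish_rate=0.5):
    """log p(length_t | length_{t-1}) as a Laplace mixture plus an
    exponential vanishing probability (all parameters are assumptions).
    """
    # Vanishing probability, here assumed to decay with previous length.
    p_vanish = np.exp(-vanish_rate * prev_length)
    if length == 0.0:  # filopodium has vanished
        return np.log(p_vanish)
    # Mixture of Laplace densities centered at the previous length.
    density = sum(w / (2.0 * b) * np.exp(-abs(length - prev_length) / b)
                  for w, b in zip(weights, scales))
    return np.log((1.0 - p_vanish) * density + 1e-300)
```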

In a side project, we aim to classify the endpoints of filopodia. Again, training data is available, and we therefore apply a neural network for 3D image classification.

Feature detection with neural networks

Since filopodia occupy only a small volume fraction of the microscopy data, we aimed for a soft segmentation that assigns to each voxel a probability of being occupied by a filopodium. Based on a U-Net CNN segmentation approach for brain tumors, a deep learning architecture for filopodia segmentation was implemented in TensorFlow and tested. Labeled training data was generated from available semi-automatic filopodia reconstructions by assigning an appropriate probability value to voxels within a tubular shape around the filopodia. Since the 4D data size exceeded the graphics card’s memory and a 4D CNN implementation is not directly available in TensorFlow, training was performed on 3D volumes, using single time frames as data. The temporal correlation between sequential frames is therefore lost. We tested exploiting this correlation by including the two neighboring frames in the network’s input, either as raw image data or as already-segmented filopodia probability fields, but found no significant improvement. Overall, the U-Net segmentation provided reasonable results.
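A minimal sketch of how such tubular soft labels could be generated from the polyline reconstructions is given below; the radius, the linear falloff, and the dense segment sampling are simplifying assumptions standing in for the region-growing procedure described above:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def tube_labels(polyline, shape, radius=2.0, samples_per_segment=10):
    """Rasterize one filopodium polyline (N x 3 voxel coordinates) into
    a soft label field: 1 on the centerline, falling linearly to 0 at
    `radius` voxels (radius and falloff are illustrative choices).
    """
    mask = np.zeros(shape, dtype=bool)
    for p, q in zip(polyline[:-1], polyline[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_segment):
            x, y, z = np.round(p + t * (q - p)).astype(int)
            if 0 <= x < shape[0] and 0 <= y < shape[1] and 0 <= z < shape[2]:
                mask[x, y, z] = True
    # Euclidean distance of every voxel to the nearest centerline voxel.
    dist = distance_transform_edt(~mask)
    return np.clip(1.0 - dist / radius, 0.0, 1.0)
```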

CNN feature detection scheme.

Labeled training data from semi-automatically reconstructed filopodia as input for neural network training.

A growth cone snapshot with filopodia. From left to right: data, manually segmented ground truth, and prediction result.

Training data quality has a major impact. In some network training runs, the training error was larger than the validation error. Accordingly, the training data created from the semi-automatically reconstructed filopodia was checked, and several clearly erroneous reconstructions were detected; these caused wrong segmentation results. Correcting these cases manually improved the reconstruction quality considerably and eliminated the training anomalies.

Bulbous classification

Bulbous filopodia tips appear to be correlated with filopodia stabilization and the formation of synapses. Due to the immediate application relevance, we investigated filopodia classification by a CNN trained on random voxel patches of size 15×15×9 from 34 growth cones, of which 1994 patches contained bulbous tips. Data augmentation by isotropic resampling and rotation, as well as a small learning rate, proved necessary for good classification results. An accuracy of 93.5% was achieved.
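For illustration, a small 3D CNN for this patch classification task might look as follows; only the 15×15×9 patch size and the small learning rate come from the text, while the layer configuration and the rotation augmentation are assumptions:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def augment(patch):
    """Rotation augmentation in the isotropic x-y plane (assumption)."""
    return np.rot90(patch, k=np.random.randint(4), axes=(0, 1))

def build_bulbous_classifier():
    """3D CNN scoring 15x15x9 voxel patches for bulbous tips (sketch)."""
    model = tf.keras.Sequential([
        layers.Input(shape=(15, 15, 9, 1)),
        layers.Conv3D(16, 3, padding="same", activation="relu"),
        layers.MaxPooling3D(pool_size=(2, 2, 1)),
        layers.Conv3D(32, 3, padding="same", activation="relu"),
        layers.GlobalAveragePooling3D(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(bulbous tip)
    ])
    # A small learning rate proved necessary for good results (see text).
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```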

Selected Publications

[1] F. Kiral, S. Dutta, G. Linneweber, S. Hilgert, C. Poppa, C. Duch, M. von Kleist, B. Hassan, and R. Hiesinger. Brain connectivity inversely scales with developmental temperature in Drosophila. Cell Reports, 37:110145, 2021.
[2] F. Kiral, G. Linneweber, S. Georgiev, B. Hassan, M. von Kleist, and P. Hiesinger. Kinetic restriction of synaptic partner choice through filopodial autophagy. Nature Communications, 11:1325, 2020.
[3] M. Özel, A. Kulkarni, A. Hasan, J. Brummer, M. Moldenhauer, I.-M. Daumann, H. Wolfenberg, V. Dercksen, F. Kiral, M. Weiser, S. Prohaska, M. von Kleist, and P. Hiesinger. Serial synapse formation through filopodial competition for synaptic seeding factors. Developmental Cell, 50(4):447–461, 2019.

Selected Pictures

Transparent surface visualization of a growth cone and corresponding axon (on the left side) for one timestep. Filopodia reconstructions are represented as colored lines, e.g. the red line in the top part.
