Electron microscopy of a HeLa cell in which subcellular structures are seen in grayscale
Author: Wendy Bautista, MD, PhD, Physician Scientist
Barrow Neurological Institute, Phoenix Children’s Hospital
Author: Mones Abu Asab, PhD, Electron Microscopy Core Director
National Eye Institute, NIH
Abstract

Deep Learning-based Segmentation of the Mitochondria with Varying Morphological Phenotypes in the EM Serial Sections

Researchers at Barrow Neurological Institute, Phoenix Children’s Hospital are using transmission electron microscopy (TEM) imaging and ZEISS arivis Pro (formerly Vision4D) to understand how mitochondria in brain tissue are affected by hypoxic conditions.

Automated segmentation of transmission electron microscopy images remains a challenge. ZEISS arivis Pro is designed to let users easily apply Deep Learning models to their images and run the subsequent analysis within an established workflow tailored to their specific needs. This article describes best practices in mitochondria EM image analysis: creating ground-truth annotations, running inference (predictions) in ZEISS arivis Pro, and using its extensive toolset for downstream analysis.

Key learnings:

  • Overcoming low contrast and grayscale challenges with careful ground-truth preparation.
  • Saving significant time by training a Deep Learning model.
  • Automation and batch processing without programming.
Preparing image set for AI model training

Original imaging data was kindly provided by Dr. Wendy Bautista, MD, PhD, Barrow Neurological Institute, Phoenix Children’s Hospital. Hippocampal tissue section, transmission electron microscopy. The objects in yellow are the manually segmented mitochondria from both control and swollen phenotypes.

Preparing the Image set for Training the Deep Learning Model

When preparing for Deep Learning (DL) training, the critical step is creating the ground-truth annotations. This is typically done by manually annotating the regions of interest, thereby creating the objects. For this research project, 30 TEM serial sections containing 309 mitochondria objects were annotated manually with the drawing tool (Vision4D ver. 3.6 was used). Both mitochondria phenotypes, normal and swollen, were pooled into one class for the DL training.

These manual annotations were used to train a Deep Learning model for semantic segmentation, also known as pixel classification. Specifically, we used a U-Net model with an architecture very similar to the original publication (O. Ronneberger et al., 2015). Prior to training, the images and annotations were downscaled to half the raw image size using bicubic interpolation to reduce training time and match the feature size. Next, the grayscale images and the binary ground-truth images were augmented by applying rotations, reflections, and elastic transformations. The U-Net model was trained with a custom-made script* for 50 epochs; the 42nd epoch had the highest accuracy score and was selected for running inference (predictions) to segment the mitochondria. As a final step, the model was converted to the ONNX format to run inference (predictions) in ZEISS arivis Pro.

* For more details, contact the arivis team.
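The custom training script itself is not shown here, but the preprocessing described above can be sketched in a few lines of Python. In this illustrative example (all function names are assumptions, not the actual script), `scipy.ndimage.zoom` with cubic splines (`order=3`) stands in for bicubic interpolation, and the augmentation covers rotations and reflections; elastic transformations are omitted for brevity:

```python
import numpy as np
from scipy import ndimage

def downscale_half(image: np.ndarray) -> np.ndarray:
    """Downscale an image to half size with cubic-spline interpolation,
    a stand-in for the bicubic downscaling step described in the text."""
    return ndimage.zoom(image, 0.5, order=3)

def augment(image: np.ndarray, mask: np.ndarray):
    """Yield rotated and mirrored copies of an image/ground-truth pair.
    Elastic transformations (also used in the study) are omitted here."""
    for k in range(4):  # 0, 90, 180, 270 degrees
        rot_img, rot_mask = np.rot90(image, k), np.rot90(mask, k)
        yield rot_img, rot_mask
        yield np.fliplr(rot_img), np.fliplr(rot_mask)  # mirrored variant

# Toy grayscale image and binary ground-truth mask
img = np.random.rand(512, 512).astype(np.float32)
mask = (img > 0.5).astype(np.float32)

small = downscale_half(img)
small_mask = downscale_half(mask) > 0.5
pairs = list(augment(small, small_mask))
print(small.shape, len(pairs))  # (256, 256) 8
```

Each annotated section thus yields eight training pairs before elastic deformation, which further multiplies the effective training-set size.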

Mitochondria EM image segmentation

Original imaging data was kindly provided by Dr. Wendy Bautista, MD, PhD, Barrow Neurological Institute, Phoenix Children’s Hospital. Hippocampal tissue section, transmission electron microscopy. Manually segmented mitochondria (yellow objects) and the DL inference results (cyan objects) are overlaid to illustrate the accuracy of the predictions.

Applying the Deep Learning-based Segmentation

The Deep Learning model was then applied to the whole dataset in ZEISS arivis Pro for automated segmentation. The DL model can be applied in the software pipeline on data at the same resolution it was trained on, and the resulting objects scaled back to the original size. The pipeline for segmenting the mitochondria was therefore run at 50% of the original image scale. Object filtering, classification by phenotype, and export of the numerical data to Excel format are done automatically within the same pipeline, which makes it possible to apply the entire workflow in Batch mode to a set of images.
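The rescaling step can be illustrated with a minimal sketch (this is not the arivis Pro internals; names and values are illustrative). A probability map produced by the network at 50% scale is thresholded, connected components are labeled as individual objects, and the label image is scaled back to the original resolution with nearest-neighbour interpolation so that label values stay intact:

```python
import numpy as np
from scipy import ndimage

def labels_from_probability(prob: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a semantic-segmentation probability map and label
    connected components (one label per mitochondrion candidate)."""
    labels, _n = ndimage.label(prob >= threshold)
    return labels

def upscale_labels(labels: np.ndarray, factor: int = 2) -> np.ndarray:
    """Scale a label image back to the original resolution.
    Nearest-neighbour (order=0) interpolation keeps integer labels intact."""
    return ndimage.zoom(labels, factor, order=0)

# Toy probability map at half resolution with two "mitochondria"
prob = np.zeros((128, 128))
prob[10:30, 10:30] = 0.9
prob[60:80, 60:90] = 0.8

labels = labels_from_probability(prob)
full = upscale_labels(labels)          # back to 256 x 256
print(labels.max(), full.shape)        # 2 (256, 256)
```

Running the network at reduced scale cuts inference time roughly fourfold in 2D, while the nearest-neighbour upscale restores objects to the coordinate system of the raw images.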

Mitochondria phenotypes classified

Original imaging data was kindly provided by Dr. Wendy Bautista, MD, PhD, Barrow Neurological Institute, Phoenix Children’s Hospital. Hippocampal tissue section, transmission electron microscopy.

Mitochondria Classification in Electron Microscopy Images

Classifying the Mitochondria Phenotypes

ZEISS arivis Pro has an extensive list of quantitative features that characterize each object. In addition, it is possible to create custom features or import them from external sources. In order to assess the quantitative distribution of the mitochondria phenotypes, we created a custom object feature that computes the ratio of the mean intensity of each object to its volume.

This ratiometric feature reflects and emphasizes the differences between the mitochondria phenotypes and was used to classify the objects into the ‘Control’ and ‘Swollen’ groups. For visualization purposes, each object was color-coded according to the value of this custom feature.
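As a sketch of what such a custom feature computes (in the study this happens inside arivis Pro; the function names, toy data, and classification cutoff below are all illustrative assumptions), the per-object mean intensity and volume can be obtained with `scipy.ndimage` and combined into a ratio:

```python
import numpy as np
from scipy import ndimage

def phenotype_feature(intensity: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Ratio of per-object mean intensity to object volume (pixel count),
    mimicking the custom ratiometric feature described in the text."""
    ids = np.arange(1, labels.max() + 1)
    means = np.asarray(ndimage.mean(intensity, labels, ids))
    volumes = np.asarray(ndimage.sum(labels > 0, labels, ids))
    return means / volumes

# Two toy objects: a small bright one and a large dim one.
labels = np.zeros((64, 64), dtype=int)
labels[5:10, 5:10] = 1     # 25 px
labels[20:50, 20:50] = 2   # 900 px
intensity = np.where(labels == 1, 200.0, 50.0)

ratios = phenotype_feature(intensity, labels)
# Illustrative cutoff only; a real threshold would be chosen from the data.
groups = np.where(ratios > 1.0, "Control", "Swollen")
print(ratios, groups)
```

Because swollen mitochondria are larger (and in EM typically lighter) than normal ones, a ratio of mean intensity to volume amplifies the separation between the two phenotypes more than either measurement alone.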

Mitochondria EM image analysis results in graphs

Original imaging data was kindly provided by Dr. Wendy Bautista, MD PhD, Barrow Neurological Institute, Phoenix Children’s Hospital.

How to Succeed with Deep Learning Training and Analysis

Challenges and Advantages:

  • Automated, unbiased segmentation of complex electron microscopy images is not straightforward due to the low contrast inherent to EM images. Most commercial software only offers segmentation with traditional image analysis algorithms.
  • ZEISS arivis Pro lets you apply your own neural network directly in the pipeline and let it do the analysis work on your imaging datasets. Typical tasks are reduced from weeks to hours compared to manual execution.
  • By combining DL inference with the existing tools into customized pipeline workflows, many datasets can also be processed in Batch mode without any programming knowledge.

How to plan your workflow:

  • This type of analysis is ideal for recognizing and segmenting objects with complex morphological patterns.
  • When preparing the data for manual ground-truth annotation, make sure that images from all experimental conditions, and the variance between experiments, are included in the training set.
  • The precision of the manual annotations is crucial for training a model with high prediction accuracy.

Mitochondria Segmentation and Classification Image Analysis Pipeline

  • Training

    Paint the objects of interest on a set of images to create the ground truth, then train the Deep Learning model. A robust model will recognize the objects of interest under a variety of conditions.

  • Segmentation

    Add the Deep Learning Segmentation operator in the ZEISS arivis Pro pipeline and select your model to create the mitochondria segments.

  • Classification

    Compute and score the mitochondria phenotypes with the custom ratiometric feature and group the objects accordingly.

  • Export

    Add exporting operators if you want to automatically generate an Excel file with the results, and run the ZEISS arivis Pro pipeline over the whole dataset.
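The export step produces a table of per-object measurements. Outside arivis Pro, the equivalent can be sketched with Python's standard library; a CSV is shown here as a stand-in for the Excel export (writing .xlsx would additionally require a library such as openpyxl), and the column names and values are hypothetical:

```python
import csv
import io

# Hypothetical per-object measurements: id, volume in pixels,
# intensity/volume ratio, and the assigned phenotype class.
rows = [
    {"object_id": 1, "volume_px": 25,  "ratio": 8.00, "phenotype": "Control"},
    {"object_id": 2, "volume_px": 900, "ratio": 0.06, "phenotype": "Swollen"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["object_id", "volume_px", "ratio", "phenotype"])
writer.writeheader()
writer.writerows(rows)

table = buf.getvalue()
print(table)
```

In the actual workflow, this table is generated automatically by the pipeline's export operator for every image in the batch.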

Create Annotations for Automated Deep Learning Segmentation

Conclusion

In this case study, researchers from the Barrow Neurological Institute, Phoenix Children’s Hospital used a custom-trained Deep Learning model to segment all mitochondria objects on hippocampal tissue sections. Due to exposure to hypoxic conditions, the mitochondria in these tissue samples have varying morphology: some appear normal, and some have a ‘swollen’ morphology. This posed an additional challenge, since we aimed to create one Deep Learning model that recognizes all mitochondria phenotypes in a single step.
