From Image to Results - Scalable and Automated AI Image Analysis for Volume Electron Microscopy
Andrew Bergen, Application Specialist, arivis AG
Mariia Burdyniuk, Application Specialist, arivis AG
Chris Zugates, VP Operations US, arivis AG
Abstract


In this series "From Image to Results", explore various case studies explaining how to reach results from your demanding samples and acquired images in an efficient way. For each case study, we highlight different samples, imaging systems, and research questions.

In this sixth episode, we generate a cell-profiling workflow to segment and quantify cellular structures in 3D EM images.

Key Learnings:

  • How to generate a cell-profiling workflow to segment and quantify cellular structures in 3D EM images
  • How to quantify size and distributions for mitochondria, the nucleus, and nuclear pore regions

Case Study Overview

Sample

FIB-SEM of a high-pressure frozen HeLa cell

Task

Generate a cell-profiling workflow to segment and quantify cellular structures in 3D EM images

Results

Quantified size and distributions for mitochondria, the nucleus, and nuclear pore regions

System

ZEISS Crossbeam FIB-SEM

Software

ZEISS arivis Cloud, ZEISS arivis Pro

Introduction

Focused ion beam scanning electron microscopy (FIB-SEM) is a powerful imaging tool that achieves resolution under 10 nm. Though it produces highly detailed 3D image volumes, one drawback is that standard image processing segmentation algorithms struggle to detect many cellular structures of interest. This is largely because FIB-SEM captures the entirety of the cell, generating images dense with cellular features, structural edges, and varying pixel intensities. Due to this difficulty, quantitative analysis of FIB-SEM data often relies on manually drawing features of interest on 2D slices of a 3D image volume. Though this manual approach can be used to identify and reconstruct 3D objects from the image volume, it is tedious and time-consuming.

Previous work has focused on moving beyond this reliance on manual annotation to segment cellular structures from FIB-SEM image volumes. Notably, Parlakgül et al. (2022)1 recently took a deep-learning approach to identify mitochondria, the nucleus, the endoplasmic reticulum, and lipid droplets within FIB-SEM image volumes of liver cells. The neural network models trained on these organelles represented a step toward a comprehensive automated cell-profiler workflow for FIB-SEM image data, where the models could be applied to multiple image volumes to efficiently quantify these organelles. Here, we take a similar deep-learning approach, with the goal of developing a cell-profiling workflow that uses neural-network training and image analysis tools that are readily accessible to researchers and do not require coding.

In this study, we highlight the use of the ZEISS arivis Cloud (previously known as APEER) online deep learning platform in combination with the ZEISS arivis Pro image analysis software toolkit to facilitate automated profiling based on both large and small structures within a FIB-SEM image of a HeLa cell. The original dataset was kindly provided by Anna Steyer and Yannick Schwab, EMBL, Heidelberg, Germany. We used ZEISS arivis Cloud to train neural networks that identify large organelles: mitochondria and the nucleus. In arivis Cloud, we manually drew a subset of the instances of these cellular features and trained neural network models that successfully predict the remaining instances within the FIB-SEM image volume. These arivis Cloud-trained models were first used to infer the mitochondria and the nucleus in ZEISS arivis Pro. We then built ZEISS arivis Pro analysis pipelines to filter and refine the initial inferences into usable 3D segments.

Having defined the mitochondria and nucleus, we used the measurement and visualization tools in ZEISS arivis Pro to examine the cytoplasmic organization of the HeLa cell. We noticed that, even though our images are of low resolution and quality compared to current state-of-the-art FIB-SEM, we could visualize the nuclear membrane and the nuclear pores, and we sought to develop a method to assess their distribution. Through a series of pipeline workflows in ZEISS arivis Pro, we identified the nuclear pore complex (NPC) regions of the nuclear membrane. Our approach uses 3D operations that enable enhancement and segmentation of spatially resolved NPC-associated objects in a way that would not be possible by segmenting each 2D plane of the image stack separately. These objects are reliable proxies for the NPCs in distribution analysis and can subsequently be used to make 3D masks of individual NPCs for neural network trainings that respect the 3D nature of the data, which is necessary for accurate segmentation in cells.

Overall, this work highlights how the ZEISS arivis Cloud deep learning approach, when combined with the powerful 3D tools of ZEISS arivis Pro, enables 3D segmentation and measurements within FIB-SEM image sets.

Material and Methods

Figure 1A: HeLa cell image set with a cropped cross section to visualize organelles, including the nucleus and mitochondria. The 3D image set of a HeLa cell was collected on a ZEISS Crossbeam Auriga 60. Original dataset kindly provided by Anna Steyer and Yannick Schwab, EMBL, Heidelberg, Germany.

Acquisition of the FIB-SEM HeLa Cell Image Volume

The following steps were performed to acquire the 3D FIB-SEM image set of the HeLa cell (Figure 1A). First, the HeLa cell specimen was high-pressure frozen, freeze-substituted, and embedded in EPON resin. Next, 3D imaging was done on a ZEISS Crossbeam Auriga 60, where a focused ion beam was used to sequentially remove 8 nm thick layers of the specimen while the exposed block face was simultaneously scanned (using the ZEISS SESI and EsB detectors).

Figure 1B: The arivis Cloud platform was used to generate ground truth for training a neural-network model. This ground truth was created by drawing objects on mitochondria.

ZEISS arivis Cloud Deep Neural Network Training

The arivis Cloud platform uses a U-Net convolutional neural network for deep learning, trained on a representative subset of the images and hand-drawn segments ('ground truth') provided by the user. The number of segment types sets the number of model classes. To generate a ground truth class, the user paints instances of that class within the image. Here, mitochondria and the nucleus were painted within the cell as individual classes to be used for the training (Figure 1B-C).

Our (sparse) training sets comprised just 73 planes, spaced every 10 planes from the top of the volume to the bottom. We also trained on a minimal number of ground truth objects for each model: the nucleus model training used 53 objects (mostly the full nucleus, with some partial annotations) and the mitochondria model training used 1,133 objects (the majority were full mitochondria). Occasionally all the objects in a plane were painted, but most of the time only sub-portions of planes were painted.

Note that we also attempted to segment nuclear pores and the proxy objects directly via deep learning. The former fails because representing the many orientations of pores via manual 2D painting is not feasible. For the latter, we trained a network using 688 objects; the result was strongly biased toward producing objects with more completeness in the XY orientation, which makes for unreliable 3D processing and measurements (due to errors in XZ and YZ).

Neural network models for the mitochondria and the nucleus were trained individually, resulting in two separate deep learning models (.CZANN). These models were then run on the entire image set and are applicable to any comparable image sets to predict all instances of each feature class.

Figure 1C: The arivis Cloud platform was used to generate ground truth for training a neural-network model. This ground truth was created by drawing objects on the nucleus.

Image Segmentation and Analysis

Trained neural networks were used in ZEISS arivis Pro (v3.6) pipelines to separately infer the two classes. After creating initial predictions, a segment feature filter was used to remove small (orphan) objects that were not truly part of the mitochondria or the nucleus, followed by segment morphology operators to close off disjoined parts at the boundaries and remove small surface artifacts.
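
For readers working outside arivis Pro, the same cleanup can be approximated with open-source tools. The sketch below uses scikit-image stand-ins for the segment feature filter and segment morphology operators; the function name and the voxel threshold are illustrative assumptions, not values from the study.

```python
# Minimal sketch: drop orphan objects and smooth a deep-learning mask,
# mirroring the feature-filter and morphology steps described above.
# `prediction` is assumed to be a boolean 3D mask inferred by the model.
import numpy as np
from skimage.morphology import ball, binary_closing, binary_opening, remove_small_objects

def clean_prediction(prediction: np.ndarray, min_voxels: int = 1000) -> np.ndarray:
    mask = remove_small_objects(prediction.astype(bool), min_size=min_voxels)  # remove orphans
    mask = binary_closing(mask, ball(2))  # close disjoined parts at the boundaries
    mask = binary_opening(mask, ball(1))  # remove small surface artifacts
    return mask
```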

A series of ZEISS arivis Pro Analysis Pipelines was developed into several workflows aimed at producing relevant measurements from the mitochondrial and nuclear 3D objects. We also developed workflows that use the original nuclear mask to segment the well-defined low-density structures positioned inside the nucleus under the nuclear pores; we assume these structures are nuclear pore baskets with the underlying nucleoplasm. These under-NPC pockets were used as proxies to visualize and measure NPC distributions. We also derived 3D masks of the nuclear pores for a clearer and more direct visualization and for future 3D deep learning.

Selected ZEISS arivis Pro Analysis Pipeline Operations from our workflow are listed in the following table:

ZEISS arivis Pro

ZEISS arivis Pro is a modular software package for working with multi-channel 2D, 3D, and 4D images of almost unlimited size; it is highly scalable and independent of local system resources. Many modern microscope systems, such as high-speed confocal, light sheet/SPIM, super-resolution, electron microscopy, or X-ray instruments, can produce huge amounts of imaging data. ZEISS arivis Pro handles such datasets without constraints and in a relatively short time.

Software Processing


Workflow chart of ZEISS arivis Pro pipelines used for segmentation of cellular structures

Each PART label under the steps refers to a specific ZEISS arivis Pro pipeline (see the table below for more details). Each step in this sequence highlights the ZEISS arivis Pro pipeline operators used to achieve the image segmentation of cell features described in this study.

Selected ZEISS arivis Pro Analysis Pipeline Workflow Operations

Each entry below lists the operation, its purpose(s), and the pipeline(s) in which it is used.

Deep Learning Segmentation

Use arivis Cloud-trained networks to infer 3D arivis Pro Objects for the mitochondria and nucleus.

PART_1_DL_Segmentation_of_Mitochondria
PART_2_Nucleus_Nuclear_Membrane_Masking

Segment Feature Filter

Filter out small fragments or mistakes from segmentation.

Filter for true under-NPC pocket objects based on distance to the nuclear membrane.

PART_1... (by voxel count)
PART_2... (by voxel count)
PART_10_ML_Segmentation_Whole_Cell (by voxel count)
PART_11_Create_Mask_of_Pocket_Layer (by voxel count)
PART_13_LocThresh_Water_RegionGrow_Pockets (by 3D Distance microns)

Segment Morphology

Smooth surfaces of objects (open) and fill in missing regions (close).

Grow (dilate) and shrink (erode) nucleus and pocket objects.

PART_1... (open and close)
PART_2... (dilate and erode)
PART_11... (dilate)
PART_13... (open and close)

Object Math

Derive objects for the nuclear membrane mask and the band of nucleus that contains under-NPC pockets.

PART_2... (subtract)
PART_11… (intersect)

Object Mask

Make new images and channels that contain pixels inside specific objects (such as the nuclear membrane or pockets under NPC). All pixels outside these regions are set to zero.

PART_3_Creation_Nuclear_Membrane_Channel
PART_12_Create_Pocket_Channel
PART_13...
PART_14_Creation_Masks_Above_WaterRG_objects

Image Math

Combine enhanced images with raw images to facilitate segmentation.

Derive useful masks from the previously generated masks of under-NPC pockets and the nucleus.

PART_13... (averaging)

PART_14… (subtraction and addition)

Distances

Measure distances from pocket objects to nuclear membrane.

PART_13... (surface to surface)

Threshold Filter

Define the irregular 3D borders of the under-NPC pockets, using an adaptive local mean to set thresholds.

PART_13...

Machine Learning Segmenter

Segment the entire cell volume.

PART_10…

Denoising

Enhance the pixels representing the under-NPC pockets.

PART_13... (median, particle enhancement, and mean)

Watershed

Segment the pocket regions under the NPCs and split touching pockets into individual objects.

PART_13...

Region Growing

Grow pocket objects to ensure pockets are full and complete.

PART_13...

Morphology Filter

Connect (closing) and expand (dilation) patches of pixels as part of a no-code method to migrate the under-NPC masks outward and onto the nuclear pores.

PART_14… (dilation/closing)

Blob Finder

Individualize the masks that were migrated onto the pores.

PART_14…

ZEISS arivis Pro Analysis Pipelines Organized by Workflow

Segmentation of parts of the cell

PART_1_DL_Segmentation_of_Mitochondria
PART_2_Nucleus_Nuclear_Membrane_Masking
PART_3_Creation_Nuclear_Membrane_Channel
PART_10_ML_Segmentation_Whole_Cell

Testing whether under-NPC pockets can be segmented with a 2D DL approach

PART_4_Intitial_DL_Segmentation_UnderNPC_Objects
PART_5_Refinement_Filtering_UnderNPC_Objects
PART_6_Creation_Masks_Above_UnderNPC_objects
PART_7_Find_FilteredUNPCs_Far_From_Hats
PART_8_Creation_Masks_Above_FAR_UnderNPC_objects
PART_9_Filtering_for_Best_Hats

Using 3D operations to segment the under-NPC pockets

PART_11_Create_Mask_of_PocketLayer
PART_12_Create_Pocket_Channel
PART_13_LocThresh_Water_RegionGrow_Pockets

Using the segmented under-NPC pockets to derive visualization and ground truth for pores

PART_14_Creation_Masks_Above_WaterRG_objects
PART_15_Making_a_Ground_Truth_Volume

Execution in ZEISS arivis Pro

In this video tutorial, see how to segment subcellular structures in a FIB-SEM dataset in ZEISS arivis Pro using a deep learning model trained in arivis Cloud. An array of operations in ZEISS arivis Pro helps filter the deep learning predictions for accurate segmentation of the mitochondria, nuclear membrane, and nuclear pore complexes.

In this video tutorial, see how to segment the pocket layer to highlight the nuclear pore channels in a 3D volume. We use a number of operations, such as Object Math and Object Mask, to create a new pocket-layer channel.

Results

Figure 2A: Segmentation results from a trained deep learning model can be used to determine the percentage of cell volume occupied by organelles. The arivis Cloud-trained deep learning model was used to segment the FIB-SEM HeLa cell. This segmentation included the nucleus and the mitochondrial network.

Segmentation and measurements of organelles

The first step in our cell-profiling workflow was to use the neural network models from our arivis Cloud training to automate the measurement of organelle volume. The objects produced by our arivis Cloud models for the mitochondria and nucleus required several improvements, by both ZEISS arivis Pro Analysis Pipeline operations and manual proof-editing. The end products of the segmentation workflow are objects representing the volumes of the mitochondria and the nucleus (Figure 2A).

Figure 2B: A machine-learning pixel classifier was used to calculate the volume of the entire cell.

To compare the volume of these organelles with the entire cell volume, we modeled the entire HeLa cell as a single object (Figure 2B) using the machine learning pixel classifier in ZEISS arivis Pro. This classifier is based on a random forest algorithm and uses a few manually labeled pixels to classify all pixels within the image volume. We trained two classes, one for the cell and the other for the resin outside the cell.
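
As an illustration of the principle (not the arivis Pro implementation itself), a random-forest pixel classifier of this kind can be sketched with scikit-learn; the feature set and names below are assumptions made for the example.

```python
# Sketch of random-forest pixel classification: a few labeled voxels,
# simple per-voxel features, then prediction over the whole volume.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volume: np.ndarray) -> np.ndarray:
    """Per-voxel features: raw intensity plus two Gaussian-smoothed scales."""
    v = volume.astype(np.float32)
    feats = [v, ndimage.gaussian_filter(v, sigma=1.0), ndimage.gaussian_filter(v, sigma=4.0)]
    return np.stack([f.ravel() for f in feats], axis=1)

def classify_cell_vs_resin(volume, labeled_idx, labels):
    """Train on sparse labels (e.g. 1 = cell, 2 = resin), classify every voxel."""
    X = voxel_features(volume)
    rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    rf.fit(X[labeled_idx], labels)  # labeled_idx: flat indices of painted voxels
    return rf.predict(X).reshape(volume.shape)
```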

Figure 2C: Plotting the percentages of total cell volume occupied by the mitochondrial network and the nucleus demonstrates the ability of this deep-learning approach to automate the calculation of organelle volume in FIB-SEM data.

ZEISS arivis Pro computes the volume for all 3D Objects, which made it easy to calculate the percentage of total cell volume occupied by each organelle (Figure 2C). Our profiling was consistent with previous measurements, which have shown that mitochondrial volume is on average ~10% of the cytoplasm volume within HeLa cells (Posakony et al. 1977)2.

Figure 3A: Mitochondrial surface-area-to-volume ratios are negatively correlated with distance to membranes. Each distinct mitochondrion was measured for its surface-area-to-volume ratio. Mitochondria were color-coded according to this ratio to visualize their distribution in relation to the nucleus.

Mitochondrial characterization and spatial classification

Once we had segmented these organelles, we characterized their distribution within the cell. Specifically, we sought to visualize the distribution of the surface-area-to-volume ratios of the mitochondria. Color-coding the mitochondria based on this ratio highlights distinct distributions across the cell with respect to the nucleus (Figure 3A) and the cell surface (Figure 3B).

Figure 3B: Each distinct mitochondrion was measured for its surface-area-to-volume ratio. Mitochondria were color-coded according to this ratio to visualize their distribution in relation to the overall outline of the cell.

We then set up analysis pipelines in ZEISS arivis Pro to compute the distances of the mitochondria to these cellular structures. Measuring the distance of each mitochondrion's center of geometry to the nuclear membrane (Figure 3C) or to the plasma membrane (Figure 3D) alone did not reveal significant correlations. However, combining these two membranes and measuring the minimum distance of each mitochondrial center of geometry to either membrane revealed a significant correlation (Figure 3E).
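
The combined measurement is compact to express. The sketch below gives a SciPy formulation under assumed inputs (boolean membrane masks, mitochondrial centroids in voxel coordinates, and the voxel size); the study itself used the arivis Pro Distances operator.

```python
# Sketch: distance of each mitochondrial center of geometry to the nearest
# of two membranes, via Euclidean distance transforms of the membrane masks.
import numpy as np
from scipy import ndimage

def min_membrane_distance(centroids_vox, nuc_membrane, plasma_membrane, voxel_size):
    # Distance (in physical units) from every voxel to each membrane surface
    d_nuc = ndimage.distance_transform_edt(~nuc_membrane, sampling=voxel_size)
    d_pm = ndimage.distance_transform_edt(~plasma_membrane, sampling=voxel_size)
    idx = tuple(np.round(np.asarray(centroids_vox)).astype(int).T)
    return np.minimum(d_nuc[idx], d_pm[idx])  # minimum distance per mitochondrion
```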

These measurements highlight the ability of this approach to profile multiple distances between cell organelles and identify significant correlations. The method can be used with any segmented cell structures and can measure distances between object surfaces or centers of geometry. Moreover, it can be scaled using arivis VisionHub, so that multiple cell image sets can be analyzed in parallel to produce automated, high-quality profiles.

Figure 3C: The surface-area-to-volume ratio was not significantly correlated (p > 0.05) with mitochondrial distance to the nuclear membrane when this membrane was measured individually.

Figure 3D: The surface-area-to-volume ratio was not significantly correlated (p > 0.05) with mitochondrial distance to the plasma membrane when this membrane was measured individually.

Figure 3E: However, when the nuclear and plasma membranes were combined, so that the minimum distance to either of the two membranes was measured, the result was highly significant (p < 0.0001). The p-value displayed represents the probability that the resulting slope diverged from a null slope of zero by chance.

Figure 4A: Identification of pocket objects under the nuclear membrane. The starting point is the nucleus object produced by the arivis Cloud-trained model.

3D segmentation and distribution analysis of nuclear pore complex regions

The accuracy of our whole-nucleus and nuclear membrane masks enabled us to use the volumetric rendering and clipping tools in ZEISS arivis Pro to explore the nucleus and nucleus-associated structures in 3D. Struck by how well we could see the nuclear pores in this volume, we wondered whether we could use arivis Cloud and ZEISS arivis Pro to segment and measure them.

However, within individual 2D planes the nuclear pores are difficult to recognize. The resolution of the image provides only 100-150 voxels per pore (for reference, the image volume contains more than 1 billion voxels), and the 3D structure of each pore is uniquely oriented to the curvature of the nuclear membrane. Thus, a direct, traditional 2D deep learning approach would require extremely tedious annotation of NPCs in all possible orientations and would have to cover all variability in sample preparation and image acquisition. We decided instead to take advantage of the relatively large pockets under the pores, which we discovered to be in a 1:1 stratified relationship with the pores throughout the nucleus and which can be segmented in 3D. These pockets are regions with presumably less chromatin density and more transcripts, mobile protein complexes, etc.

Figure 4B: Erosion and object math were used to identify the nuclear membrane and the pocket regions under nuclear pore complexes.

To generate 3D models of these distinct regions, we used our nuclear surface as the starting point for a series of 3D processing, segmentation, and object-modifying operations in ZEISS arivis Pro. First, using the nucleus object (Figure 4A), an image mask was created to represent the volume just internal to the nuclear membrane where the low-density 'pockets' are located (Figure 4B). Several image processing steps were performed to emphasize the pockets within the images: image inversion and a 3D adaptive thresholding operation to remove lower pixel intensities outside the pockets (Figure 4C-D), followed by masking of the image based on the pocket layer and 3D particle enhancement of the pocket region (Figure 4E). Next, several operations were performed in sequence to resolve the pockets: a watershed algorithm to separate the pockets, a 3D region-growing operation to extend them beyond the borders of the arbitrary pocket layer, and a 3D distance operation to calculate the closest surface-to-surface distance of each pocket object to the nuclear membrane. Finally, a filtering operation was run to keep only the objects located closest to the nuclear membrane (Figure 4F).
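
A rough open-source analogue of this sequence (omitting the region-growing step) might look as follows; inputs such as the pocket-layer and membrane masks, and all parameter values, are assumptions for the sketch, as the study used the native arivis Pro operators listed in the table above.

```python
# Sketch of the pocket workflow: inversion, 3D adaptive local-mean
# thresholding, watershed splitting, and a distance filter to the membrane.
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_pockets(volume, pocket_layer, membrane_mask, size=15, max_dist=5):
    inv = volume.max() - volume  # invert so the low-density pockets become bright
    local_mean = ndimage.uniform_filter(inv.astype(np.float32), size=size)
    binary = (inv > local_mean) & pocket_layer  # 3D adaptive thresholding

    # Watershed on the distance transform splits touching pockets
    dist = ndimage.distance_transform_edt(binary)
    peaks = peak_local_max(dist, labels=binary, min_distance=3)
    markers = np.zeros(binary.shape, dtype=np.int32)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    pockets = watershed(-dist, markers, mask=binary)

    # Keep only pockets whose surface lies close to the nuclear membrane
    d_membrane = ndimage.distance_transform_edt(~membrane_mask)
    keep = [lbl for lbl in np.unique(pockets)
            if lbl > 0 and d_membrane[pockets == lbl].min() <= max_dist]
    return np.where(np.isin(pockets, keep), pockets, 0)
```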

Figure 4C: Pocket region under nuclear pore complexes.

Figure 4D: Image inversion and thresholding were done to enhance these regions.

Figure 4E: Masking using the membrane region and denoising were performed to accentuate these pockets.

Figure 4F: Segmentation, followed by region growing and filtering based upon distance to the membrane, allowed us to define objects representing these pockets.

Figure 5A: Nuclear pore complexes (NPCs) have a variable density distribution across areas of the nucleus. The densities of nuclear pore complexes were determined by taking the 3D centroid of each NPC object and calculating a Gaussian kernel density, with a kernel radius of 0.1 µm, using a custom Python script.

We used these objects as proxies to view and quantify the 3D distribution of NPCs throughout the nuclear membrane. To accomplish this, we used the ZEISS arivis Pro Python application programming interface (API) to make a custom Python operator integrated with the arivis Pro software. Specifically, this Python operator takes the 3D centroids of all the objects and uses the kernel density function from the scikit-learn Python library to calculate their 3D densities. The kernel density function computes, for each object, a score based on the number of other objects in proximity, with Gaussian smoothing of the scores over a given radius. The custom Python operator then exports the resulting density scores as a numeric feature of these objects within arivis Pro. Color-coding the objects according to their density score permitted visual assessment of the distribution of these scores (Figure 5A).
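
A minimal sketch of this scoring step, assuming the centroids have already been retrieved from the objects as an (N, 3) array of coordinates in microns (the names below are illustrative, not the actual arivis Pro API):

```python
# Gaussian kernel density scoring of NPC-proxy centroids with scikit-learn.
import numpy as np
from sklearn.neighbors import KernelDensity

def npc_density_scores(centroids_um: np.ndarray, radius_um: float = 0.1) -> np.ndarray:
    """Score each object by the Gaussian-smoothed density of nearby objects."""
    kde = KernelDensity(kernel="gaussian", bandwidth=radius_um).fit(centroids_um)
    # score_samples returns log-density; exponentiate for a linear score
    return np.exp(kde.score_samples(centroids_um))
```

The resulting scores can then be written back to the objects as a numeric feature and used for color-coding, as described above.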

Figure 5B: The density distribution of NPCs is significantly different across separate areas of the nucleus. Sectioning the nucleus into two sections, a larger and a smaller section, based upon the nuclear cleavage furrow reveals significant differences in kernel density scores.

To further characterize NPC distribution across the nuclear membrane, the nucleus was divided into two sections based on the nuclear invagination (Figure 5B).

Figure 5C: A two-tailed t-test was performed to calculate the significance of differences between the scores in these two sections.

Comparing the density scores of these two sections shows that NPC density is higher within the smaller section of the nucleus, which has higher curvature. In contrast, the larger section, with a lower degree of curvature, has more low-density regions of nuclear pores. Overall, this indicates that our measured nuclear pore density varies over different portions of the nucleus.
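
For reference, the comparison reported in Figure 5C corresponds to a standard two-tailed t-test, sketched here with SciPy (the two score arrays are assumed to have been exported from the sections):

```python
# Two-tailed t-test on the kernel-density scores of the two nuclear sections.
from scipy import stats

t_stat, p_value = stats.ttest_ind(scores_smaller_section, scores_larger_section)
```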

Figure 6A: Pockets under the nuclear pores can be used to identify and visualize nuclear pore complexes. Several processing steps were performed to create masks of nuclear pore complexes from the pocket objects. The starting point is the pocket objects.

Finally, we set out to mask the NPCs to create ground truth for a new 3D deep learning neural network that will segment them directly. We were able to use the under-NPC objects to derive objects representing the actual pores. Our initial idea for segmenting the NPC particles was to compute a directional region-growing vector from the 3D geometric centroid of each under-NPC object towards the nuclear membrane, which would with high probability place us within the associated NPC and allow us to create an accurate mask. Instead, we discovered a way to achieve a roughly similar result without having to write any code. Several masking and morphology operations were used to segment the volume between each pocket and the outer part of the nuclear membrane (Figure 6A-D). This volume was then dilated to cover the entire NPC (Figure 6E).
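
In open-source terms, the trick amounts to closing the union of the pocket and membrane masks and keeping the filled gap. The sketch below is an assumed scikit-image rendering of the idea (compare Figures 6B-E), not the actual arivis Pro pipeline.

```python
# Sketch: migrate under-NPC pocket masks outward onto the pores by
# bridging the pocket-to-membrane gap with a morphological closing.
import numpy as np
from skimage.morphology import ball, binary_closing, binary_dilation

def pore_masks(pockets: np.ndarray, membrane: np.ndarray) -> np.ndarray:
    union = pockets | membrane
    closed = binary_closing(union, ball(3))  # bridge pockets to the membrane
    gap = closed & ~union                    # the filled "white space" between them
    return binary_dilation(gap, ball(2))     # dilate to cover the entire NPC
```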

Figure 6B: Taking the pocket objects, a binary masked image was generated.

Figure 6C: This was followed by a closing operation connecting the pockets to the nuclear membrane.

Figure 6D: Next, the nuclear membrane and pockets were used to mask the white space shown in panel C.

Figure 6E: These objects were then dilated.

Figure 6F: Masking using these objects (see figures above) enhances the visualization of nuclear pore complexes.

Making a new image mask from these objects highlights the nuclear pore complexes, as visualized in Figure 6F. While not as precise a result as we would expect from our original concept, this approach worked remarkably well, and with manual curation we will be able to create ground truth annotations for many thousands of pores for the subsequent deep learning training.

Summary

In this study, we present novel approaches to efficiently segment subcellular structures from FIB-SEM imaging data. Using the ZEISS arivis Cloud platform to perform convolutional neural network training, along with the ZEISS arivis Pro image analysis software, we were able both to expedite the creation of objects representing cellular structures (mitochondria and the nucleus) and to use these structures to develop analysis pipelines that identify additional, smaller structures (nuclear pore regions). Moreover, we took advantage of the arivis Pro Python API to extend the analytical capabilities of arivis Pro and measure the density of nuclear pore regions across the nucleus.

Our findings open new avenues for workflows that combine traditional and deep learning algorithms with prior biological knowledge. For instance, our approach of generating objects in the proximity of the NPCs can help identify nuclear pores in 3D regions where the presence of a nuclear pore may be unclear from plane-wise 2D analysis alone. These 3D objects representing the nuclear pores can be used as ground truth for deep learning training of neural networks. Specifically, because these nuclear pore objects are 3D, varying XY, XZ, and YZ planes of these 3D regions can be taken as ground truth to train the network. We plan to augment these NPC annotations along numerous image axes, as sketched below, thereby multiplying the number of training instances for each nuclear pore while preserving the structural pattern of this protein complex. This approach would not be possible using ground truth annotations on individual 2D planes only. Here we demonstrate a successful application of a complex workflow which, once established, can be scaled up for automatic segmentation, quantitative analysis, and profiling in arivis VisionHub.
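
As a small sketch of the planned augmentation (assuming a Z, Y, X NumPy layout for the ground-truth volume), 2D training planes can be harvested along all three axes:

```python
# Yield every XY, XZ, and YZ plane of a 3D ground-truth volume, multiplying
# the 2D training instances while preserving the pore's 3D structural pattern.
import numpy as np

def planes_along_all_axes(gt_volume: np.ndarray):
    for axis in range(3):  # 0 -> XY planes, 1 -> XZ planes, 2 -> YZ planes
        for i in range(gt_volume.shape[axis]):
            yield np.take(gt_volume, i, axis=axis)
```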

Try It for Yourself

Download a trial version of ZEISS arivis Pro



References

1. Parlakgül, G., Arruda, A.P., Pang, S., Cagampan, E., Min, N., Güney, E., Lee, G.Y., Inouye, K., Hess, H.F., Xu, C.S. and Hotamışlıgil, G.S., 2022. Regulation of liver subcellular architecture controls metabolic homeostasis. Nature, 603(7902), pp. 736-742.
2. Posakony, J.W., England, J.M. and Attardi, G., 1977. Mitochondrial growth and division during the cell cycle in HeLa cells. The Journal of Cell Biology, 74(2), pp. 468-491.