Focused ion beam scanning electron microscopy (FIB-SEM) is a powerful imaging tool that achieves resolution under 10 nm. Though it produces highly detailed 3D image volumes, one drawback is that standard image processing segmentation algorithms struggle to detect many cellular structures of interest. This is largely because FIB-SEM highlights the entirety of the cell, generating images dense with cellular features, structural edges, and varying pixel intensities. Due to this difficulty, quantitative analysis of FIB-SEM data often relies on manual drawing of features of interest on 2D slices of a 3D image volume. Though this manual approach can be used to identify and reconstruct 3D objects from the image volume, it is tedious and time-consuming.
Previous work has focused on moving beyond this reliance on manual annotation to segment cellular structures from FIB-SEM image volumes. Notably, Parlakgül et al. (2022)1 recently took a deep-learning approach to identify mitochondria, the nucleus, the endoplasmic reticulum, and lipid droplets within FIB-SEM image volumes of liver cells. The resulting neural-network models represented a step toward a comprehensive automated cell-profiler workflow for FIB-SEM image data, as the models could be applied to multiple image volumes to efficiently quantify these organelles. Here, we take a similar deep-learning approach, with our goal being the development of a cell-profiling workflow that uses neural-network training and image analysis tools that are readily accessible to researchers and do not require coding.
In this study, we highlight the use of the APEER online deep learning platform in combination with the arivis Vision4D image analysis software toolkit to facilitate automated profiling based on both large and small structures within a FIB-SEM image of a HeLa cell. The original dataset was kindly provided by Anna Steyer and Yannick Schwab, EMBL, Heidelberg, Germany. We used APEER to train neural networks that identify large organelles: mitochondria and the nucleus. Using the APEER platform, we manually annotated a subset of the instances of these cellular features and trained neural-network models that successfully predicted the remaining instances within the FIB-SEM image volume. These APEER-trained models were then used to infer mitochondria and the nucleus in arivis Vision4D. We then built Vision4D analysis pipelines to filter and refine the initial inferences into usable 3D segments.
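The filtering step itself is performed in Vision4D's no-code pipeline interface, but the underlying operation — turning a raw inference map into discrete, size-filtered 3D segments — can be sketched in Python. The probability volume, threshold, and minimum object size below are arbitrary illustrations, not the data or parameters used in this study.

```python
import numpy as np
from scipy import ndimage

# Stand-in for a neural-network inference map over a (z, y, x) volume,
# with per-voxel values in [0, 1]. Purely synthetic for illustration.
rng = np.random.default_rng(0)
prob = rng.random((20, 64, 64))

# Threshold the inference map into a binary mask.
mask = prob > 0.95

# Label 3D connected components so voxels touching across slices
# form a single object.
labels, n = ndimage.label(mask)

# Filter out small spurious segments by voxel count (illustrative cutoff).
sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
min_voxels = 5
keep = np.isin(labels, np.flatnonzero(sizes >= min_voxels) + 1)

# Relabel the surviving objects to get the final 3D segments.
filtered, n_kept = ndimage.label(keep)
print(f"{n} raw objects -> {n_kept} after size filtering")
```

Connected-component labeling followed by size filtering is a generic post-processing pattern; Vision4D exposes equivalent 3D operations through its pipeline builder without requiring any code.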
Having defined the mitochondria and nucleus, we used the measurement and visualization tools in Vision4D to examine the cytoplasmic organization of the HeLa cell. We noticed that, even though our images are of low resolution and quality compared to current state-of-the-art FIB-SEM, we could visualize the nuclear membrane and the nuclear pores, and we sought to develop a method to assess their distribution. Through a series of pipeline workflows in Vision4D, we identified the nuclear pore complex (NPC) regions of the nuclear membrane. Our approach utilizes 3D operations that enable enhancement and segmentation of 3D spatially resolved NPC-associated objects in a way that would not be possible by segmenting each 2D plane separately within the image stack. These objects serve as reliable proxies for the NPCs in distribution analyses and can subsequently be used to make 3D masks of individual NPCs for neural-network training that respects the 3D nature of the data, which is necessary for accurate segmentation in cells.
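The advantage of true 3D operations over plane-by-plane segmentation can be illustrated with a toy example: an object spanning several consecutive z-slices is counted once by 3D connected-component labeling, but fragmented into one object per slice when each 2D plane is segmented separately. The volume below is synthetic and only illustrates the principle, not the NPC data from this study.

```python
import numpy as np
from scipy import ndimage

# Toy binary volume: a single small object spanning three consecutive
# z-slices (e.g., a structure cut through by the imaging planes).
vol = np.zeros((5, 8, 8), dtype=bool)
vol[1:4, 3:5, 3:5] = True

# 3D labeling treats the stack as one connected object...
_, n3d = ndimage.label(vol)

# ...whereas labeling each 2D plane separately fragments it.
n2d = sum(ndimage.label(plane)[1] for plane in vol)

print(n3d, n2d)  # -> 1 3
```

For densely packed structures like NPCs, this fragmentation would both inflate object counts and distort any distribution statistics computed from them, which is why the 3D pipeline operations matter here.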
Overall, this work highlights how the APEER deep learning approach, when combined with the powerful 3D tools of Vision4D, enables 3D segmentation and measurements within FIB-SEM image sets.