AI in Microscopy: Deep Learning for Image Analysis

Abstract
Advanced microscopy techniques generate increasingly vast and complex datasets that require sophisticated computational tools for analysis. Artificial Intelligence (AI), particularly Machine Learning and Deep Learning, is revolutionizing microscopy, enhancing every step of the workflow from data acquisition and preprocessing to image segmentation and high-level analysis. AI integration promises unprecedented accuracy and precision in segmenting regions of interest within images, a capability crucial to many microscopy applications.
Key Learnings:
- Machine Learning (ML) is fast to train and suitable for many applications, but it has limitations, particularly in segmenting objects against complex backgrounds.
- Deep Learning uses a large number of trainable parameters to capture complex textural detail in images. This enables robust image segmentation even when intensity profiles vary.
- There are two types of Deep Learning (DL) segmentation: Semantic Segmentation, which is better suited to segmenting large regions, and Instance Segmentation, which is suited to segmenting individual objects within images.


What are AI, ML, and DL?
There is a hierarchical relationship between Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL):
- AI: the broadest concept, encompassing any technique that enables computers to mimic human intelligence.
- ML: a subset of AI, focuses on algorithms that allow machines to learn from data and make predictions or decisions based on it.
- DL: the most specialized of the three, is a subset of ML that uses artificial neural networks to process vast amounts of data, mimicking the human brain's structure and function.


Conventional Machine Learning
Conventional ML relies on human-designed feature extraction, where specific characteristics or patterns are identified and isolated from the raw image data. These engineered features are then fed into a Machine Learning classifier, such as a random forest algorithm. This classifier learns to categorize or make predictions based on the extracted features.
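To make this concrete, here is a minimal sketch of such a pipeline, assuming scikit-image for hand-engineered features and a scikit-learn random forest as the classifier. The feature set and parameters are illustrative choices, not the pipeline of any specific microscopy software.

```python
# Pixel classification with hand-engineered features and a random forest.
# The chosen features (smoothing, edge strength, Laplacian) are illustrative
# examples of human-designed feature extraction.
import numpy as np
from skimage import filters
from sklearn.ensemble import RandomForestClassifier

def extract_features(image):
    """Stack simple per-pixel features: raw intensity, smoothing, edges, curvature."""
    feats = [
        image,
        filters.gaussian(image, sigma=2),
        filters.sobel(image),      # edge strength
        filters.laplace(image),    # second-derivative response
    ]
    return np.stack([f.ravel() for f in feats], axis=1)  # shape (n_pixels, n_features)

def train_pixel_classifier(image, labels):
    """labels: integer mask, 0 = unlabeled, >0 = class scribbles drawn by a user."""
    X = extract_features(image)
    y = labels.ravel()
    annotated = y > 0                                     # train only on annotated pixels
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(X[annotated], y[annotated])
    return clf

def predict_segmentation(clf, image):
    X = extract_features(image)
    return clf.predict(X).reshape(image.shape)
```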


Deep Learning
Unlike conventional Machine Learning, where features are manually engineered, DL algorithms – particularly Convolutional Neural Networks (CNNs) – learn to extract relevant features directly from raw data. The left side of this figure shows the input image and the network architecture, with multiple layers that progressively process the image. The right side displays the learned feature kernels and the resulting feature maps. These kernels act as filters, automatically detecting patterns at various levels of abstraction ─ from simple edges to complex structures. As the network deepens, it learns increasingly sophisticated features, enabling it to capture intricate details and relationships within the data. This automatic feature learning from vast amounts of training data is what gives Deep Learning its power and flexibility in image analysis tasks, surpassing traditional Machine Learning approaches in many complex scenarios.
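As a minimal illustration of learned feature extraction (not a production segmentation network), the PyTorch sketch below builds a small CNN whose convolution kernels are fitted from training data rather than designed by hand; the layer sizes are arbitrary, and both the kernels and the intermediate feature maps can be inspected directly.

```python
# A toy convolutional network: each Conv2d layer holds learnable kernels that are
# fitted during training. Layer widths and depths are arbitrary illustration values.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # early layers: edges, blobs
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layers: textures, shapes
            nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, n_classes, kernel_size=1)  # per-pixel class scores

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyCNN()
image = torch.randn(1, 1, 256, 256)              # one grayscale image (batch, channel, H, W)
logits = model(image)                            # shape (1, n_classes, 256, 256)

# The learned kernels and intermediate feature maps are ordinary tensors:
first_layer_kernels = model.features[0].weight   # shape (16, 1, 3, 3)
feature_maps = model.features(image)             # shape (1, 32, 256, 256)
```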


Sample image courtesy of Anna Steyer and Yannick Schwab, EMBL. Segmentation by Dr. Christopher Zugates, ZEISS Dublin Demo Center US
Machine Learning vs. Deep Learning for Image Segmentation
ML is quick to train and requires relatively little labeled data, making it suitable for many tasks. However, it struggles with complex scenarios, such as segmenting objects against busy backgrounds. DL, on the other hand, excels in these areas by leveraging numerous trainable parameters to capture complex textural information. The following examples highlight the advantages of DL over ML in image segmentation.


Mitochondria Segmentation
The figure illustrates DL's superior performance in segmenting mitochondria. While the ML model works well on the training image (slice 50), it struggles with adjacent slices (49 and 51), mislabeling partial mitochondria and background pixels. In contrast, the DL model achieves excellent segmentation on images not used in training, demonstrating greater generalizability.
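Generalizability of this kind is typically quantified by comparing predictions against ground-truth masks on slices held out from training, for example with the Dice coefficient. A minimal NumPy sketch, with placeholder array names:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks (1 = mitochondrion, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Evaluate on slices adjacent to the training slice (placeholder arrays):
# for z in (49, 51):
#     print(z, dice_score(predicted_stack[z], ground_truth_stack[z]))
```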


Sample courtesy of the Bernthaler group at Hochschule Aalen.
Grain Segmentation
The figure shows grain boundary segmentation in an Al2O3 micrograph. Although both ML and DL results appear accurate initially, closer inspection (blue arrows) reveals grain boundaries missed by the ML model, which incorrectly merges neighboring grains into larger ones. Consequently, grain size analysis based on the ML result leads to an incorrect grain size distribution. The grain maps show a large grain (in red) in the ML-segmented image, while the DL-segmented image accurately represents the true grain distribution.
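The downstream grain size analysis can be sketched as follows, assuming a labeled grain map (one integer label per grain) and scikit-image; the variable names and pixel size are placeholders. A missed boundary merges neighboring grains into a single label, inflating the measured sizes and skewing the distribution toward larger grains.

```python
# Grain size distribution from a labeled grain map (one integer label per grain).
import numpy as np
from skimage import measure

def grain_size_distribution(label_map, pixel_size_um=1.0):
    """Return equivalent-circle diameters (in micrometers) for every grain."""
    areas_px = np.array([p.area for p in measure.regionprops(label_map)])
    areas_um2 = areas_px * pixel_size_um ** 2
    return np.sqrt(4.0 * areas_um2 / np.pi)   # equivalent-circle diameter per grain

# diameters = grain_size_distribution(dl_label_map, pixel_size_um=0.05)
# print(np.percentile(diameters, [10, 50, 90]))
```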


Semantic vs. Instance Segmentation
There are two primary approaches to Deep Learning segmentation, contrasted in the sketch after this list:
- Semantic segmentation assigns class labels down to the pixel level, making it suitable for segmenting large regions, such as ferrite and martensite in steels or various tissue sections in biological samples.
- Instance segmentation assigns class labels to individual objects, which is ideal when detailed object-level information is required, such as grains in alloys or cells in tissues.
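To illustrate the difference in outputs, the sketch below thresholds a per-pixel probability map into a semantic mask and then derives instance labels by connected-component labeling with scikit-image. Here `prob_map` is a placeholder for a network output, and the labeling step is only for illustration; dedicated instance-segmentation networks predict object instances directly rather than via this post-processing.

```python
# Semantic mask vs. instance labels, starting from a per-pixel probability map.
import numpy as np
from skimage import measure

prob_map = np.random.rand(256, 256)          # placeholder for a network's per-pixel output

semantic_mask = prob_map > 0.5               # semantic: every pixel is "object" or "background"

instance_map = measure.label(semantic_mask)  # instance: each connected object gets an id 1, 2, 3, ...
n_objects = instance_map.max()

# semantic_mask answers: which pixels belong to the class?
# instance_map answers: which pixels belong to which individual object?
```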