In biology and medicine, researchers use microscopes to observe details of cells that are invisible to the naked eye. Transmitted-light microscopy, in which a biological sample is illuminated from one side to produce an image, is relatively simple to use and is well tolerated by live cultures, but the resulting images are difficult to interpret correctly. Fluorescence microscopy uses fluorescent molecules to stain the specific targets to be observed (such as the nucleus), which simplifies analysis but still requires complex sample preparation. With machine-learning techniques, from automatic image-quality assessment to assisting pathologists in diagnosing cancerous tissue, seeing increasing use in microscopy, Google asked whether a deep learning system could combine the two microscopy techniques and minimize the shortcomings of each.
On April 12, Google published a research blog post that combines transmitted-light and fluorescence microscopy, using deep learning to predict separate fluorescent labels from transmitted-light images of cells. Lei Feng Network's AI Science Review compiles the research as follows:
In the April 12 issue of Cell, Google published the paper "In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images," which shows that a deep neural network can predict fluorescence images from transmitted-light images without modifying the cells. Generating labeled, useful images this way enables long-term follow-up analysis of unmodified cells, minimally invasive cell screening for cell therapy, and the simultaneous analysis of large numbers of labels. For this study, Google open-sourced the network design, the complete training and test data, trained model checkpoints, and sample code.
Although transmitted-light microscopy is easy to use, it produces images that are difficult to interpret. For example, the image below was obtained with a phase-contrast microscope, where the intensity of a pixel indicates the degree of phase change as light passes through the sample.
The images above show human motor neuron cultures derived from induced pluripotent stem cells under a transmitted-light microscope (using phase contrast). Example 1: a cluster that may contain neurons. Example 2: a defect in the image that obscures the cells beneath it. Example 3: neurites. Example 4: possibly dead cells. Scale bar: 40 μm. The images are from the Finkbeiner laboratory at the Gladstone Institutes.
In the figure above, it is difficult to tell how many cells are in the cluster in Example 1, or where the cells in Example 4 are and what state they are in (hint: there is a barely visible flat cell in the upper middle). It is also difficult to keep fine structures, such as the neurites in Example 3, consistently in focus.
We can gather more information under a transmitted-light microscope by acquiring images at different z heights: a set of images at the same (x, y) position in which z, the distance from the camera, is varied systematically. Different parts of the cell come in and out of focus across the stack, providing information about the sample's 3D structure. Unfortunately, usually only experienced analysts can interpret such images, and z-stacks pose a major challenge for automated analysis. Below is an example of a z-stack.
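To make the z-stack idea concrete, here is a small sketch (an illustration only, not from Google's code) that represents a z-stack as a 3D array of shape (z, y, x) and uses a simple Laplacian-based sharpness score to find the best-focused slice. The helper names `sharpness` and `best_focused_slice` are hypothetical:

```python
import numpy as np

def sharpness(image):
    """Variance of a simple Laplacian response: higher means more in focus."""
    lap = (-4 * image[1:-1, 1:-1]
           + image[:-2, 1:-1] + image[2:, 1:-1]
           + image[1:-1, :-2] + image[1:-1, 2:])
    return lap.var()

def best_focused_slice(z_stack):
    """z_stack: array of shape (z, y, x); returns index of the sharpest slice."""
    return int(np.argmax([sharpness(s) for s in z_stack]))

# Synthetic example: a sharp checkerboard blurred by different amounts,
# standing in for slices acquired above and below the focal plane.
sharp = np.indices((32, 32)).sum(axis=0) % 2.0  # checkerboard pattern

def blur(img, n):
    for _ in range(n):  # repeated wrap-around box blur
        img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return img

stack = np.stack([blur(sharp, 4), blur(sharp, 1), sharp, blur(sharp, 2)])
print(best_focused_slice(stack))  # 2, the unblurred slice
```

An automated pipeline could use such a score to pick a reference plane, but as the article notes, real analysis needs the information spread across all the slices, which is what makes z-stacks hard to automate.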
A phase-contrast z-stack of the same cells. Note how the appearance of the cells changes as the focus shifts. We can now see that the blurry shape in the lower right of Example 1 is a single elliptical cell, and that the rightmost cell in Example 4 sits much higher than the cells near it, which may indicate that it has undergone programmed cell death.
Compared with the transmitted-light images above, the fluorescence-microscope image below is much easier to analyze, because the fluorescent labels are chosen to mark exactly the structures the researchers want to observe. For example, most human cells have exactly one nucleus, so a nuclear label (blue in the image below) makes it possible to count the cells in an image with simple tools.
Above is an image of the same cells under a fluorescence microscope. The blue fluorescent label marks DNA, highlighting the nuclei. The green fluorescent label marks a protein found only in dendrites, one kind of neural substructure. The red fluorescent label marks a protein found only in axons, another neural substructure. The separated fluorescent labels make the sample much easier to interpret. For example, the green and red labels in Example 1 confirm that it is a neural cluster; the red label in Example 3 shows that the structures are axons rather than dendrites; and the blue label in the upper left of Example 4 reveals a nucleus that was hard to see under the transmitted-light microscope, while the object on its left lacks blue and is therefore DNA-free cell debris.
At the same time, fluorescence microscopy has significant drawbacks. First, sample preparation and the fluorescent labels themselves introduce complexity and variability. Second, when a sample contains many different fluorescent labels, spectral overlap can make it hard to tell which color corresponds to which label, so researchers are usually limited to three or four labels in the same sample. Third, fluorescent labels can be toxic to the cells and sometimes kill them outright, which makes them difficult to use in longitudinal studies that require observing cells over long periods.
Seeing more possibilities with deep learning
In Google's paper, the authors show that a deep neural network can predict separate fluorescence images from transmitted-light z-stacks. To do this, they created a dataset matching transmitted-light z-stacks to the corresponding fluorescence images, and trained a neural network to predict the fluorescence images from the z-stacks. The training process is illustrated below.
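As a toy illustration of this pairing (not Google's actual model, and with made-up dimensions), the sketch below builds synthetic z-stacks registered pixel-for-pixel to synthetic fluorescence images, "trains" a per-pixel linear map from the z slices to fluorescence intensity by least squares, and applies it to an unseen stack:

```python
import numpy as np

# Toy stand-in for the paired dataset: each training example is a
# transmitted-light z-stack (Z slices per pixel) registered to a
# fluorescence image of the same scene.
rng = np.random.default_rng(42)
Z, H, W = 5, 16, 16                        # slices, height, width (made up)
true_w = rng.normal(size=Z)                # hidden per-slice weights

def make_example():
    stack = rng.normal(size=(Z, H, W))               # "z-stack"
    fluor = np.tensordot(true_w, stack, axes=1)      # "fluorescence image"
    return stack, fluor

# Design matrix: one row per pixel, one column per z slice.
stacks, fluors = zip(*(make_example() for _ in range(8)))
X = np.concatenate([s.reshape(Z, -1).T for s in stacks])  # (pixels, Z)
y = np.concatenate([f.ravel() for f in fluors])

w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares "training"

# Predict a fluorescence image for an unseen z-stack.
new_stack, new_fluor = make_example()
pred = (new_stack.reshape(Z, -1).T @ w).reshape(H, W)
print(np.allclose(pred, new_fluor, atol=1e-6))  # True
```

The real system replaces this per-pixel linear map with a deep network that also sees each pixel's spatial neighborhood, but the data layout, paired inputs and pixel-wise fluorescence targets, is the same idea.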
An overview of the training system: (A) The dataset of training examples: pixel-registered pairs of transmitted-light z-stack images and fluorescence images of the same scene. The fluorescence images vary in color from example to example because different fluorescent labels were used; checkerboard-like patches mark pixels for which no label is available. (B) An untrained deep network makes poor predictions on data from A; (C) after training, its predictions on A are accurate. (D) A transmitted-light z-stack of a scene outside the training set. (E) The trained network predicts fluorescence labels, in the style of C, for each pixel of the new image D.
In the course of this research, Google drew inspiration from Inception's modular design and developed a new type of neural network built from three basic kinds of building block: a scale-preserving module, which leaves the spatial scale of its features unchanged; a downscaling module, which halves the spatial scale; and an upscaling module, which doubles it. This turns the network-design challenge into two simpler problems: how to arrange the building blocks (the macro-architecture), and how to design the blocks themselves (the micro-architecture). Google solved the first problem using design principles discussed in the paper, and the second with an automated search using Google Hypertune.
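The division of labor between the three module types can be sketched with toy numpy stand-ins (assumed names; the real modules are learned Inception-style blocks). Only the downscaling and upscaling modules change the spatial size, so arranging the macro-architecture reduces to bookkeeping about scales:

```python
import numpy as np

def keep_scale(x):
    """Scale-preserving module: spatial size unchanged (here, a 3x3 mean)."""
    pad = np.pad(x, 1, mode="edge")
    return sum(pad[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def downscale(x):
    """Downscaling module: halves each spatial dimension (2x2 average)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale(x):
    """Upscaling module: doubles each spatial dimension (nearest neighbor)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Macro-architecture: compose blocks; only down/upscaling change the size.
x = np.ones((64, 64))
y = upscale(keep_scale(downscale(downscale(x))))
print(y.shape)  # (32, 32)
```

In the real network each module is a learned multi-branch block found by the automated search, but the same scale accounting governs how blocks can be chained.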
To validate the method, Google used data from an Alphabet lab and two external partners: Steve Finkbeiner's laboratory at the Gladstone Institutes, and the Rubin laboratory at Harvard. These data cover three transmitted-light imaging modalities (bright-field, phase contrast, and differential interference contrast) and three culture types (human motor neurons derived from induced pluripotent stem cells, rat cortical cultures, and human breast cancer cells). Google found that the method accurately predicts several fluorescent labels, including those for nuclei, for cell type (such as neurons), and for cell state (such as cell death). The figure below shows the model's predicted fluorescence labels for a transmitted-light input of neurons.
Input: transmitted-light images of neurons. Output: predicted fluorescence labels.
The example shows the same cells as seen under transmitted light, as seen with fluorescent labels, and as predicted by Google's model. Although the input image in Example 2 contains an artifact, the model still predicts the correct labels. In Example 3 the model infers that the projections are axons, presumably because of their distance from the nearest cells. In Example 4 the model finds the hard-to-see cell at the top and correctly identifies the object on the left as DNA-free cell debris.
Try it yourself!
Google has open-sourced the model, the complete dataset, the training and inference code, and an example. Google also claims that new labels can be learned with minimal additional training data: in the paper and the sample code, Google shows that a new fluorescent label can be learned from a single image. This is possible thanks to transfer learning: a model that has already mastered a similar task can learn a new task faster and from less training data.
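The transfer-learning idea can be illustrated with a toy numpy sketch (hypothetical names and dimensions, not Google's code): a "pretrained" feature extractor is kept frozen, and only a small output head is fit on a single image of the new label:

```python
import numpy as np

rng = np.random.default_rng(7)
Z = 5  # z slices per pixel

# "Pretrained" feature extractor: in transfer learning this part is reused
# from a model trained on other labels; here it is a fixed random map.
W_feat = rng.normal(size=(Z, 16))

def features(stack):
    """Map a (Z, H, W) z-stack to per-pixel features of shape (pixels, 16)."""
    return np.tanh(stack.reshape(Z, -1).T @ W_feat)

# A single training image for the *new* fluorescent label.
stack = rng.normal(size=(Z, 8, 8))
target_head = rng.normal(size=16)             # hidden "true" output head
new_label = features(stack) @ target_head     # its fluorescence values

# Fine-tune: fit only the small output head on that one image.
head, *_ = np.linalg.lstsq(features(stack), new_label, rcond=None)

# The fitted head generalizes to an unseen z-stack.
test_stack = rng.normal(size=(Z, 8, 8))
pred = features(test_stack) @ head
truth = features(test_stack) @ target_head
print(np.allclose(pred, truth, atol=1e-6))  # True
```

Because the frozen features already capture the structure of the task, only a few parameters need to be estimated, which is why one image can suffice.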
Google hopes that generating labeled, useful images without modifying cells will open up entirely new types of experiments for biological and medical research. If you would like to try this technique in your own work, read the paper "In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images" or visit the GitHub page for the model code!