Deep learning-based virtual histology staining of unlabelled tissue

By Aydogan Ozcan and Yair Rivenson

Histopathology dates back to the 19th century and remains one of the gold-standard diagnostic methods in medicine. If a biopsy is needed following a medical examination, or during a surgical operation, a tissue sample is taken from the patient and sectioned into micrometer-thin layers. These sections contain microscopic information about the pathological state of the tissue; however, such thin tissue sections are transparent and do not present sufficient contrast under a standard light microscope. Histochemistry exploits the cellular and sub-cellular chemical environment of the specimen to bind special chromophores to specific tissue constituents, creating the color contrast under a visible-light microscope that forms the basis for expert diagnosticians and pathologists to identify abnormalities in tissue specimens.

The standard process of staining tissue samples in histopathology is both time-consuming and labor-intensive, and it requires a dedicated laboratory setting with chemical reagents and trained personnel, e.g., histotechnologists. Staining variability among laboratories and histotechnologists may lead to misdiagnoses and creates quality-control challenges. Furthermore, currently used staining methods do not preserve the tissue sample, which is a limitation since advanced molecular analysis of the same tissue section cannot be easily performed after the initial staining process.

Recognizing these bottlenecks, our research team focused on creating a machine learning framework to perform virtual staining of label-free tissue [1]. First, we wanted a robust and simple way to introduce contrast into microscopic images of label-free tissue sections. To do so, we chose tissue autofluorescence, which results from endogenous fluorophores naturally embedded within the specimen. We selected a near-UV excitation band since it effectively excites various tissue constituents and can be easily acquired using any standard fluorescence microscope.

In the training phase (a one-time effort), we used thousands of image patches comprising accurately aligned pairs of label-free tissue autofluorescence images and the corresponding bright-field images of the histologically stained versions of the same tissue samples. Following a multi-stage deep neural network training process based on the concept of generative adversarial networks (GANs), we developed a deep learning-based method (Figure 1) that takes a microscopic image of the naturally present fluorescent compounds in an unstained/label-free tissue section and transforms this autofluorescence image into the bright-field equivalent image of the same sample, as if it had been captured after the standard tissue staining process. Stated differently, we used deep learning to virtually stain label-free tissue samples (Figure 2), replacing the manual and laborious processing and staining steps that are normally performed by medical personnel, and saving labor, cost and time by substituting most of the tasks performed by a histotechnologist with a trained neural network [1].
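The exact network architecture and loss functions are detailed in ref. [1]; as a purely illustrative sketch of the GAN idea described above, the generator of a conditional GAN of this kind is typically trained with a combined objective: a pixel-wise term that pulls the virtually stained output toward the histologically stained target, plus an adversarial term that rewards outputs the discriminator scores as real. The function name and weights below are hypothetical placeholders, not those used in the paper:

```python
import numpy as np

def generator_loss(fake_stain, real_stain, disc_score, pix_weight=100.0):
    """Illustrative generator objective for paired image-to-image GAN training.

    fake_stain: virtually stained patch produced by the generator.
    real_stain: bright-field image of the histologically stained patch.
    disc_score: discriminator's output for fake_stain (1.0 means "looks real").
    pix_weight: hypothetical weighting between the two loss terms.
    """
    l1 = np.mean(np.abs(fake_stain - real_stain))   # pixel-wise fidelity term
    adv = np.mean((disc_score - 1.0) ** 2)          # least-squares adversarial term
    return pix_weight * l1 + adv

# Toy example: a perfect output with a fully fooled discriminator
patch = np.random.rand(64, 64, 3)
loss = generator_loss(patch, patch, disc_score=np.ones(1))  # both terms vanish, loss == 0.0
```

In practice, such a loss would be minimized with respect to the generator's parameters while the discriminator is trained in alternation to distinguish virtually stained patches from real ones.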

The success of this deep learning-powered virtual staining method was demonstrated for multiple tissue types (e.g., kidney, lung, liver, ovary, salivary gland, and thyroid) and three different stains, i.e., H&E, Masson’s trichrome and Jones’ silver stain. The efficacy of our virtual staining results was independently evaluated by a panel of board-certified pathologists who were blinded to the origin of the examined images, i.e., the pathologists did not know which images were stained by expert technicians and which were virtually stained by our neural network. This blinded study, directed by Dr. W. Dean Wallace of the UCLA Department of Pathology and Laboratory Medicine, revealed no clinically significant difference in staining quality or in the diagnoses resulting from the two sets of images [1].

When preparing the image data for this blind comparison, we also realized another important advantage of our virtual staining method: standardization of the staining process. A trained neural network eliminates the staining variability that is frequently observed among technicians and histopathology laboratories, which can cause misdiagnosis or misclassification of tissue specimens.

This deep learning-enabled virtual staining method will significantly reduce cost and sample-preparation time, while also saving expert labor. Since it only requires a standard fluorescence microscope and a simple computer (such as a laptop), it will be especially transformative for pathology needs in resource-limited settings and developing countries.

Looking forward, we envision that this AI-based virtual staining technology, after further development and validation through large-scale clinical studies, will create a paradigm shift in the field of histopathology. In addition to replacing the standard workflow in histopathology labs with a much simpler, faster and more cost-effective alternative, it will also enable capabilities that are nearly impossible to achieve with today’s standard methods, such as simultaneously virtually staining the same tissue section with multiple types of stains, while preserving the unstained sample for further molecular analysis if needed. Deep learning-based virtual staining also eliminates the need for a well-trained histotechnologist, addressing another bottleneck for running pathology services in resource-scarce countries, where the availability of expert medical personnel is often limited. This AI-based virtual staining technology could also be used in operating rooms to rapidly assess tumor margins, providing much-needed and critical guidance for surgeons during an operation.

Finally, we should note that multiple excitation and emission wavelengths, as well as other label-free imaging modalities such as quantitative phase microscopy or optical coherence tomography, can also be used to acquire the contrast needed to generate virtually stained images of unlabelled tissue samples, and we expect to see a plethora of new opportunities created by data-driven cross-modality image transformations [2-3].

[1] Y. Rivenson, et al., “Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning,” Nature Biomedical Engineering DOI: 10.1038/s41551-019-0362-y (2019)

[2] Y. Wu, et al., “Bright-field holography: Cross-modality deep learning enables snapshot 3D imaging with bright-field contrast using a single hologram,” Light: Science & Applications DOI: 10.1038/s41377-019-0139-9 (2019)

[3] H. Wang, et al., “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nature Methods DOI: 10.1038/s41592-018-0239-0 (2019)

Figure 1: An overview of deep learning-based virtual histology staining.

Figure 2: Examples of virtually stained tissue images.
