A Deep Learning-Based Approach for Breast BI-RADS Prediction on Shear Wave Elastography Images

authors:

Ali Shabanzadeh 1, *, Shakiba Moradi 2, Parvaneh (Masoumeh) Gity 3, Mostafa Ghelich Oghli 1

1 Intelligent Imaging Technology Research Center, Med Fanavarn Plus Co., Karaj, Iran
2 Sharif University of Technology, Tehran, Iran
3 Tehran University of Medical Sciences, Tehran, Iran

how to cite: Shabanzadeh A, Moradi S, Gity P, Ghelich Oghli M. A Deep Learning-Based Approach for Breast BI-RADS Prediction on Shear Wave Elastography Images. Iran J Radiol. 2019;16(Special Issue):e99141. https://doi.org/10.5812/iranjradiol.99141.

Abstract

Background:

Breast cancer is the most common type of cancer among women; about one in eight women is diagnosed with breast cancer during her lifetime. Malignant tissue is stiffer than normal and benign tissues, and this stiffness can be evaluated by elastography. The American College of Radiology (ACR) has published a quality assurance tool named the Breast Imaging-Reporting and Data System (BI-RADS) to standardize breast cancer reporting. Although it was originally designed for use with mammography, it now includes features for several imaging modalities. Among these, shear wave elastography (SWE) has shown promising results in breast lesion classification.

Objectives:

In this paper, we present the capability of convolutional neural networks to predict BI-RADS categories from SWE images.

Methods:

A comprehensive dataset of SWE images of breast tissue was acquired using Esaote MyLab 9 and SuperSonic Aixplorer systems. Two hundred images of breasts with different BI-RADS categories were gathered from the Cancer Institute, Imam Khomeini Medical Center (UICC). Some patients had multiple lesions; for each lesion, one or two images were acquired and stored in the DICOM format. Data augmentation with a factor of 10 was applied to the prepared dataset. The gold standard for evaluating the proposed algorithm was biopsy, which was performed on all examined lesions. A convolutional neural network was applied to the dataset to extract the visual features of the images. The architecture was based on DenseNet, modified for our purpose. We used the network in both transfer learning (pre-training) and end-to-end training strategies and compared the results. In the first strategy, the network was pre-trained on the ImageNet dataset to compensate for the limited size of our dataset; in the second, the augmented dataset allowed the network to be trained from scratch. The final classification layer was a softmax layer, which decided on the benignity or malignancy of the lesion. Training and testing for tumor classification were performed with five-fold cross-validation: the entire dataset was randomly divided into five equal-sized subsets such that all images acquired from the same patient were assigned to the same subset; four subsets were used for training and the remaining one for testing, and this process was repeated five times so that each subset served once as the test set.
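A minimal sketch of this setup is given below. It is not the authors' implementation: the backbone variant (DenseNet121), input size, optimizer, augmentation ranges, and training hyperparameters are illustrative assumptions, and the code targets current tf.keras rather than the TensorFlow r1.12 / Keras 2.2.4 environment reported later. It shows the ImageNet-pretrained DenseNet backbone with a new softmax head, x10-style augmentation, and patient-grouped five-fold cross-validation.

import numpy as np
from sklearn.model_selection import GroupKFold
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def build_model(input_shape=(224, 224, 3), n_classes=2, pretrained=True):
    # DenseNet backbone; weights="imagenet" gives the transfer learning variant,
    # weights=None corresponds to training from scratch.
    base = DenseNet121(include_top=False,
                       weights="imagenet" if pretrained else None,
                       input_shape=input_shape)
    x = GlobalAveragePooling2D()(base.output)
    out = Dense(n_classes, activation="softmax")(x)  # benign vs. malignant
    model = Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Augmentation of the SWE images (geometric/intensity jitter); the exact
# transforms and ranges used in the study are not specified in the abstract.
augmenter = ImageDataGenerator(rotation_range=15,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               zoom_range=0.1,
                               horizontal_flip=True)

def cross_validate(images, labels, patient_ids, epochs=30, batch_size=16):
    # images: float array (N, H, W, 3); labels: int array (0 = benign, 1 = malignant).
    # GroupKFold keeps all images from the same patient in the same fold.
    gkf = GroupKFold(n_splits=5)
    for fold, (tr, te) in enumerate(gkf.split(images, labels, groups=patient_ids)):
        model = build_model(pretrained=True)
        model.fit(augmenter.flow(images[tr], labels[tr], batch_size=batch_size),
                  epochs=epochs, verbose=0)
        loss, acc = model.evaluate(images[te], labels[te], verbose=0)
        print(f"fold {fold}: accuracy = {acc:.3f}")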

Results:

The processing hardware included 12 GB of RAM, a GPU with 2496 CUDA cores (Tesla K80), and an Intel Xeon CPU. The network was implemented in Python with TensorFlow r1.12 and Keras 2.2.4. The proposed method produced satisfactory results with both the transfer learning and end-to-end training approaches. We used several evaluation metrics, including precision, recall, F1-score, the ROC curve, and training time, for both strategies. The precision, recall, and F1-score were 0.93, 0.95, and 0.94 for the DenseNet architecture trained from scratch, and 0.97, 0.94, and 0.95 for the transfer learning approach (see Table 1). The ROC curve was plotted for both approaches and the areas under the curve (AUCs) were calculated: the transfer learning approach yielded an AUC of 0.98, whereas the fully trained approach yielded 0.94 (see Figure 1). Finally, the training time of the transfer learning approach was one-fifth that of training from scratch, as anticipated.
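The reported metrics can be computed from the per-fold predictions with scikit-learn; a minimal sketch follows. The names y_true and y_score are illustrative placeholders for the ground-truth labels and the predicted malignancy probabilities of one test fold.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             roc_auc_score, roc_curve)

def report(y_true, y_score, threshold=0.5):
    # y_true: 0/1 ground truth; y_score: predicted probability of malignancy.
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    y_pred = (y_score >= threshold).astype(int)
    print(f"precision = {precision_score(y_true, y_pred):.2f}")
    print(f"recall    = {recall_score(y_true, y_pred):.2f}")
    print(f"F1-score  = {f1_score(y_true, y_pred):.2f}")
    print(f"AUC       = {roc_auc_score(y_true, y_score):.2f}")
    # ROC curve for this fold/approach.
    fpr, tpr, _ = roc_curve(y_true, y_score)
    plt.plot(fpr, tpr, label="DenseNet (transfer learning)")
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()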

Conclusion:

The results showed the superiority of the transfer learning approach for tumor classification. Higher statistical metrics together with a lower training time make this approach better suited to SWE images.

To see the figure and table, please refer to the PDF file.
