Journal of Medical Physics
ORIGINAL ARTICLE
Year: 2020  |  Volume: 45  |  Issue: 2  |  Page: 98-106

Appraisal of deep-learning techniques on computer-aided lung cancer diagnosis with computed tomography screening


S Akila Agnes, J Anitha 
 Department of CSE, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India

Correspondence Address:
Dr. J Anitha
Department of CSE, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu
India

Abstract

Aims: Deep-learning methods are becoming versatile in medical image analysis. The manual examination of small nodules in computed tomography (CT) scans is a challenging and time-consuming task owing to the limitations of human vision. A standardized computer-aided diagnosis (CAD) framework is therefore required for rapid and accurate lung cancer diagnosis. The National Lung Screening Trial recommends routine screening with low-dose CT among high-risk patients to reduce the risk of dying from lung cancer through early detection. The development of a clinically acceptable CAD system for lung cancer diagnosis demands accurate models for segmenting the lung region and then identifying nodules with few false positives. Recently, deep-learning methods have been increasingly adopted in medical image diagnosis applications. Subjects and Methods: In this study, a deep-learning-based CAD framework for lung cancer diagnosis with chest CT images is built using a dilated SegNet and a convolutional neural network (CNN). The dilated SegNet model is employed to segment the lung from chest CT images, and a CNN model with batch normalization is developed to identify true nodules among all candidate nodules. Both models have been trained on sample cases taken from the LUNA16 dataset. The performance of the segmentation model is measured with the Dice coefficient, and the nodule classifier is evaluated with sensitivity. The discriminant ability of the features learned by the CNN classifier is further confirmed with principal component analysis. Results: Experimental results confirm that the dilated SegNet model segments the lung with an average Dice coefficient of 0.89 ± 0.23, and the customized CNN model yields a sensitivity of 94.8% in categorizing cancerous and noncancerous nodules. Conclusions: The proposed CNN models thus achieve efficient lung segmentation and two-dimensional nodule patch classification in a CAD system for lung cancer diagnosis with CT screening.



How to cite this article:
Agnes S A, Anitha J. Appraisal of deep-learning techniques on computer-aided lung cancer diagnosis with computed tomography screening. J Med Phys 2020;45:98-106


How to cite this URL:
Agnes S A, Anitha J. Appraisal of deep-learning techniques on computer-aided lung cancer diagnosis with computed tomography screening. J Med Phys [serial online] 2020 [cited 2020 Oct 28];45:98-106
Available from: https://www.jmp.org.in/text.asp?2020/45/2/98/290213





 Introduction



Lung cancer is the leading cause of cancer deaths worldwide.[1] People diagnosed with lung cancer at advanced stages have a very low survival rate, as advanced disease limits the effectiveness of treatment. Earlier detection of cancer improves survival and helps people live longer by enabling timely and appropriate treatment. In the United States, approximately $9.6 billion is spent on lung cancer treatment every year. This poses a significant financial burden on patients, even those who have health insurance. As newer technologies and treatments emerge, expenditures for cancer-preventive care may increase at a faster rate than overall medical expenditures.[2] These facts create a demand for cost-effective cancer control and prevention schemes such as computer-aided lung cancer screening programs. The National Lung Screening Trial confirms that low-dose computed tomography (LDCT) screening reduces lung cancer mortality.[3] The American College of Chest Physicians provides guidelines for the successful execution of lung cancer screening programs.[4] Lung cancer screening with LDCT is advised for adults aged 55-80 years who have about a 30 pack-year smoking history. Routine screening with CT imaging is suggested for high-risk patients to enable early cancer detection. However, extra attention is needed when repeating LDCT screening tests because repeated scans accumulate radiation exposure. Recent practice guidelines from the American College of Chest Physicians recommend longer intervals between CT scans.[5] The US Preventive Services Task Force has reported that the consequence of radiation exposure is insignificant compared with the reduction in cancer deaths.

Screening with LDCT helps to diagnose lung cancer, and if the cancer is diagnosed at an earlier stage, before spreading to other organs, people have a better chance of long-term survival. However, false-positive (FP) diagnoses may subject people to further radiation-based testing, which may harm their health. Hence, careful screening and accurate diagnosis are very important. Recently, the machine-learning community has developed computerized tools and learning models for computer-aided diagnosis (CAD) systems that demonstrate clinically acceptable performance. At present, the Food and Drug Administration has given premarket approval in two CAD application domains: breast cancer diagnosis with mammogram images and lung cancer diagnosis with chest radiographs.[6] Vapnik et al. have proposed an artificial intelligence-based system that learns hidden and essential information to improve CAD technology for lung cancer diagnosis.[7] In general, a CAD system for lung cancer diagnosis comprises two components: parenchyma segmentation and classification of candidate nodules. [Figure 1] shows the framework of a CAD system for lung cancer diagnosis with deep learning.{Figure 1}

Parenchyma segmentation is a preliminary procedure in any clinical diagnosis system intended to simplify the early diagnosis of lung diseases. This process extracts the lung parenchyma volume from the unprocessed CT scan by removing undesired parts such as image artifacts, heart, spinal cord, trachea, bronchi, bone, and muscle. Classifying normal and cancerous pulmonary nodules is an essential step in the cancer diagnosis process. Pulmonary nodules are small abnormalities in the lung region; they are not necessarily cancerous and can be caused by old infections or other conditions. On chest CT scans, a lung nodule appears as a small mass in the lung, varying in diameter from 3 mm to 3 cm. In general, malignant nodules have unusual shapes, irregular surfaces, and heterogeneous intensities. The detectability of cancerous nodules in the lung depends on the contrast between the nodule and the surrounding nonnodule tissue. Samples of true and false pulmonary nodule patches are illustrated in [Figure 2].{Figure 2}

Related work

Deep-learning techniques have produced excellent results in various computer vision problems. The reasons behind the success of deep learning are its automatic feature learning and the minimal domain expertise it requires. The approach learns a solution directly from the target problem through supervised learning, which attracts researchers toward deep-learning techniques for medical image analysis. The convolutional neural network (CNN) is the most popular neural network for spatial image (two-dimensional [2D] matrix) analysis.

Quite a few ConvNet architectures have been proposed for semantic segmentation; they learn spatial features from annotated datasets and produce a prediction map. Most segmentation CNN models have a symmetric architecture consisting of an encoder and an equivalent decoder. These networks demand high memory configurations and are difficult to train on entire volumetric medical images. Instead, such deep models are trained with 2D slices or small 3D crops to learn global features, accommodating memory limitations without compromising their capability. Nie et al. have proposed multiple fully convolutional networks to segment infant brain images by fusing feature data from multiple modalities.[8] U-Net is an improved CNN designed to segment medical images. It is widely employed for a range of medical image analysis tasks such as liver segmentation[9] and breast segmentation.[10] SegNet[11] is a CNN designed for semantic segmentation of outdoor scenes. The SegNet architecture reuses the max-pooling indices computed in its encoder within its decoder section; thus, it produces accurate results while consuming less memory during the training phase. Khagi and Kwon[12] have suggested that the encoder-decoder network of SegNet, with certain alterations, can be used for medical magnetic resonance imaging segmentation.

Existing lung parenchyma segmentation methods such as random walk, watershed segmentation, fuzzy logic, and graph search algorithms combine multiple procedures that consume more time and cannot produce a result in a single step. Recently, a few research works have been carried out specifically on lung segmentation with deep neural networks; these are presented in [Table 1]. Owing to limitations in computational speed and storage capacity, these networks have not been trained on entire 3D data.{Table 1}

Pulmonary nodule classification is a critical task in a CAD system for lung cancer. It is performed in two steps: candidate nodule detection and FP reduction. The candidate nodules are detected using thresholding, followed by a morphological opening operation. In the FP reduction phase, false nodules are identified and discarded using classification techniques. Recognizing suitable features for distinguishing nodules is challenging; hence, an automatic feature learning method is required to find more descriptive features from raw data.

Pulmonary nodule classification is done in two ways: a feature-based approach and a deep-learning-based approach. In the feature-based approach, radiological features such as nodule volume, position, appearance, and texture are extracted from the candidates, and a classifier is then built to determine the class of the nodule. Here, obtaining and choosing a significant subset of features for accurate lung nodule classification is a vital task. In the deep-learning-based approach, a model is designed to learn the essential features from the candidate nodules for accurate classification. During the last decade, numerous medical image classification tasks have employed deep-learning techniques. Deep learning, introduced by Hinton in 2006,[16] is motivated by the workings of the human neural system and is designed by mimicking the intercommunication of many neurons. [Table 2] presents an overview of recent works on pulmonary nodule classification using deep-learning techniques.{Table 2}

Pulmonary nodule classification is fundamentally a 3D image analysis problem, but most existing deep models have used 2D information to build CNN[28] or multiview 2D CNN[27],[29] classifier models. Considering only 2D data might omit essential information required for malignancy determination. Hussein et al. classify nodules based on features extracted by a 3D CNN model, fused with six more features advised by radiologists.[30] Identifying such high-level nodule attributes demands the knowledge of experienced radiologists. Zhu et al. have proposed a 3D deep model that classifies lung nodules using a gradient boosting machine with the features extracted for nodule classification.[17] Dou et al.[26] have proposed a hierarchical 3D CNN that extracts contextual features from candidate nodules at various hierarchical levels and filters the high-probability locations as true nodules. The unbalanced data distribution and scarcity problems of medical image datasets can be overcome by incorporating transfer learning while designing the classifier model.[24],[27]

Voxel-based machine learning (VML) is a supervised learning technique used to segment pulmonary nodules directly from the input image without selecting candidate nodules.[31] For accurate lung nodule segmentation, the classifier requires both local details about nodule appearance and global contextual details about nodule location. In the VML approach, the model is trained in a supervised manner directly on volumetric features retrieved from the voxel values of CT images. Tong et al.[32] have proposed a deep-learning-based pulmonary nodule segmentation algorithm that segments pulmonary nodules directly from the CT image using a modified U-Net architecture, with performance evaluated by the Dice coefficient.

A number of research works have been carried out on medical image analysis with deep learning, but only a few have contributed to developing an effective lung cancer diagnosis system. CAD systems for lung cancer still require improvement in detecting cancer cases without missing true pulmonary nodules. In this study, an enhanced SegNet model is proposed to segment the lung region, and a modified CNN model is implemented to categorize pulmonary lung nodules.

 Subjects and Methods



Dataset

Data acquisition is the preliminary step that obtains an input image for effective diagnosis. CT scanners pass radiation beams through the human body and produce detailed cross-sectional images. The CT imaging modality produces volume data in a Digital Imaging and Communications in Medicine (DICOM) directory, where the data are neatly packed with consecutive numbering. Medical images are commonly preserved in the standard DICOM format, which helps physicians access the images and diagnose disease. Normally, three-dimensional (3D) CT data are viewed in 2D planes such as axial, sagittal, and coronal, which provide an in-depth look for radiologists for effective diagnosis. Since 3D CT images are complex, with many anatomical structures, a 2D view is reasonable for easier human interpretation. CT scans enable physicians to detect lung nodules more accurately than chest X-ray scans.

The National Cancer Institute has established a collaborative effort known as the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI). The LIDC-IDRI collection provides thoracic CT images with marked lesions, which stimulates research progress on lung cancer diagnosis from CT images.[8] The LIDC is a collaborative effort of five academic institutions working collectively to build an image archive that supports worldwide research on CAD systems for lung nodule detection in CT scans. The LIDC-IDRI dataset contains nearly a thousand patient scans in DICOM files. Each file includes a series of stacked axial slices of the chest cavity. The number of 2D slices per patient depends on the scanner that acquired the scan. Commonly, the slice thickness in the axial direction is more than 2.5 mm. The identified lesions are categorized into three classes: nonnodules, nodules smaller than 3 mm, and nodules bigger than 3 mm. Every study in this collection includes thoracic CT scan images and a related eXtensible Markup Language file that specifies the coordinates of each nodule and its label. The nodule annotations were marked by four qualified radiologists.

Convolutional neural network

A CNN is a backpropagation neural network that works on multidimensional data. A standard CNN model has a stack of convolutional and pooling layers followed by a fully connected layer and a final softmax layer. Stacking the convolutional layers enables the model to explore the hidden features and patterns of the input image at hierarchical levels. The basic operation of the convolutional layer is convolution, which captures the spatial relationships among pixels. The hierarchical order of convolution filters extracts features directly from the raw input image at different levels. The convolutional filter is called a kernel, and the kernel weights are learned during model training. Pooling is a downsampling process that reduces the dimensionality of the feature map obtained from the previous convolutional layer without discarding important information. The fully connected layer consolidates the set of features obtained from multiple convolutional layers into a single feature vector. Finally, the softmax layer classifies the outputs using the softmax activation function.
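A minimal Keras sketch of this generic layer stack is given below; the layer counts and kernel sizes are illustrative assumptions, not the models proposed in this paper.

    from tensorflow.keras import layers, models

    # Generic CNN: convolution -> pooling -> fully connected -> softmax
    model = models.Sequential([
        layers.Conv2D(16, 3, activation='relu', input_shape=(64, 64, 1)),  # learns spatial features
        layers.MaxPooling2D(2),                 # downsamples the feature map
        layers.Conv2D(32, 3, activation='relu'),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),    # fully connected layer consolidates features
        layers.Dense(2, activation='softmax'),  # softmax layer outputs class probabilities
    ])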

Dilated SegNet

SegNet is a CNN consisting of symmetric encoder and decoder parts. The encoder comprises a sequence of convolutional and downsampling layers. The decoder has a sequence of deconvolutional and upsampling layers and ends with a softmax layer that performs pixel-wise classification. The SegNet model does not contain a fully connected layer; therefore, it is faster than other segmentation neural networks such as the fully convolutional network and DeconvNet. The proposed dilated SegNet model for lung segmentation is illustrated in [Figure 3]. The dilated SegNet model contains an improved encoder that produces fused convolved feature sets extracted at different dilation rates. The dilation rate specifies the gap between kernel elements, with the empty positions filled with zeros. A 3 × 3 kernel with a dilation rate of 2 has the wider field of view of a 5 × 5 kernel. Dilated convolution operations help the segmentation CNN model keep computation time low even when larger fields of view are used. The encoder part consists of two convolutional layers, each followed by a max-pooling layer. At each convolutional layer, 32 kernels of size 3 × 3 with a dilation rate of 1 and another 32 kernels of size 3 × 3 with a dilation rate of 2 are applied. The convolved features obtained by the dilated and nondilated convolutional layers are fused and forwarded to the next pooling layer. A max-pooling operation with a 2 × 2 window (nonoverlapping) and a stride of two is applied at each pooling layer to downsample the feature set by skipping redundant details.{Figure 3}
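A minimal sketch of this fused dilated encoder block, using the Keras functional API, is given below; the 'same' padding choice and the outlined decoder are assumptions where the text does not specify them.

    from tensorflow.keras import layers, Input, Model

    inp = Input(shape=(512, 512, 1))
    # Parallel 3x3 convolutions at dilation rates 1 and 2, fused by concatenation
    # and downsampled by nonoverlapping 2x2 max-pooling, as described above.
    c1 = layers.Conv2D(32, 3, dilation_rate=1, padding='same', activation='relu')(inp)
    c2 = layers.Conv2D(32, 3, dilation_rate=2, padding='same', activation='relu')(inp)
    fused = layers.Concatenate()([c1, c2])
    pooled = layers.MaxPooling2D(pool_size=2, strides=2)(fused)
    # A second encoder block, then a decoder with upsampling layers and a final
    # pixel-wise softmax, would complete the dilated SegNet of [Figure 3].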

The convolution operation convolves two matrices, an input image I and a kernel filter H, and produces the convolved matrix C using Eq. 1, where * denotes the convolution operation.

C[x, y] = (I * H)[x, y] (Eq. 1)

Convolution is the process of summing each pixel I[i, j] of the image with its neighbors, weighted by the kernel filter H[x − i, y − j]:

C[x, y] = Σi Σj I[i, j] · H[x − i, y − j] (Eq. 2)

Dilated convolution requires an additional parameter called the dilation rate, which describes the gap between pixels. It enlarges the receptive field by introducing gaps between the cells of a kernel. [Figure 4] illustrates an example of the convolution operation on 6 × 6 input data with a 3 × 3 filter at different dilation rates.{Figure 4}
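To make the dilation concrete, the short NumPy sketch below inserts zeros between kernel elements; it is an illustration of the operation described above, not code from the paper.

    import numpy as np

    def dilate_kernel(kernel, rate):
        """Insert (rate - 1) zeros between kernel elements to widen the field of view."""
        kh, kw = kernel.shape
        out = np.zeros(((kh - 1) * rate + 1, (kw - 1) * rate + 1), dtype=kernel.dtype)
        out[::rate, ::rate] = kernel
        return out

    k = np.ones((3, 3))
    print(dilate_kernel(k, 2).shape)  # (5, 5): a 3x3 kernel at rate 2 covers a 5x5 area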

The decoder part contains deconvolutional layers, each followed by an upsampling layer. The upsampling operation helps the network recover the original image dimensions. In the proposed model, the hidden convolutional layers use the ReLU activation function, ReLU(x) = max(0, x), and the output layer uses the softmax activation function. The softmax function is a squashing function that confines the output vector to the range of 0-1. It takes an N-dimensional input vector of real values and produces another N-dimensional vector with values in the range (0, 1) using Eq. 3.

Softmax(X): (x1, x2, …, xn) → S: (s1, s2, …, sn), where

si = exp(xi) / Σj exp(xj) (Eq. 3)
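A direct NumPy rendering of Eq. 3 follows; subtracting the maximum is a standard numerical-stability detail assumed here, not stated in the paper.

    import numpy as np

    def softmax(x):
        """Map an N-dimensional real vector to probabilities in (0, 1) that sum to 1 (Eq. 3)."""
        e = np.exp(x - np.max(x))  # subtracting the max improves numerical stability
        return e / e.sum()

    print(softmax(np.array([2.0, 1.0, 0.1])))  # e.g., [0.659, 0.242, 0.099]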

The objective function of the dilated SegNet aims to minimize the Dice loss that is calculated simply as:

DiceLoss = 1 − DiceCoeff (Eq. 4)

The Dice coefficient is computed using Eq. 5, where R is the segmented region mask and G is the ground truth mask.

DiceCoeff = 2 |R ∩ G| / (|R| + |G|) (Eq. 5)
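A minimal TensorFlow sketch of Eqs. 4 and 5 is given below; the smoothing constant guarding against empty masks is a common assumption, not specified in the paper.

    import tensorflow as tf

    def dice_coeff(y_true, y_pred, smooth=1.0):
        """Dice coefficient between ground truth mask G and segmented mask R (Eq. 5)."""
        t = tf.reshape(y_true, [-1])
        p = tf.reshape(y_pred, [-1])
        intersection = tf.reduce_sum(t * p)
        # smooth avoids division by zero on empty masks (an assumption, not in the paper)
        return (2.0 * intersection + smooth) / (tf.reduce_sum(t) + tf.reduce_sum(p) + smooth)

    def dice_loss(y_true, y_pred):
        """Dice loss minimized by the dilated SegNet (Eq. 4)."""
        return 1.0 - dice_coeff(y_true, y_pred)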

Convolutional neural network with batch normalization

The CNN is a backpropagation neural network comprising a series of convolutional and pooling layers, followed by a final classification layer. A 2D CNN model with batch normalization (BN) is developed to classify nodule patches of size 64 × 64. The learning process should not dilute the discriminant features between true and false nodule patches; hence, a BN layer is attached after every convolutional layer to standardize the data throughout the network. The BN technique helps avoid network overfitting and improves the stability of the network. The BN layer is placed before the activation layer and normalizes its input by applying a linear scale and shift to the mini-batch.[33] During training, a BN layer calculates the batch mean μbatch and variance σ²batch of the layer input X: (x1, x2, …, xm).

μbatch = (1/m) Σi xi (Eq. 6)

σ²batch = (1/m) Σi (xi − μbatch)² (Eq. 7)

The layer inputs are then normalized using the calculated batch mean and variance, and the output Y: (y1, y2, …, ym) is obtained by scaling and shifting the normalized inputs x̂i with the learned parameters γ and β:

x̂i = (xi − μbatch) / √(σ²batch + ε) (Eq. 8)

yi = γ x̂i + β (Eq. 9)
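The training-time computation in Eqs. 6-9 can be sketched in NumPy as follows; inference-time running statistics are omitted for brevity.

    import numpy as np

    def batch_norm_train(x, gamma, beta, eps=1e-5):
        """Training-time batch normalization of a mini-batch x with shape (m, features)."""
        mu = x.mean(axis=0)                    # batch mean (Eq. 6)
        var = x.var(axis=0)                    # batch variance (Eq. 7)
        x_hat = (x - mu) / np.sqrt(var + eps)  # normalize; eps avoids division by zero (Eq. 8)
        return gamma * x_hat + beta            # scale and shift with learned gamma, beta (Eq. 9)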

The 2D CNN framework for lung nodule patch classification is shown in [Figure 5]. The CNN model consists of convolutional layers, each followed by BN and max-pooling layers. Finally, a softmax layer classifies the nodule patches using the features extracted by the network. The convolutional layers use an increasing number of kernels (16, 32, and 64) of size 3 × 3 with the ReLU activation function. ReLU is a nonlinear activation function that does not suppress the backpropagated gradient and helps the network converge faster. Deep neural networks work efficiently on normalized data, allowing the network to converge steadily without oscillations. The BN layer controls the magnitude and mean of the activations independently of all other layers. The max-pooling layer helps eliminate redundant details by choosing the maximum value within each 2 × 2 block. The softmax classifier performs classification by fitting the class boundaries using gradient descent optimization and maps the output vector to a categorical probability vector.{Figure 5}
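A hedged Keras sketch of this classifier follows; the 'same' padding and the 256-unit dense layer (matching the 256 features analyzed later with principal component analysis) are assumptions where the text is silent.

    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv2D(16, 3, padding='same', input_shape=(64, 64, 1)),
        layers.BatchNormalization(),            # BN placed before the activation, as described
        layers.Activation('relu'),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, padding='same'),
        layers.BatchNormalization(),
        layers.Activation('relu'),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, padding='same'),
        layers.BatchNormalization(),
        layers.Activation('relu'),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(256, activation='relu'),   # 256 learned features (used later for PCA)
        layers.Dense(2, activation='softmax'),  # true-nodule vs. false-nodule probabilities
    ])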

 Discussion



Lung segmentation with dilated SegNet

The dilated SegNet is a modified SegNet model for separating the lung region from chest CT images. The model has been trained on a subset of 1000 2D axial images obtained from the LIDC dataset. The center axial slice is obtained from each volumetric CT scan and rescaled to 512 × 512 resolution. The network weights are initialized randomly and updated during training by backpropagation. The first-order gradient optimizer Adam is used for tuning the model. The Dice coefficient loss is used as the cost function, and a fixed learning rate of 1 × 10^-3 is set for all iterations. The segmentation result of the dilated SegNet is compared with the results obtained by fuzzy C-means (FCM) clustering and SegNet models. [Figure 6] shows the segmentation results of all models. The output images confirm that the proposed dilated SegNet model segments the lung region more accurately than the FCM and SegNet algorithms. From the results, it is apparent that the intensities of the lung and the surrounding scanner region are similar, but the lung region has a clear boundary and different texture details compared with its surroundings. This helps the CNN learn abstract-level features from the raw image and segment the lung region accurately.{Figure 6}
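Under the stated settings, a training sketch might look as follows; segnet is a hypothetical handle for the model of [Figure 3], dice_loss and dice_coeff are the functions sketched earlier, and the batch size and epoch count are assumptions.

    from tensorflow.keras.optimizers import Adam

    # x_train: (N, 512, 512, 1) axial slices; y_train: binary lung masks (hypothetical names).
    segnet.compile(optimizer=Adam(learning_rate=1e-3), loss=dice_loss, metrics=[dice_coeff])
    segnet.fit(x_train, y_train, batch_size=4, epochs=100)  # batch size is an assumption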

The performance of the segmentation algorithms is quantitatively evaluated by the Dice coefficient, which calculates the spatial overlap between the segmentation results and the ground truth. In the Dice coefficient measure, a value of one indicates perfect spatial intersection between the ground truth and the segmented result, and a value of zero represents no spatial overlap. The performance of the dilated SegNet model in terms of the Dice coefficient during network training is shown in [Figure 7]. The performance graph shows that the model converged after 90 epochs. During the training phase, the dilated SegNet model attains a maximum Dice coefficient of 0.9745 with a Dice loss of 0.0255.{Figure 7}

The models have been tested with 50 images, and the average Dice coefficient and accuracy of the various lung segmentation methods are shown in [Table 3]. The quantitative performance analysis shows that the dilated SegNet improves on both Dice coefficient and accuracy compared with the FCM and SegNet models. Dilated convolution increases the receptive area without increasing the computational load and helps the network learn global features. The proposed dilated SegNet model learns both local and global features through the different receptive areas provided by the dilated convolution operation. The features obtained by the dilated convolutional layers are combined, and the fused feature set is used for segmenting the lung region. The experimental results confirm that the incorporation of global features enhances the performance of SegNet.{Table 3}

Patch-level nodule classification with convolutional neural network + batch normalization

Based on the annotations given by the radiologists, the nodule patches are extracted from the LUNA16 dataset. 2D nodule patches in the axial view with dimensions of 64 × 64 are sliced from the CT images. The number of true nodules is much lower than that of false nodules; this imbalanced dataset biases the classifier toward the majority class. Data augmentation balances the dataset by augmenting the minority-class samples and also helps prevent the CNN model from overfitting. Augmentation techniques such as rotation, horizontal flipping, and vertical flipping are adopted to augment the nodule patches, as sketched below. A random selection of 5000 patches from each category is used to build the model. The CNN + BN model is trained from scratch, and the weights are adjusted at a learning rate of 0.001. The model is trained with the Adamax optimizer and a weight decay of 1e-5 for 100 epochs. Classification accuracy is used as the metric for tuning the network.
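A minimal sketch of the stated augmentations and training settings follows; the cross-entropy loss and the names cnn_bn, x_patches, and y_labels are assumptions, and the weight_decay argument requires a recent Keras optimizer.

    import numpy as np
    from tensorflow.keras.optimizers import Adamax

    def augment_patch(patch):
        """Return rotated and flipped variants of a 2D nodule patch (minority-class augmentation)."""
        return [np.rot90(patch, 1),   # 90-degree rotation
                np.rot90(patch, 2),   # 180-degree rotation
                np.fliplr(patch),     # horizontal flip
                np.flipud(patch)]     # vertical flip

    # Hypothetical training call with the reported settings (Adamax, lr 0.001, decay 1e-5).
    cnn_bn.compile(optimizer=Adamax(learning_rate=0.001, weight_decay=1e-5),
                   loss='categorical_crossentropy', metrics=['accuracy'])
    cnn_bn.fit(x_patches, y_labels, epochs=100)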

The performance of the CNN + BN model is qualitatively examined by visualizing the learned features. [Figure 8] shows the abstract feature maps learned at various convolutional layers; the essential features activated by the convolutional layers during the feedforward operation are highlighted. [Figure 9] shows the correctly classified nodule patches from the test dataset. This result confirms that the CNN model with BN can classify true nodules that are small and structurally complex with a probability higher than 0.60.{Figure 8}{Figure 9}

The performance of the pulmonary nodule classification model is quantitatively assessed with accuracy and sensitivity. In medical image classification, true positive (TP) denotes the correct classification of positive units, and true negative (TN) denotes the correct classification of negative units. FP indicates the incorrect classification of negative units, and false negative (FN) refers to the incorrect classification of positive units. Sensitivity (or recall) is the ratio of TPs to TPs plus FNs. Higher accuracy and sensitivity indicate better classification performance. Confusion matrix results for the CNN model with and without BN are shown in [Figure 10]. These results show that the CNN + BN model has fewer FPs than the CNN model without BN and maintains a high sensitivity of 94.8%.{Figure 10}
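These definitions reduce to a few lines; the sketch below computes sensitivity and accuracy directly from confusion-matrix counts.

    def sensitivity_and_accuracy(tp, fp, tn, fn):
        """Sensitivity (recall) and accuracy from confusion-matrix counts."""
        sensitivity = tp / (tp + fn)                # fraction of true nodules correctly detected
        accuracy = (tp + tn) / (tp + fp + tn + fn)  # fraction of all patches correctly classified
        return sensitivity, accuracy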

[Table 4] presents the performance of the CNN model with BN compared with the CNN model without BN in nodule classification. A BN layer is injected after every convolutional layer in the CNN model to regulate the features derived by that layer. This feature normalization ensures that the model retains the required discriminant features across multiple iterations. The CNN + BN achieves an accuracy of 93.8%, which is higher than that of the CNN model without BN. The test results confirm that BN improves the efficiency of the CNN model by discovering generalized features for classifying true and false nodules.{Table 4}

Both CNN models achieved good results in classifying the 2D nodule patches. Further, the impact of BN in the CNN is examined using the principal component analysis (PCA) method. Although the CNN model without BN classifies the nodule patches with satisfactory accuracy, the discriminative capability of the features it learns is poor compared with the CNN with a BN layer.

[Figure 11] illustrates the discriminant ability of the features learned by the CNN models with and without BN. The first two principal components obtained from the 256 features learned by the CNN models are visualized with a scatter plot to analyze their discriminative ability. From this plot, it is observed that the CNN + BN model learns generalized features that help discriminate true nodules from false nodules precisely.{Figure 11}
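A scikit-learn sketch of this analysis follows; the random arrays are stand-ins for the 256-dimensional penultimate-layer activations and their nodule labels, which in practice come from the trained CNN + BN model.

    import numpy as np
    from sklearn.decomposition import PCA
    import matplotlib.pyplot as plt

    # Hypothetical stand-ins for the learned features and patch labels.
    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 256))      # (n_patches, 256) learned features
    labels = rng.integers(0, 2, size=200)       # 0 = false nodule, 1 = true nodule

    proj = PCA(n_components=2).fit_transform(features)  # first two principal components
    plt.scatter(proj[:, 0], proj[:, 1], c=labels, s=8, cmap='coolwarm')
    plt.xlabel('Principal component 1')
    plt.ylabel('Principal component 2')
    plt.show()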

 Conclusions



A reliable CAD system is required to avoid unnecessarily repeated CT scans, and enhancing CAD for lung cancer is among the most pressing needs in the current clinical landscape. In this study, deep-learning models, namely a dilated SegNet for lung segmentation and a CNN model with a BN layer for 2D nodule patch classification, have been implemented for lung cancer detection. The results demonstrate satisfactory performance in lung cancer diagnosis. The dilated SegNet achieves an improved Dice coefficient of 0.89 ± 0.23 compared with the FCM and SegNet models. Furthermore, the CNN with a BN layer extracts features with high discriminant ability and classifies the nodule patches with a sensitivity of 94.8%. The visual results confirm that the CNN model with BN classifies true nodules that are small and structurally complex with satisfactory probability values. However, certain aspects still require attention in the development of CAD tools for lung cancer detection, such as the inclusion of 3D data in lung parenchyma segmentation and nodule detection. Better utilization of 3D data along with deep-learning techniques may boost the performance of current CAD systems.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

References

1American Lung Association. Lung Cancer Fact Sheet. Available from: https://www.lung.org/lung-health-diseases/lung-disease-lookup/lung-cancer/resource-library/lung-cancer-fact-sheet. [Last accessed on 2014 Aug 12].
2National Cancer Institute, NIH, DHHS, Bethesda, MD. Cancer Trends Progress Report. Available from: https://progressreport.cancer.gov/. [Last accessed on 2017 Jan].
3Nasim F, Sabath BF, Eapen GA. Lung Cancer. Med Clin North Am 2019;103:463-73.
4Wiener RS, Gould MK, Arenberg DA, Au DH, Fennig K, Lamb CR, et al. An official American Thoracic Society/American College of Chest Physicians policy statement: Implementation of low-dose computed tomography lung cancer screening programs in clinical practice. Am J Respir Crit Care Med 2015;192:881-91.
5Detterbeck FC, Mazzone PJ, Naidich DP, Bach PB. Screening for lung cancer: Diagnosis and management of lung cancer, 3rd ed: American College of Chest Physicians evidence-based clinical practice guidelines. Chest 2013;143:e78S-92S.
6Rao RB, Bi J, Fung G, Salganicoff M, Obuchowski N, Naidich D. LungCAD: A clinically approved, machine learning system for lung cancer detection. In: Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2007. p. 1033-7.
7Vapnik V, Vashist A, Pavlovitch N. Learning using hidden information (learning with teacher). In: 2009 International Joint Conference on Neural Networks; 2009. p. 3188-95.
8Nie D, Wang L, Adeli E, Lao C, Lin W, Shen D. 3-D fully convolutional networks for multimodal isointense infant brain image segmentation. IEEE Trans Cybern 2019;49:1123-36.
9Christ PF, Elshaer MEA, Ettlinger F, Tatavarty S, Bickel M, Bilic P, et al. Automatic liver and lesion segmentation in CT using cascaded fully convolutional neural networks and 3D conditional random fields. In: International Conference on Medical Image Computing and Computer-Assisted Intervention; 2016. p. 415-23.
10Dalmış MU, Gubern-Mérida A, Vreemann S, Karssemeijer N, Mann R, Platel B. A computer-aided diagnosis system for breast DCE-MRI at high spatiotemporal resolution. Med Phys 2016;43:84-94.
11Badrinarayanan V, Kendall A, Cipolla R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell 2017;39:2481-95.
12Khagi B, Kwon GR. Pixel-label-based segmentation of cross-sectional brain MRI using simplified SegNet architecture-based CNN. J Healthc Eng 2018;2018:3640705.
13Harrison AP, Xu Z, George K, Lu L, Summers RM, Mollura DJ. Progressive and multi-path holistically nested neural networks for pathological lung segmentation from CT images. In: International Conference on Medical Image Computing and Computer-Assisted Intervention; 2017. p. 621-9.
14Agnes SA, Anitha J, Peter JD. Automatic lung segmentation in low-dose chest CT scans using convolutional deep and wide network (CDWN). Neural Comput Appl 2018:1-11. https://doi.org/10.1007/s00521-018-3877-3.
15Skourt BA, El Hassani A, Majda A. Lung CT image segmentation using deep neural networks. Procedia Comput Sci 2018;127:109-13.
16Hinton GE, Salakhutdinov RR. Reducing the dimensionality of data with neural networks. Science 2006;313:504-7.
17Zhu W, Liu C, Fan W, Xie X. DeepLung: 3D deep convolutional nets for automated pulmonary nodule detection and classification. arXiv preprint arXiv:1709.05538; 2017.
18Eun H, Kim D, Jung C, Kim C. Single-view 2D CNNs with fully automatic non-nodule categorization for false positive reduction in pulmonary nodule detection. Comput Methods Programs Biomed 2018;165:215-24.
19Hamidian S, Sahiner B, Petrick N, Pezeshk A. 3D convolutional neural network for automatic detection of lung nodules in chest CT. Proc SPIE 2017;10134:1013409.
20Fu L, Ma J, Ren Y, Han YS, Zhao J. Automatic detection of lung nodules: False positive reduction using convolution neural networks and handcrafted features. Med Imaging 2017;10134:101340A.
21Li W, Cao P, Zhao D, Wang J. Pulmonary nodule classification with deep convolutional neural networks on computed tomography images. Comput Math Methods Med 2016;Article ID 6215085. https://doi.org/10.1155/2016/6215085.
22Setio AA, Ciompi F, Litjens G, Gerke P, Jacobs C, van Riel SJ, et al. Pulmonary Nodule Detection in CT Images: False Positive Reduction Using Multi-View Convolutional Networks. IEEE Trans Med Imaging 2016;35:1160-9.
23Ypsilantis PP, Montana G. Recurrent convolutional networks for pulmonary nodule detection in CT imaging. arXiv preprint arXiv:1609.09143; 2016.
24Ciompi F, de Hoop B, van Riel SJ, Chung K, Scholten ET, Oudkerk M, et al. Automatic classification of pulmonary peri-fissural nodules in computed tomography using an ensemble of 2D views and a convolutional neural network out-of-the-box. Med Image Anal 2015;26:195-202.
25van Ginneken B, Setio AA, Jacobs C, Ciompi F. Off-the-shelf convolutional neural network features for pulmonary nodule detection in computed tomography scans. In: Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on; 2015. p. 286-9.
26Dou Q, Chen H, Yu L, Qin J, Heng PA. Multilevel contextual 3-D CNNs for false positive reduction in pulmonary nodule detection. IEEE Trans Biomed Eng 2017;64:1558-67.
27Nibali A, He Z, Wollersheim D. Pulmonary nodule classification with deep residual networks. Int J Comput Assist Radiol Surg 2017;12:1799-808.
28Shen W, Zhou M, Yang F, Yang C, Tian J. Multi-scale convolutional neural networks for lung nodule classification. Inf Process Med Imaging 2015;24:588-99.
29Liu X, Hou F, Qin H, Hao A. Multi-view multi-scale CNNs for lung nodule type classification from CT images. Pattern Recognit 2018;77:262-75.
30Hussein S, Cao K, Song Q, Bagci U. Risk stratification of lung nodules using 3D CNN-based multi-task learning. In: International Conference on Information Processing in Medical Imaging; 2017. p. 249-60.
31Gerard SE, Patton TJ, Christensen GE, Bayouth JE, Reinhardt JM. FissureNet: A Deep learning approach for pulmonary fissure detection in CT images. IEEE Trans Med Imaging 2019;38:156-66.
32Tong G, Li Y, Chen H, Zhang Q, Jiang H. Improved U-NET network for pulmonary nodules segmentation. Optik (Stuttg) 2018;174:460-9.
33Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167; 2015.