
…ed on large, heterogeneous, and partially incomplete datasets. Sparsified training may boost the performance of a smaller model based on public and site-specific data. Supplemental material is available for this article. Published under a CC BY 4.0 license.

To develop a deep learning approach to bone age assessment based on a training set of developmentally normal pediatric hand radiographs and to compare this approach with automated and manual bone age assessment methods based on Greulich and Pyle (GP).

In this retrospective study, a convolutional neural network (trauma hand radiograph-trained deep learning bone age assessment method [TDL-BAAM]) was trained on 15 129 frontal view pediatric trauma hand radiographs obtained between December 14, 2009, and May 31, 2017, from Children's Hospital of New York, to predict chronological age. A total of 214 trauma hand radiographs from Hasbro Children's Hospital were used as an independent test set. The test set was rated by the TDL-BAAM model as well as a GP-based deep learning model (GPDL-BAAM) and two pediatric radiologists (radiologists 1 and 2) using the GP method. All ratings were compared with chronological age using mean absolute error (MAE), and standard concordance analyses were performed.
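The headline comparison above is the mean absolute error between each rater's predicted age and chronological age. A minimal sketch (the ages below are hypothetical, not values from the study):

```python
import numpy as np

def mean_absolute_error(predicted_ages, chronological_ages):
    """Mean absolute error (MAE) between predicted and chronological ages."""
    predicted = np.asarray(predicted_ages, dtype=float)
    actual = np.asarray(chronological_ages, dtype=float)
    return float(np.mean(np.abs(predicted - actual)))

# Hypothetical bone age ratings (in years) for five radiographs.
model_ratings = [10.5, 7.0, 13.2, 4.8, 9.1]
chronological = [10.0, 8.0, 13.0, 5.0, 9.5]
print(mean_absolute_error(model_ratings, chronological))
```

The same function applies unchanged to the TDL-BAAM, GPDL-BAAM, and radiologist ratings, since each is compared against the same chronological-age reference.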

A deep learning model trained on pediatric trauma hand radiographs is on par with automated and manual GP-based methods for bone age assessment and provides a foundation for developing population-specific deep learning algorithms for bone age assessment in modern pediatric populations. Supplemental material is available for this article. © RSNA, 2020. See also the commentary by Halabi in this issue.

To quantitatively evaluate the generalizability of a deep learning segmentation tool to MRI data from scanners of different MRI manufacturers and to improve the cross-manufacturer performance by using a manufacturer-adaptation strategy.

This retrospective study included 150 cine MRI datasets from three MRI manufacturers, acquired between 2017 and 2018 (n = 50 each for manufacturers 1, 2, and 3). Three convolutional neural networks (CNNs) were trained to segment the left ventricle (LV), each using datasets exclusively from a single manufacturer. A generative adversarial network (GAN) was trained to adapt the input image before segmentation. The LV segmentation performance, end-diastolic volume (EDV), end-systolic volume (ESV), LV mass, and LV ejection fraction (LVEF) were evaluated before and after manufacturer adaptation. Paired Wilcoxon signed rank tests were performed.
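A paired Wilcoxon signed rank test on per-case metrics can be sketched as below (the Dice scores are hypothetical, and SciPy is assumed to be available):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-case Dice scores for one segmentation CNN before and
# after manufacturer adaptation, paired by test case.
dice_before = np.array([0.71, 0.68, 0.75, 0.62, 0.70, 0.66, 0.73, 0.69])
dice_after = np.array([0.88, 0.85, 0.90, 0.83, 0.87, 0.86, 0.89, 0.84])

# The paired test operates on the per-case differences; a small P value
# indicates a systematic shift in performance.
statistic, p_value = wilcoxon(dice_before, dice_after)
print(f"W = {statistic}, P = {p_value:.4f}")
```

Because every case improved after adaptation in this toy example, the signed-rank statistic is 0 and the test rejects the null hypothesis of no difference.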

The segmentation CNNs exhibited a significant performance drop when applied to datasets from different manufacturers. A CNN trained on datasets from a single MRI manufacturer may not generalize well to datasets from other manufacturers. The proposed manufacturer adaptation can largely improve the generalizability of a deep learning segmentation tool without additional annotation. Supplemental material is available for this article. © RSNA, 2020.

To implement and test a deep learning approach for the segmentation of the arterial and venous cerebral vasculature with four-dimensional (4D) CT angiography.

Patients who had undergone 4D CT angiography for the suspicion of acute ischemic stroke were retrospectively identified. A total of 390 patients evaluated in 2014 (n = 113) or 2018 (n = 277) were included in this study, with each patient having undergone one 4D CT angiographic scan. One hundred patients from 2014 were randomly selected, and the arteries and veins on their CT scans were manually annotated by five experienced observers. The weighted temporal average and weighted temporal variance from 4D CT angiography were used as input for a three-dimensional Dense-U-Net. The network was trained with the fully annotated cerebral vessel artery-vein maps from 60 patients. Forty patients were used for quantitative evaluation. The relative absolute volume difference and the Dice similarity coefficient are reported. The neural network segmentations from 277 patients who underwent scanning in 2018 were qualitatively evaluated by an experienced neuroradiologist using a five-point scale.
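Collapsing the 4D series into a weighted temporal average and variance can be sketched as below; the specific weighting scheme here is an assumption of this sketch, not the authors' exact formulation:

```python
import numpy as np

def weighted_temporal_stats(volumes_4d, weights):
    """Collapse a 4D CT angiography series of shape (time, z, y, x) into a
    weighted temporal average volume and a weighted temporal variance volume."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    # Weighted mean over the time axis.
    avg = np.tensordot(w, volumes_4d, axes=(0, 0))
    # Weighted mean of squared deviations over the time axis.
    var = np.tensordot(w, (volumes_4d - avg) ** 2, axes=(0, 0))
    return avg, var

# Hypothetical toy series: 5 time points of a 4 x 4 x 4 volume, with
# weights emphasizing the middle (peak-contrast) frames.
series = np.random.rand(5, 4, 4, 4)
avg, var = weighted_temporal_stats(series, weights=[1, 2, 4, 2, 1])
print(avg.shape, var.shape)  # each (4, 4, 4)
```

The two resulting 3D volumes then serve as a two-channel input to the segmentation network.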

The average time for processing arterial and venous cerebral vasculature with the network was less than 90 seconds. The mean Dice similarity coefficient in the test set was 0.80 ± 0.04 (standard deviation) for the arteries and 0.88 ± 0.03 for the veins. The mean relative absolute volume difference was 7.3% ± 5.7 for the arteries and 8.5% ± 4.8 for the veins. Most of the segmentations (n = 273, 99.3%) were rated as very good to perfect.
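The two quantitative metrics reported above can be computed directly from binary masks; a minimal sketch with hypothetical toy masks:

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND ref| / (|pred| + |ref|)."""
    pred, ref = np.asarray(pred, dtype=bool), np.asarray(ref, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def relative_absolute_volume_difference(pred, ref):
    """Relative absolute volume difference, as a percentage of the
    reference volume."""
    pred, ref = np.asarray(pred, dtype=bool), np.asarray(ref, dtype=bool)
    return 100.0 * abs(int(pred.sum()) - int(ref.sum())) / ref.sum()

# Hypothetical toy masks: a 6x6 reference square, a 5x6 prediction.
reference = np.zeros((10, 10), dtype=bool)
reference[2:8, 2:8] = True   # 36 voxels
prediction = np.zeros((10, 10), dtype=bool)
prediction[3:8, 2:8] = True  # 30 voxels, fully inside the reference
print(dice_coefficient(prediction, reference))               # 60/66 ≈ 0.909
print(relative_absolute_volume_difference(prediction, reference))  # ≈ 16.7
```

For a real 4D CT angiographic segmentation, the same functions apply per class (artery, vein) over the 3D voxel grid.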

The proposed convolutional neural network enables accurate artery and vein segmentation with 4D CT angiography with a processing time of less than 90 seconds.© RSNA, 2020.


To develop a fully automated algorithm for spleen segmentation and to assess the performance of this algorithm in a large dataset.

In this retrospective study, a three-dimensional deep learning network was developed to segment the spleen on thorax-abdomen CT scans. Scans were extracted from patients undergoing oncologic treatment from 2014 to 2017. A total of 1100 scans from 1100 patients were used in this study, and 400 were selected for development of the algorithm. For testing, a dataset of 50 scans was annotated to assess the segmentation accuracy and was compared against the splenic index equation. In a qualitative observer experiment, an enriched set of 100 scan-pairs was used to evaluate whether the algorithm could aid a radiologist in assessing splenic volume change. The reference standard was set by the consensus of two other independent radiologists. A Mann-Whitney U test was conducted to test whether there was a performance difference between the algorithm and the independent observer.

The algorithm and the independent observer obtained comparable Dice scores on the test set of 50 scans (0.962 and 0.964, respectively; P = .834). The radiologist had an agreement with the reference standard in 81% (81 of 100) of the cases after a visual classification of volume change, which increased to 92% (92 of 100) when aided by the algorithm.
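The Mann-Whitney test compares the two unpaired samples of per-scan Dice scores; a sketch with hypothetical values (SciPy is assumed to be available):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-scan Dice scores for the algorithm and the
# independent observer (a subset of a 50-scan test set).
algorithm = np.array([0.96, 0.97, 0.95, 0.96, 0.98, 0.94, 0.97, 0.96])
observer = np.array([0.97, 0.96, 0.96, 0.95, 0.97, 0.95, 0.98, 0.96])

# A large P value means no detectable performance difference between
# the two raters, as in the study.
u_statistic, p_value = mannwhitneyu(algorithm, observer, alternative="two-sided")
print(f"U = {u_statistic}, P = {p_value:.3f}")
```

Note the contrast with the paired Wilcoxon signed rank test: Mann-Whitney treats the two score sets as independent samples rather than matched pairs.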

A segmentation method based on deep learning can accurately segment the spleen on CT scans and may help radiologists to detect abnormal splenic volumes and splenic volume changes.

© RSNA, 2020.


To develop a deep learning algorithm for the automatic assessment of the extent of systemic sclerosis (SSc)-related interstitial lung disease (ILD) on chest CT images.

This retrospective study included 208 patients with SSc (median age, 57 years; 167 women) evaluated between January 2009 and October 2017. A multicomponent deep neural network (AtlasNet) was trained on 6888 fully annotated CT images (80% for training and 20% for validation) from 17 patients with no, mild, or severe lung disease. The model was tested on a dataset of 400 images from another 20 patients, independently partially annotated by three radiologist readers. The ILD contours from the three readers and the deep learning neural network were compared by using the Dice similarity coefficient (DSC). The correlation between disease extent obtained from the deep learning algorithm and that obtained by using pulmonary function tests (PFTs) was then evaluated in the remaining 171 patients and in an external validation dataset of 31 patients.


The developed algorithm performed similarly to radiologists for disease-extent contouring, which correlated with pulmonary function, to assess CT images from patients with SSc-related ILD. Supplemental material is available for this article. © RSNA, 2020.

A simple classifier is developed in TensorFlow (version 2), and the use of TensorBoard to monitor training progress, recognize overfitting, and display other useful information, such as images in the training set and the confusion matrix, is demonstrated.

This dataset is composed of annotations of the five hemorrhage subtypes (subarachnoid, intraventricular, subdural, epidural, and intraparenchymal hemorrhage) typically encountered at brain CT.

To develop and characterize an algorithm that mimics human expert visual assessment to quantitatively determine the quality of three-dimensional (3D) whole-heart MR images.

In this study, 3D whole-heart cardiac MRI scans from 424 participants (average age, 57 years ± 18 [standard deviation]; 66.5% men) were used to generate an image quality assessment algorithm. A deep convolutional neural network for image quality assessment (IQ-DCNN) was designed, trained, optimized, and cross-validated on a clinical database of 324 scans (training set). On a separate test set (100 scans), two hypotheses were tested: (a) that the algorithm can assess image quality in concordance with human expert assessment, as assessed by human-machine correlation and intra- and interobserver agreement, and (b) that the IQ-DCNN algorithm may be used to monitor a compressed sensing reconstruction process in which image quality progressively improves. Weighted κ values, agreement and disagreement counts, and Krippendorff α reliability coefficients were reported. Supplemental material is available for this article. © RSNA, 2020.

Past technology transition successes and failures have demonstrated the importance of user-centered design and the science of human factors; these approaches will be critical to the success of artificial intelligence in radiology.
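Weighted κ agreement between ordinal quality grades, as used for the human-machine comparison above, can be sketched as follows (the grades are hypothetical, and scikit-learn is assumed to be available):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical 5-point image quality grades for 12 scans, as rated by
# the human expert and the IQ-DCNN.
expert = [4, 3, 5, 2, 4, 4, 3, 5, 1, 2, 4, 3]
network = [4, 3, 4, 2, 4, 3, 3, 5, 2, 2, 4, 4]

# Quadratically weighted kappa penalizes large grade disagreements
# more heavily than near-misses, which suits an ordinal 5-point scale.
kappa = cohen_kappa_score(expert, network, weights="quadratic")
print(f"weighted kappa = {kappa:.3f}")
```

The same statistic applied to two readings by the same expert gives the intraobserver agreement, and between two experts the interobserver agreement.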

To assess the contribution of a generative adversarial network (GAN) to improve intermanufacturer reproducibility of radiomic features (RFs).

The authors retrospectively developed a cycle-GAN to translate texture information from chest radiographs acquired using one manufacturer (Siemens) to chest radiographs acquired using another (Philips), producing fake chest radiographs with different textures. The authors prospectively evaluated the ability of this texture-translation cycle-GAN to reduce the intermanufacturer variability of RFs extracted from the lung parenchyma. This study assessed the cycle-GAN's ability to fool several machine learning (ML) classifiers tasked with recognizing the manufacturer on the basis of chest radiography inputs. The authors also evaluated the cycle-GAN's ability to mislead radiologists who were asked to perform the same recognition task. Finally, the authors tested whether the cycle-GAN had an impact on radiomic diagnostic accuracy for chest radiography in patients with congestive heart failure (CHF).

Both ML classifiers and radiologists had difficulty recognizing the chest radiographs' manufacturer. The cycle-GAN improved RF intermanufacturer reproducibility and discriminative power for identifying patients with CHF. This deep learning approach may help counteract the sensitivity of RFs to differences in acquisition. Supplemental material is available for this article. © RSNA, 2020. See also the commentary by Alderson in this issue.

Article authors: Rushkamp2381 (Jansen Roy)