Qvistmcdonald5625

From Iurium Wiki

… identify the patients who need more aggressive treatment in clinical practice, pending further validation with larger prospective cohorts. Supplemental material is available for this article. © RSNA, 2020.

To evaluate the benefits of an artificial intelligence (AI)-based tool for two-dimensional mammography in the breast cancer detection process.

In this multireader, multicase retrospective study, 14 radiologists assessed a dataset of 240 digital mammography images, acquired between 2013 and 2016, using a counterbalanced design in which half of the dataset was read without AI and the other half with the help of AI during a first session, and vice versa during a second session separated from the first by a washout period. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were assessed as endpoints.
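For illustration, a minimal sketch of such a counterbalanced (crossover) case assignment, written in Python with hypothetical case identifiers and a hypothetical per-reader seed (the study's actual randomization procedure is not described here):

 import random
 
 N_CASES = 240
 cases = list(range(N_CASES))
 
 def make_assignments(reader_seed):
     # Randomly split the cases into two halves for this reader.
     rng = random.Random(reader_seed)
     shuffled = cases[:]
     rng.shuffle(shuffled)
     half_a, half_b = shuffled[:N_CASES // 2], shuffled[N_CASES // 2:]
     # Session 1: one half read without AI, the other half with AI support.
     session1 = {c: "without_AI" for c in half_a}
     session1.update({c: "with_AI" for c in half_b})
     # Session 2 (after a washout period): the conditions are reversed.
     session2 = {c: "with_AI" if cond == "without_AI" else "without_AI"
                 for c, cond in session1.items()}
     return session1, session2
 
 session1, session2 = make_assignments(reader_seed=0)
 # Every case ends up read exactly once in each condition.
 assert all(session1[c] != session2[c] for c in cases)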

The average AUC across readers was 0.769 (95% CI 0.724, 0.814) without AI and 0.797 (95% CI 0.754, 0.840) with AI. The average difference in AUC was 0.028 (95% CI 0.002, 0.055; P = .035). Average sensitivity increased by 0.033 with AI support (P = .021). Reading time varied with the score produced by the AI tool: for cases with a low likelihood of malignancy (< 2.5%), reading time was about the same in the first reading session and slightly shorter in the second, whereas for cases with a higher likelihood of malignancy, reading time was on average longer with the use of AI.
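A minimal sketch of how per-reader AUCs and the reader-averaged difference could be computed with scikit-learn, using hypothetical ground-truth labels and reader scores; the study's formal multireader, multicase analysis (confidence intervals and P values) is more involved than this:

 import numpy as np
 from sklearn.metrics import roc_auc_score
 
 rng = np.random.default_rng(0)
 n_readers, n_cases = 14, 240
 truth = rng.integers(0, 2, size=n_cases)              # hypothetical ground truth (1 = cancer)
 scores_without_ai = rng.random((n_readers, n_cases))  # hypothetical reader scores per condition
 scores_with_ai = rng.random((n_readers, n_cases))
 
 auc_without = np.array([roc_auc_score(truth, s) for s in scores_without_ai])
 auc_with = np.array([roc_auc_score(truth, s) for s in scores_with_ai])
 
 print("mean AUC without AI:", auc_without.mean())
 print("mean AUC with AI:", auc_with.mean())
 print("mean paired AUC difference:", (auc_with - auc_without).mean())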

This clinical investigation demonstrated that the concurrent use of this AI tool improved the diagnostic performance of radiologists in the detection of breast cancer without prolonging their workflow.

Supplemental material is available for this article. © RSNA, 2020.

To evaluate publicly available de-identification tools on a large corpus of narrative-text radiology reports.

In this retrospective study, 21 categories of protected health information (PHI) were annotated in 2503 radiology reports from a large multihospital academic health system, collected between January 1, 2012, and January 8, 2019. A subset of 1023 reports served as a test set; the remainder were used as domain-specific training data. The types and frequencies of PHI present within the reports were tallied. Five public de-identification tools were evaluated: the MITRE Identification Scrubber Toolkit, U.S. National Library of Medicine-Scrubber, Massachusetts Institute of Technology de-identification software, Emory Health Information DE-identification (HIDE) software, and Neuro named-entity recognition (NeuroNER). The tools were compared using metrics including recall, precision, and F1 score (the harmonic mean of recall and precision) for each category of PHI.
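A minimal sketch of the per-category recall, precision, and F1 computation, assuming both the reference annotations and a tool's output are represented as (report_id, start, end, category) spans and that matching is exact; the study's actual matching criteria may differ:

 from collections import defaultdict
 
 def score_by_category(gold, predicted):
     # Recall, precision, and F1 per PHI category with exact span matching.
     gold_by_cat, pred_by_cat = defaultdict(set), defaultdict(set)
     for span in gold:
         gold_by_cat[span[3]].add(span)
     for span in predicted:
         pred_by_cat[span[3]].add(span)
     results = {}
     for cat in set(gold_by_cat) | set(pred_by_cat):
         g, p = gold_by_cat[cat], pred_by_cat[cat]
         tp = len(g & p)
         recall = tp / len(g) if g else 0.0
         precision = tp / len(p) if p else 0.0
         f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
         results[cat] = {"recall": recall, "precision": precision, "f1": f1}
     return results
 
 # Hypothetical example: one DATE span found, one NAME span missed.
 gold = {("r1", 10, 20, "DATE"), ("r1", 30, 40, "NAME")}
 pred = {("r1", 10, 20, "DATE")}
 print(score_by_category(gold, pred))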

The annotators identified 3528 spans of PHI across the reports. The evaluated tools achieved limited performance on this corpus of radiology reports, suggesting the need for further advancements in public datasets and trained models. Supplemental material is available for this article. See also the commentary by Tenenholtz and Wood in this issue. © RSNA, 2020.

To automatically detect lymph nodes involved in lymphoma on fluorine 18 (¹⁸F) fluorodeoxyglucose (FDG) PET/CT images using convolutional neural networks (CNNs).

In this retrospective study, baseline disease of 90 patients with lymphoma was segmented on ¹⁸F-FDG PET/CT images (acquired between 2005 and 2011) by a nuclear medicine physician. An ensemble of three-dimensional patch-based, multiresolution pathway CNNs was trained using fivefold cross-validation. Performance was assessed using the true-positive rate (TPR) and number of false-positive (FP) findings. CNN performance was compared with agreement between physicians by comparing the annotations of a second nuclear medicine physician to the first reader in 20 of the patients. Patient TPR was compared using Wilcoxon signed rank tests.
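A minimal sketch of the per-patient true-positive rate calculation and the Wilcoxon signed rank comparison, using SciPy and hypothetical detection counts rather than the study's data:

 import numpy as np
 from scipy.stats import wilcoxon
 
 def per_patient_tpr(detected_true_nodes, total_true_nodes):
     # Fraction of annotated nodes that were detected, per patient.
     return np.asarray(detected_true_nodes) / np.asarray(total_true_nodes)
 
 # Hypothetical counts for 20 patients: annotated nodes, nodes detected by the
 # CNN ensemble, and nodes detected by a second reader.
 rng = np.random.default_rng(1)
 total = rng.integers(1, 50, size=20)
 cnn_detected = (total * rng.uniform(0.7, 1.0, size=20)).astype(int)
 reader_detected = (total * rng.uniform(0.8, 1.0, size=20)).astype(int)
 
 tpr_cnn = per_patient_tpr(cnn_detected, total)
 tpr_reader = per_patient_tpr(reader_detected, total)
 
 stat, p_value = wilcoxon(tpr_cnn, tpr_reader)
 print(f"Wilcoxon statistic = {stat:.1f}, P = {p_value:.3f}")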

Across all 90 patients, a range of 0-61 nodes per patient was detected. At an average of four FP findings per patient, the method achieved a TPR of 85% (923 of 1087 nodes). Performance varied widely across patients (TPR range, 33%-100%; FP range, 0-21 findings). In the 20 patients labeled by both physicians, a range of 1-49 nodes per patient was detected and labeled. The second reader identified 96% (210 of 219) of the nodes, with an additional 3.7 findings per patient compared with the first reader. In the same 20 patients, the CNN achieved a 90% (197 of 219) TPR at 3.7 FP findings per patient.

An ensemble of three-dimensional CNNs detected lymph nodes at a performance nearly comparable to differences between two physicians' annotations. This preliminary study is a first step toward automated PET/CT assessment for lymphoma. © RSNA, 2020.


To develop and validate a deep learning (DL) algorithm to identify poor-quality lateral airway radiographs.

A total of 1200 lateral airway radiographs obtained in emergency department patients between January 1, 2000, and July 1, 2019, were retrospectively queried from the picture archiving and communication system. Two radiologists classified each radiograph as adequate or inadequate. Disagreements were adjudicated by a third radiologist. The radiographs were used to train and test the DL classifiers. Three technologists and three different radiologists classified the images in the test dataset, and their performance was compared with that of the DL classifiers.
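A minimal sketch of fine-tuning a ResNet-50 (the best-performing classifier in the results below) as a binary adequate/inadequate classifier with PyTorch and torchvision, assuming the radiographs are organized in ImageFolder directories (e.g., data/train/adequate and data/train/inadequate); the paths, preprocessing, and hyperparameters are illustrative and not those reported in the study:

 import torch
 import torch.nn as nn
 from torch.utils.data import DataLoader
 from torchvision import datasets, models, transforms
 
 transform = transforms.Compose([
     transforms.Grayscale(num_output_channels=3),  # radiographs are single channel
     transforms.Resize((224, 224)),
     transforms.ToTensor(),
 ])
 train_ds = datasets.ImageFolder("data/train", transform=transform)  # hypothetical path
 train_loader = DataLoader(train_ds, batch_size=16, shuffle=True)
 
 model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
 model.fc = nn.Linear(model.fc.in_features, 2)  # adequate vs. inadequate
 
 optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
 criterion = nn.CrossEntropyLoss()
 
 model.train()
 for epoch in range(5):
     for images, labels in train_loader:
         optimizer.zero_grad()
         loss = criterion(model(images), labels)
         loss.backward()
         optimizer.step()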

The training set had 961 radiographs and the test set had 239. The best DL classifier (ResNet-50) achieved sensitivity, specificity, and area under the receiver operating characteristic curve of 0.90 (95% confidence interval [CI] 0.86, 0.94), 0.82 (95% CI 0.76, 0.90), and 0.86 (95% CI 0.81, 0.91), respectively. Interrater agreement for technologists was fair (Fleiss κ, 0.…).
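A minimal sketch of computing the Fleiss κ agreement statistic for three raters on a binary adequate/inadequate task with statsmodels, using hypothetical ratings:

 import numpy as np
 from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa
 
 # One row per radiograph, one column per rater; 1 = adequate, 0 = inadequate.
 ratings = np.array([
     [1, 1, 1],
     [1, 0, 1],
     [0, 0, 0],
     [1, 1, 0],
     [0, 1, 0],
 ])
 table, _ = aggregate_raters(ratings)  # per-item counts of each category
 print("Fleiss kappa:", fleiss_kappa(table, method="fleiss"))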

Article authors: Qvistmcdonald5625 (Andersen Olsson)