Kramerhead2326

Segmentation of COVID-19 infection in the lung tissue and its quantification in individual lobes is pivotal to understanding the disease's effect. It helps to determine disease progression and gauge the extent of medical support required. Automating this process is challenging due to the lack of a standardized dataset with voxel-wise annotations of the lung field, lobes, and infections such as ground-glass opacity (GGO) and consolidation. However, multiple datasets are available that each contain one or more of the required annotation classes. Typical deep learning-based solutions overcome such challenges by training neural networks under adversarial and multi-task constraints. We propose to train a convolutional neural network that solves the task while learning from multiple data sources, each of which is annotated for only a few classes. We experimentally verified our approach by training the model on three publicly available datasets and evaluating its ability to segment the lung field, lobes, and COVID-19-infected regions. Additionally, eight scans that previously had annotations only for infection and the lung field have been annotated for lobes. Our model quantifies infection per lobe in these scans with an average error of 4.5%.

Assessing the upper airway (UA) of obstructive sleep apnea patients using drug-induced sleep endoscopy (DISE) before potential surgery is standard practice in clinics to determine the location of UA collapse. According to the VOTE classification system, UA collapse can occur at the velum (V), oropharynx (O), tongue (T), and/or epiglottis (E). Analyzing DISE videos is not trivial due to anatomical variation, simultaneous UA collapse at several locations, and video distortion caused by mucus or saliva. The first step towards automated analysis of DISE videos is to determine which UA region the endoscope is in at any time throughout the video: V (velum) or OTE (oropharynx, tongue, or epiglottis). An additional class, denoted X, is introduced for times when the video is distorted to an extent where it is impossible to determine the region. This paper is a proof of concept for classifying UA regions using 24 annotated DISE videos. We propose a convolutional recurrent neural network using a ResNet18 architecture combined with a two-layer bidirectional long short-term memory (LSTM) network. Classification was performed on 5-second video sequences at a time. The network achieved an overall accuracy of 82% and an F1-score of 79% on the three-class problem, showing potential for recognizing regions across patients despite anatomical variation. The results indicate that large-scale training on videos could further be used to predict the location(s), type(s), and degree(s) of UA collapse, eventually enabling automatic diagnoses to be derived from DISE videos.
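A minimal PyTorch-style sketch of how such a frame-encoder-plus-recurrence classifier could be wired is given below. The frame count per 5-second clip, the hidden size, and the input resolution are illustrative assumptions, not values reported in the work.

import torch
import torch.nn as nn
import torchvision.models as models

class DiseCRNN(nn.Module):
    """ResNet18 frame encoder followed by a two-layer bidirectional LSTM;
    classifies a short clip as V, OTE, or X (three classes)."""
    def __init__(self, num_classes=3, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # pretrained weights are a training choice
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # 512-d per frame
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                            num_layers=2, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        b, t, c, h, w = clips.shape
        feats = self.encoder(clips.view(b * t, c, h, w)).view(b, t, 512)
        out, _ = self.lstm(feats)                  # (B, T, 2 * hidden_size)
        return self.fc(out[:, -1])                 # logits from the last time step

# A 5 s clip subsampled to 16 frames (the sampling rate is an assumption).
logits = DiseCRNN()(torch.randn(2, 16, 3, 224, 224))  # -> shape (2, 3)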
A novel method for measuring the output impedance of current sources in an electrical impedance tomography (EIT) system is implemented and tested. The paper shows that the proposed method can be used during operation, while the load is attached to the EIT system. The results also show that system performance improves when the shunt impedance values obtained with the proposed technique, rather than those acquired through open-circuit measurements, are used to set the adaptive sources.

We present a framework for identifying subspaces in the brain that are associated with changes in biological and cognitive indicators of a given disorder. By employing a method called active subspace learning (ASL) on structural MRI features from an Alzheimer's disease dataset, we identify subsets of regions that form co-varying subspaces in association with biological age and mini-mental state exam (MMSE) scores. Features generated by projecting structural MRI components onto these subspaces performed as well on regression tasks as non-transformed features and PCA-based transformations. Thus, without compromising predictive performance, we present a way to extract sparse subspaces in the brain that are associated with a particular disorder, inferred only from the neuroimaging data together with relevant biological and cognitive test measures. Clinical relevance: This work provides a way to identify active structural subspaces in the brain, i.e., subsets of brain regions that collectively change the most in association with changes in the indicators of a given disorder.

Ultrasound imaging is commonly used for diagnosing breast cancer, since it is non-invasive and inexpensive. Breast ultrasound (BUS) image classification is still a challenging task due to poor image quality and the lack of public datasets. In this paper, we propose novel neutrosophic Gaussian mixture models (NGMMs) to classify BUS images more accurately. Specifically, we first employ a deep neural network (DNN) to extract features from BUS images and apply principal component analysis to condense the extracted features. We then adopt neutrosophic logic to compute three probability functions estimating the truth, indeterminacy, and falsity of an image, and design a new likelihood function using these neutrosophic components. Finally, we propose an improved expectation maximization (EM) algorithm that incorporates neutrosophic logic to reduce the weights of images with high indeterminacy and falsity when estimating the parameters of each NGMM, so that these images are better fit by the Gaussian distributions. We compare the performance of the proposed NGMMs, two peer GMMs, and three DNN-based methods in terms of six metrics on a new dataset that combines two public datasets. Our experimental results show that NGMMs achieve the highest classification results on all metrics.

An automatic deep learning semantic segmentation (ADLS) method using the DeepLab-v3+ technique is proposed for full and accurate whole-heart epicardial adipose tissue (EAT) segmentation from non-contrast cardiac CT scans. The ADLS algorithm was trained on manually segmented scans of the enclosed region of the pericardium (sac), which contains the internal heart tissues where the EAT is located. A level of 40 Hounsfield units (HU) and a window of 350 HU were applied to every axial slice for contrast enhancement. Each slice was combined with two additional consecutive slices to form the three-channel input image of the deep network. As a post-processing step, the detected output mask region was thresholded to [-190, -30] HU to extract the EAT region, and a median filter with a 3 mm kernel was applied to remove noise. Using 70 CT scans (50 for training, 20 for testing), the ADLS showed excellent results compared to manual segmentation (ground truth): the total average Dice score was 89.31% ± 1.96, with a high correlation (R = 97.15%, p < 0.001), while the average error of EAT volume was 0.79 ± 9.21. Clinical relevance: EAT volume aids in predicting atherosclerosis development and is linked to major adverse cardiac events; however, accurate manual segmentation is tedious work and requires skilled expertise.
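The pre- and post-processing chain described above is concrete enough to sketch in NumPy/SciPy. The version below assumes the two extra slices are the immediate neighbours of the current slice and that the 3 mm median kernel corresponds to a size-3 filter at the scan's voxel spacing; neither detail is stated explicitly in the text.

import numpy as np
from scipy import ndimage

def window_slice(hu_slice, level=40.0, width=350.0):
    """Apply the 40/350 HU level/window and rescale to [0, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0   # [-135, 215] HU
    return (np.clip(hu_slice, lo, hi) - lo) / (hi - lo)

def three_channel_input(volume, i):
    """Stack slice i with its two neighbouring slices as a 3-channel image."""
    idx = np.clip([i - 1, i, i + 1], 0, volume.shape[0] - 1)
    return np.stack([window_slice(volume[j]) for j in idx], axis=0)

def refine_eat_mask(pred_mask, hu_slice):
    """Keep fat-density voxels ([-190, -30] HU) inside the predicted
    pericardial region, then median-filter (size 3) to remove noise."""
    fat = (hu_slice >= -190) & (hu_slice <= -30)
    return ndimage.median_filter((pred_mask & fat).astype(np.uint8), size=3)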
Individuals with obesity have larger amounts of visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) in their bodies, increasing the risk of cardiometabolic diseases. The reference standard for quantifying SAT and VAT uses manual annotation of magnetic resonance images (MRI), which requires expert knowledge and is time-consuming. Although there have been studies investigating deep learning-based methods for automated SAT and VAT segmentation, the performance for VAT remains suboptimal (Dice scores of 0.43 to 0.89). Previous work has had two key limitations: it did not fully consider the multi-contrast information from MRI or the 3D anatomical context, both of which are critical for addressing the complex, spatially varying structure of VAT. An additional challenge is the imbalance between the number and distribution of pixels representing SAT and VAT. This work proposes a network based on the 3D U-Net that uses the full field-of-view volumetric T1-weighted, water, and fat images from dual-echo Dixon MRI as multi-channel input to automatically segment SAT and VAT in adults with overweight or obesity. In addition, this work extends the 3D U-Net to a new attention-based competitive dense 3D U-Net (ACD 3D U-Net) trained with a class frequency-balancing Dice loss (FBDL). On an initial testing dataset, the proposed 3D U-Net and ACD 3D U-Net with FBDL achieved 3D Dice scores (mean ± standard deviation) of 0.99 ± 0.01 and 0.99 ± 0.01 for SAT, and 0.95 ± 0.04 and 0.96 ± 0.04 for VAT, respectively, compared to manual annotations. The proposed 3D networks had rapid inference times (less than 60 ms/slice) and can enable automated segmentation of SAT and VAT. Clinical relevance: This work developed 3D neural networks to automatically, accurately, and rapidly segment visceral and subcutaneous adipose tissue on MRI, which can help characterize the risk of cardiometabolic diseases such as diabetes, elevated glucose levels, and hypertension.
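The abstract does not give the exact form of the frequency-balancing Dice loss. One established formulation with this behaviour is the generalized Dice loss of Sudre et al., which weights each class by the inverse square of its voxel count so that a small class such as VAT is not swamped by the background; the sketch below assumes that form.

import torch

def frequency_balanced_dice_loss(probs, target_onehot, eps=1e-6):
    """Dice loss with per-class weights inversely proportional to the
    squared class frequency. probs and target_onehot: (B, C, D, H, W)."""
    dims = (0, 2, 3, 4)                       # sum over batch and spatial axes
    freq = target_onehot.sum(dim=dims)        # voxel count per class
    w = 1.0 / (freq * freq + eps)             # inverse squared frequency
    inter = (probs * target_onehot).sum(dim=dims)
    union = probs.sum(dim=dims) + freq
    dice = (2.0 * (w * inter).sum() + eps) / ((w * union).sum() + eps)
    return 1.0 - dice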
In this study, we introduce a method that performs independent vector analysis (IVA) fusion to estimate linked independent sources and apply it to a large multimodal dataset of over 3,000 subjects in the UK Biobank study, including structural (gray matter), diffusion (fractional anisotropy), and functional (amplitude of low-frequency fluctuations) magnetic resonance imaging data from each subject. The approach reveals a number of linked sources showing significant and meaningful covariation with subject phenotypes. One such mode shows a significant linear association with age across all three modalities: robust age-associated reductions in gray matter density were observed in thalamic, caudate, and insular regions, as well as visual and cingulate regions, with covarying reductions of fractional anisotropy in the periventricular region and reductions in the amplitude of low-frequency fluctuations in visual and parietal regions. Another mode identified multimodal patterns that differentiated subjects by their time-to-recall in a prospective memory test. In sum, the proposed IVA-based approach provides a flexible, interpretable, and powerful way to reveal links between multimodal neuroimaging data.

Melanoma classification plays an important role in skin lesion diagnosis. Nevertheless, it is a challenging task due to the appearance variation of skin lesions and the interference of noise from dermoscopic imaging. In this paper, we propose a multi-level attentive skin lesion learning (MASLL) network to enhance melanoma classification. Specifically, we design a local learning branch with a skin lesion localization (SLL) module to help the network learn lesion features from the region of interest. In addition, we propose a weighted feature integration (WFI) module to fuse the lesion information from the global and local branches, which further enhances the discriminative capability of the skin lesion features. Experimental results on the ISIC 2017 dataset show the effectiveness of the proposed method for melanoma classification.
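The abstract does not spell out how the WFI module weights the two branches. One plausible reading is a learned, channel-wise gate over the concatenated global and local feature vectors; the sketch below is hypothetical, and the module name is reused only for illustration.

import torch
import torch.nn as nn

class WeightedFeatureIntegration(nn.Module):
    """Fuse global and lesion-local features with a learned sigmoid gate."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, f_global, f_local):
        a = self.gate(torch.cat([f_global, f_local], dim=1))  # per-channel weights
        return a * f_global + (1.0 - a) * f_local

# e.g. fusing 512-d global and local descriptors for a batch of 4 images
fused = WeightedFeatureIntegration(512)(torch.randn(4, 512), torch.randn(4, 512))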
Time-of-flight (TOF) magnetic resonance angiography is a non-invasive imaging modality for the diagnosis of intracranial atherosclerotic disease (ICAD). Evaluating the degree of stenosis and the ability of the posterior and anterior communicating arteries to supply enough blood flow to the distal arteries is critical, and it requires accurate delineation of the arteries. Recently, deep learning methods have become firmly established as robust tools in medical image segmentation, which has resulted in multiple customized algorithms. For instance, BRAVE-NET, a context-based successor of U-Net, has shown promising results in MRA cerebrovascular segmentation. Another widely used context-based 3D CNN, DeepMedic, has been shown to outperform U-Net in cerebrovascular segmentation of 3D digital subtraction angiography. In this study, we aim to train and compare these two state-of-the-art deep learning networks, BRAVE-NET and DeepMedic, for automated and reliable brain vessel segmentation from TOF-MRA images in ICAD patients.

Article authors: Kramerhead2326 (Vognsen Fitzpatrick)