Lehmannjohansson3147

From Iurium Wiki

Alzheimer's disease (AD) lies at the severe end of the dementia spectrum and impairs the cognitive abilities of individuals, bringing economic, societal and psychological burdens that extend beyond those affected. A promising approach in AD research is the analysis of structural and functional brain connectomes, i.e. sNETs and fNETs, respectively. We propose to use a tensor representation (B-tensor) of uni-modal and multi-modal brain connectomes to define a low-dimensional space via tensor factorization. We show, on a cohort of 47 subjects spanning the dementia spectrum, that diagnosis with an accuracy of 77% to 100% is achievable in a 5D connectome space using different structural and functional connectome constructions in a uni-modal and multi-modal fashion. We further show that multi-modal tensor factorization improves the results, suggesting complementary information in structure and function. A neurological assessment of the identified connectivity patterns largely agrees with prior knowledge, yet also suggests new associations that may play a role in disease progression.

Pancreas identification and segmentation is an essential task in the diagnosis and prognosis of pancreatic disease. Although deep neural networks have been widely applied to abdominal organ segmentation, it remains challenging for small organs such as the pancreas, which presents low contrast, a highly flexible anatomical structure and a relatively small region. In recent years, coarse-to-fine methods have improved pancreas segmentation accuracy by using coarse predictions in the fine stage, but only the object location is utilized and rich image context is neglected. In this paper, we propose a novel distance-based saliency-aware model, namely DSD-ASPP-Net, to make full use of the coarse segmentation to highlight pancreas features and boost accuracy in the fine segmentation stage. Specifically, a DenseASPP (Dense Atrous Spatial Pyramid Pooling) model is trained to learn the pancreas location and probability map, which is then transformed into a saliency map through a geodesic distance-based saliency transformation. In the fine stage, saliency-aware modules that combine the saliency map and image context are introduced into DenseASPP to form DSD-ASPP-Net. The DenseASPP architecture brings multi-scale feature representation and achieves a larger receptive field in a denser way, which overcomes the difficulties posed by variable object sizes and locations. Our method was evaluated on both the public NIH pancreas dataset and a local hospital dataset, and achieved an average Dice-Sørensen coefficient (DSC) of 85.49 ± 4.77% on the NIH dataset, outperforming previous coarse-to-fine methods.
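The tensor factorization in the connectome abstract above can be pictured with a small sketch: subject-wise connectivity matrices are stacked into a 3-way tensor (region × region × subject) and decomposed by a rank-5 CP factorization, whose subject-mode factor plays the role of the low-dimensional connectome space. This is a minimal numpy sketch under those assumptions; the 90-region parcellation, random data and function names are illustrative, and the paper's actual B-tensor construction is not reproduced here.

```python
# Minimal CP decomposition by alternating least squares (ALS).
# Assumed, illustrative setup: 90 brain regions, 47 subjects, rank 5.
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode (C-ordered columns)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', A, B).reshape(A.shape[0] * B.shape[0], -1)

def cp_als(T, rank=5, n_iter=100, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor via ALS."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C  # C holds a 5-D embedding of each subject

# Toy example: 47 subjects with random 90 x 90 "connectomes".
connectomes = np.random.rand(90, 90, 47)
A, B, C = cp_als(connectomes, rank=5)
print(C.shape)  # (47, 5): per-subject coordinates that could feed a classifier
```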
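As one way to picture the distance-based saliency transformation described in the pancreas abstract above, the sketch below thresholds a coarse probability map and converts each voxel's distance to the coarse pancreas prediction into a saliency weight. A Euclidean distance transform from scipy stands in for the geodesic (image-aware) distance used in the paper, and the threshold and decay constant are illustrative assumptions.

```python
# Hedged sketch of a distance-based saliency transformation.
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_saliency(prob_map, threshold=0.5, sigma=10.0):
    """Turn a coarse probability map into a [0, 1] saliency map."""
    coarse_mask = prob_map > threshold
    if not coarse_mask.any():            # nothing detected: flat saliency
        return np.full_like(prob_map, 0.5, dtype=np.float64)
    # Distance of every voxel to the nearest coarse-foreground voxel
    # (Euclidean here; the paper uses a geodesic distance).
    dist_to_object = distance_transform_edt(~coarse_mask)
    # Exponential fall-off: 1 inside the coarse mask, decaying outside it.
    return np.exp(-dist_to_object / sigma)

# Toy usage: a random volume standing in for the coarse network output.
prob = np.random.rand(64, 64, 64)
saliency = distance_saliency(prob)
# In the fine stage, the saliency map would modulate image features,
# e.g. refined_input = image * saliency.
```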
The coronavirus disease 2019 (COVID-19) pandemic has led to a global public health crisis spreading across hundreds of countries. With the continuous growth of new infections, automated tools for COVID-19 identification from CT images are highly desired to assist clinical diagnosis and reduce the tedious workload of image interpretation. To enlarge the datasets available for developing machine learning methods, it is helpful to aggregate cases from different medical systems in order to learn robust and generalizable models. This paper proposes a novel joint learning framework that performs accurate COVID-19 identification by effectively learning from heterogeneous datasets with distribution discrepancy. We build a powerful backbone by redesigning the recently proposed COVID-Net in terms of network architecture and learning strategy to improve prediction accuracy and learning efficiency. On top of our improved backbone, we further explicitly tackle the cross-site domain shift by conducting separate feature normalization in the latent space. Moreover, we propose a contrastive training objective to enhance the domain invariance of semantic embeddings, boosting the classification performance on each dataset. We develop and evaluate our method on two public large-scale COVID-19 diagnosis datasets made up of CT images. Extensive experiments show that our approach consistently improves performance on both datasets, outperforming the original COVID-Net trained on each dataset by 12.16% and 14.23% in AUC, respectively, and also exceeding existing state-of-the-art multi-site learning methods.

Attention is an increasingly popular mechanism used in a wide range of neural architectures. The mechanism itself has been realized in a variety of formats. However, because of the fast-paced advances in this domain, a systematic overview of attention is still missing. In this article, we define a unified model for attention architectures in natural language processing, with a focus on those designed to work with vector representations of textual data. We propose a taxonomy of attention models according to four dimensions: the representation of the input, the compatibility function, the distribution function, and the multiplicity of the input and/or output. We present examples of how prior information can be exploited in attention models and discuss ongoing research efforts and open challenges in the area, providing the first extensive categorization of the vast body of literature in this exciting domain.

In computer vision, it is challenging to train an accurate model without sufficient labeled images. However, through visual adaptation from source to target domains, a relevant labeled dataset can help solve this problem. Many methods apply adversarial learning to diminish the cross-domain distribution difference and can greatly enhance performance on target classification tasks. The generative adversarial network (GAN) loss is widely used in adversarial adaptation learning methods to reduce the cross-domain distribution difference. However, it becomes difficult to reduce this difference if the generator or discriminator in the GAN fails to work as expected and degrades in performance. To solve such cross-domain classification problems, we put forward a novel adaptation framework called generative adversarial distribution matching (GADM). In GADM, we improve the objective function by taking the cross-domain discrepancy distance into consideration and further minimize the difference through the competition between a generator and a discriminator, thereby greatly decreasing the cross-domain distribution difference.
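The two ingredients named in the COVID-19 joint-learning abstract above, separate feature normalization per site and a contrastive training objective, might look roughly like the following PyTorch sketch. The tiny encoder, layer sizes, temperature and loss weighting are assumptions made for illustration and are not the paper's actual COVID-Net configuration.

```python
# Hedged sketch: per-site normalization of latent features + a supervised
# contrastive-style loss. All hyperparameters here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualSiteClassifier(nn.Module):
    def __init__(self, feat_dim=128, n_classes=2, n_sites=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        # One BatchNorm per site so normalization statistics are never mixed across datasets.
        self.site_norms = nn.ModuleList(nn.BatchNorm1d(feat_dim) for _ in range(n_sites))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x, site):
        z = self.site_norms[site](self.encoder(x))
        return self.head(z), F.normalize(z, dim=1)

def contrastive_loss(z, labels, temperature=0.1):
    """Same-class embeddings attract, different-class embeddings repel."""
    sim = z @ z.t() / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)  # exclude self-similarity
    pos = (labels[:, None] == labels[None, :]).float().masked_fill(self_mask, 0)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()

# One illustrative training step on a toy batch from "site 0".
model = DualSiteClassifier()
x, y = torch.randn(8, 1, 32, 32), torch.randint(0, 2, (8,))
logits, z = model(x, site=0)
loss = F.cross_entropy(logits, y) + 0.1 * contrastive_loss(z, y)
loss.backward()
```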
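The unified attention model described in the survey abstract above can be made concrete by treating an attention layer as the composition of a compatibility function (scoring keys against a query) and a distribution function (turning scores into weights). The dot-product and additive scorers and the softmax distribution below are standard textbook choices used only to illustrate those taxonomy dimensions; the default parameters are arbitrary.

```python
# Generic attention: weights = distribution(compatibility(query, keys)).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dot_compatibility(query, keys):
    return keys @ query                          # multiplicative / dot-product scoring

def additive_compatibility(query, keys, W=None, v=None):
    d = keys.shape[1]
    W = np.eye(d) if W is None else W            # illustrative default parameters
    v = np.ones(d) if v is None else v
    return np.tanh(keys @ W + query @ W) @ v     # Bahdanau-style additive scoring

def attend(query, keys, values, compatibility=dot_compatibility, distribution=softmax):
    """Compose the two taxonomy dimensions into a full attention step."""
    weights = distribution(compatibility(query, keys))
    return weights @ values, weights

# Toy usage: 4 key/value vectors of dimension 8, one query.
keys = values = np.random.rand(4, 8)
query = np.random.rand(8)
context, weights = attend(query, keys, values, compatibility=additive_compatibility)
print(weights.round(3), context.shape)
```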
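A rough sketch of the kind of objective the GADM abstract describes is given below: a feature generator and a domain discriminator play the usual adversarial game, while an explicit cross-domain discrepancy term is minimized alongside it. The RBF-kernel MMD is only one possible choice of discrepancy distance, assumed here for illustration; the abstract does not specify GADM's actual distance measure or network shapes.

```python
# Hedged sketch: adversarial domain matching plus an explicit discrepancy term.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rbf_mmd(x, y, sigma=1.0):
    """Maximum mean discrepancy with a Gaussian kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

generator = nn.Sequential(nn.Linear(256, 64), nn.ReLU())   # feature extractor
discriminator = nn.Sequential(nn.Linear(64, 1))            # source vs. target

def generator_loss(src, tgt, lam=1.0):
    zs, zt = generator(src), generator(tgt)
    # Fool the discriminator on target features and shrink the explicit discrepancy.
    adv = F.binary_cross_entropy_with_logits(discriminator(zt), torch.ones(len(zt), 1))
    return adv + lam * rbf_mmd(zs, zt)

def discriminator_loss(src, tgt):
    zs, zt = generator(src).detach(), generator(tgt).detach()
    real = F.binary_cross_entropy_with_logits(discriminator(zs), torch.ones(len(zs), 1))
    fake = F.binary_cross_entropy_with_logits(discriminator(zt), torch.zeros(len(zt), 1))
    return real + fake

# Toy usage with random source/target feature batches.
src, tgt = torch.randn(16, 256), torch.randn(16, 256)
print(generator_loss(src, tgt).item(), discriminator_loss(src, tgt).item())
```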

Article authors: Lehmannjohansson3147 (Tarp Duffy)