Brain tissue segmentation from multimodal MRI is a key building block of many neuroimaging analysis pipelines. Established tissue segmentation approaches, however, were not developed to cope with large anatomical changes resulting from pathology, such as white matter lesions or tumours, and often fail in these cases. Meanwhile, with the advent of deep neural networks (DNNs), segmentation of brain lesions has matured significantly. However, few existing approaches allow for the joint segmentation of normal tissue and brain lesions. Developing a DNN for such a joint task is currently hampered by the fact that annotated datasets typically address only one specific task and rely on task-specific imaging protocols, including a task-specific set of imaging modalities. In this work, we propose a novel approach to building a joint tissue and lesion segmentation model from aggregated task-specific, hetero-modal, domain-shifted and partially annotated datasets. Starting from a variational formulation of the joint problem, we show how the expected risk can be decomposed and optimised empirically. We exploit an upper bound of the risk to deal with heterogeneous imaging modalities across datasets. To deal with potential domain shift, we integrate and test three conventional techniques based on data augmentation, adversarial learning and pseudo-healthy generation. For each individual task, our joint approach reaches performance comparable to task-specific, fully supervised models. The proposed framework is assessed on two different types of brain lesions: white matter lesions and gliomas. In the latter case, lacking a joint ground truth for quantitative assessment, we propose and use a novel clinically relevant qualitative assessment methodology.

Classification of digital pathology images is imperative in cancer diagnosis and prognosis.
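The joint segmentation abstract above does not spell out its loss; as a minimal sketch of how partially annotated datasets can be combined with a joint model, here is the standard marginalization trick. The class list, the coarse-to-fine mapping, and the function name are illustrative assumptions, not the paper's exact formulation: a lesion-only dataset labels each voxel only as lesion vs. non-lesion, and the loss penalizes the negative log of the joint model's total probability mass over all fine classes compatible with that coarse label.

```python
import numpy as np

# Fine classes of a hypothetical joint tissue-and-lesion model (illustrative).
CLASSES = ["background", "GM", "WM", "CSF", "lesion"]

# A lesion-only dataset provides only coarse labels; map each coarse label
# to the indices of the fine classes it is compatible with.
COARSE_TO_FINE = {
    "non-lesion": [0, 1, 2, 3],  # background, GM, WM or CSF
    "lesion": [4],
}

def marginal_cross_entropy(probs, coarse_labels):
    """-log of the total predicted mass on the fine classes compatible
    with the coarse (partial) annotation, averaged over voxels."""
    losses = []
    for p, lab in zip(probs, coarse_labels):
        mass = p[COARSE_TO_FINE[lab]].sum()
        losses.append(-np.log(mass + 1e-12))
    return float(np.mean(losses))
```

Under this surrogate, a voxel annotated only as "non-lesion" incurs no penalty as long as the model places its probability mass anywhere among the tissue classes, so the tissue head is never wrongly penalized by lesion-only datasets.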
Recent advancements in deep learning and computer vision have greatly benefited the pathology workflow by providing automated solutions for classification tasks. However, acquiring high-quality, task-specific, large annotated training datasets is costly and time-consuming, and annotations are subject to intra- and inter-observer variability, which challenges the adoption of such tools. To address these challenges, we propose a classification framework based on co-representation learning that maximizes the learning capability of deep neural networks while using a reduced amount of training data. The framework captures class-label information and local spatial distribution information by jointly optimizing a categorical cross-entropy objective and a deep metric learning objective, respectively; the metric learning objective enhances classification especially in the low-training-data regime. Further, a neighborhood-aware multiple similarity sampling strategy and a soft multi-pair objective that optimizes interactions between multiple informative sample pairs are proposed to accelerate deep metric learning. We evaluate the proposed framework on five benchmark datasets from three digital pathology tasks, i.e., nuclei classification, mitosis detection, and tissue type classification. For all the datasets, our framework achieves state-of-the-art performance when using only approximately 50% of the training data, and it outperforms the state of the art on all five datasets when using the complete training data.

Brain connectivity networks, derived from magnetic resonance imaging (MRI), non-invasively quantify the relationship in function, structure, and morphology between two brain regions of interest (ROIs) and give insights into gender-related connectional differences.
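The co-representation framework above combines a classification loss with a metric-learning loss on embeddings. The exact soft multi-pair objective is not given in the abstract, so this sketch substitutes a plain contrastive pairwise term; the function names, the margin, and the weighting lambda are assumptions for illustration only.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Categorical cross-entropy over a batch (labels are class indices)."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

def pairwise_metric_loss(emb, labels, margin=1.0):
    """Contrastive stand-in for the paper's soft multi-pair objective:
    pull same-class embeddings together, push different-class pairs
    apart up to a margin."""
    total, count = 0.0, 0
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            d = np.linalg.norm(emb[i] - emb[j])
            total += d**2 if labels[i] == labels[j] else max(0.0, margin - d) ** 2
            count += 1
    return total / max(count, 1)

def joint_loss(logits, emb, labels, lam=0.5):
    # co-representation: classification term + weighted metric term
    return cross_entropy(logits, labels) + lam * pairwise_metric_loss(emb, labels)
```

When the embedding space is already well clustered by class, the metric term vanishes and the joint loss reduces to plain cross-entropy; when same-class samples are scattered, the extra term dominates, which is the regime the paper targets with limited training data.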
However, to the best of our knowledge, studies on gender differences in brain connectivity have been limited to investigating pairwise (i.e., low-order) relationships across ROIs, overlooking the complex high-order interconnectedness of the brain as a network. A few recent works on neurological disorders addressed this limitation by introducing the brain multiplex, which is composed of a source intra-layer network, a target intra-layer network, and a convolutional interlayer capturing the high-level relationship between both intra-layers. However, brain multiplexes are built from at least two different brain networks, hindering their application to connectomic datasets with a single brain network per subject (e.g., functional networks). To fill this gap, we propose the Adversarial Brain Multiplex Translator (ABMT), the first work to predict brain multiplexes from a source network using geometric adversarial learning in order to investigate gender differences in the human brain. Our framework comprises (i) a geometric source-to-target network translator mimicking a U-Net architecture with skip connections, (ii) a conditional discriminator which distinguishes between predicted and ground-truth target intra-layers, and (iii) a multi-layer perceptron (MLP) classifier which supervises the prediction of the target multiplex using the subject class label (e.g., gender). Our experiments on a large dataset demonstrate that the predicted multiplexes significantly boost gender classification accuracy compared with source networks alone and, for the first time, identify both low- and high-order gender-specific brain multiplex connections. Our ABMT source code is available on GitHub at https://github.com/basiralab/ABMT.

Automatic and accurate esophageal lesion classification and segmentation is of great significance for clinically estimating the lesion status of esophageal diseases and devising suitable diagnostic schemes.
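The ABMT abstract above lists three components (translator, conditional discriminator, label-supervising MLP classifier), which suggests a composite generator objective. The term structure, weights, and function signature below are plausible assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def generator_objective(d_fake, pred_target, true_target,
                        cls_logits, cls_label, w_rec=10.0, w_cls=1.0):
    """Composite translator loss: adversarial + reconstruction +
    class-label supervision (all weights illustrative)."""
    # adversarial term: the translator wants the discriminator to score
    # its predicted intra-layers as real (scores in (0, 1))
    adv = -np.mean(np.log(d_fake + 1e-12))
    # reconstruction: L1 distance between predicted and ground-truth
    # target intra-layer adjacency matrices
    rec = np.mean(np.abs(pred_target - true_target))
    # supervision: cross-entropy of the MLP's prediction of the subject
    # label (e.g., gender) from the predicted multiplex
    z = cls_logits - cls_logits.max()
    log_p = z - np.log(np.exp(z).sum())
    cls = -log_p[cls_label]
    return adv + w_rec * rec + w_cls * cls
```

The classification term is what makes the translation label-aware: a predicted multiplex that fools the discriminator but erases gender-discriminative connections would still be penalized through the MLP.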
Due to individual variations and visual similarities of lesions in shape, color, and texture, current clinical methods remain prone to high risk and time consumption. In this paper, we propose an Esophageal Lesion Network (ELNet) for automatic esophageal lesion classification and segmentation using deep convolutional neural networks (DCNNs). The method automatically integrates dual-view contextual lesion information to extract global and local features for esophageal lesion classification, and a lesion-specific segmentation network is proposed for automatic esophageal lesion annotation at the pixel level. On an established large-scale clinical database of 1051 white-light endoscopic images, ten-fold cross-validation is used for method validation. Experimental results show that the proposed framework achieves classification with sensitivity of 0.