Haslundgilliam8607



We report on the application of state-of-the-art deep learning techniques to the automatic and interpretable assignment of ICD-O3 topography and morphology codes to free-text cancer reports. We present results on a large dataset (more than 80,000 labeled and 1,500,000 unlabeled anonymized reports written in Italian and collected from hospitals in Tuscany over more than a decade) and with a large number of classes (134 morphological classes and 61 topographical classes). We compare alternative architectures in terms of prediction accuracy and interpretability and show that our best model achieves a multiclass accuracy of 90.3% on topography site assignment and 84.8% on morphology type assignment. We found that, in this context, hierarchical models are not better than flat models and that an element-wise maximum aggregator is slightly better than attentive models on site classification. Moreover, the maximum aggregator offers a way to interpret the classification process.
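The interpretability afforded by an element-wise maximum aggregator can be illustrated with a minimal sketch. The encoder type, vocabulary size and embedding dimensions below are assumptions chosen for the example, not the architecture used in the study; only the max-pooling step and the 61 topography classes come from the description above.

```python
import torch
import torch.nn as nn

class MaxAggregatorClassifier(nn.Module):
    """Toy report classifier: per-token states are reduced with an
    element-wise maximum, so every pooled feature can be traced back to
    the token that produced it."""

    def __init__(self, vocab_size=30000, emb_dim=128, hidden=256, n_classes=61):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)  # assumed encoder
        self.classify = nn.Linear(2 * hidden, n_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor of report tokens
        states, _ = self.encoder(self.embed(token_ids))   # (batch, seq_len, 2*hidden)
        pooled, winners = states.max(dim=1)               # element-wise max over tokens
        logits = self.classify(pooled)
        # `winners` records, for each feature, the position of the token that
        # supplied the maximum; counting these positions per token yields a
        # simple relevance profile over the report text.
        return logits, winners
```

Because only the winning token positions feed the pooled representation, attributing a prediction back to specific words reduces to inspecting `winners`, which is the kind of interpretation the comparison above refers to.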
Eye-tracking technology is an innovative tool that holds promise for enhancing dementia screening. In this work, we introduce a novel way of extracting salient features directly from the raw eye-tracking data of a mixed sample of dementia patients during a novel instruction-less cognitive test. Our approach is based on self-supervised representation learning: by first training a deep neural network to solve a pretext task with well-defined available labels (e.g. recognising distinct cognitive activities in healthy individuals), the network encodes high-level semantic information that is useful for solving other problems of interest (e.g. dementia classification). Inspired by previous work in explainable AI, we use the Layer-wise Relevance Propagation (LRP) technique to describe our network's decisions in differentiating between the distinct cognitive activities. The extent to which eye-tracking features of dementia patients deviate from healthy behaviour is then explored, followed by a comparison between self-supervised and handcrafted representations in discriminating between participants with and without dementia. Our findings not only reveal novel self-supervised learning features that are more sensitive than handcrafted features in detecting performance differences between participants with and without dementia across a variety of tasks, but also validate that instruction-less eye-tracking tests can detect oculomotor biomarkers of dementia-related cognitive dysfunction. This work highlights the contribution of self-supervised representation learning techniques in biomedical applications where the small number of patients, the non-homogeneous presentation of the disease and the complexity of the setting pose a challenge for state-of-the-art feature extraction methods.

With the increasing availability of electronic medical records (EMRs), disease prediction has recently gained immense research attention, where an accurate classifier needs to be trained to map the input prediction signals (e.g., symptoms, patient demographics) to the estimated diseases for each patient. However, existing machine learning-based solutions rely heavily on abundant manually labeled EMR training data to ensure accurate predictions, impeding their performance in the presence of rare diseases that are subject to severe data scarcity. For each rare disease, the limited EMR data can hardly offer sufficient information for a model to correctly distinguish it from other diseases with similar clinical symptoms. Furthermore, most existing disease prediction approaches are based on the sequential EMRs collected for every patient and are unable to handle new patients without historical EMRs, reducing their real-life practicality. In this paper, we introduce an innovative model based on Graph Neural Networks (GNNs) for disease prediction, which utilizes external knowledge bases to augment the insufficient EMR data and learns highly representative node embeddings for patients, diseases and symptoms from a medical concept graph and a patient record graph, constructed from the medical knowledge base and the EMRs respectively. By aggregating information from directly connected neighbor nodes, the proposed neural graph encoder can effectively generate embeddings that capture knowledge from both data sources, and it can inductively infer the embedding for a new patient based on the symptoms reported in her/his EMRs, allowing for accurate prediction on both general and rare diseases. Extensive experiments on a real-world EMR dataset demonstrate the state-of-the-art performance of our proposed model.
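A minimal sketch of the neighbor-aggregation idea behind such a graph encoder is shown below. The single mean-pooling layer, the feature sizes and the placeholder disease embeddings are assumptions made for illustration; only the general pattern of combining a node with its directly connected neighbors and inferring an embedding for an unseen patient from reported symptoms follows the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborAggregator(nn.Module):
    """One toy GNN layer: a node's new embedding combines its own features
    with the mean of its neighbors' features."""

    def __init__(self, in_dim=64, out_dim=64):
        super().__init__()
        self.linear = nn.Linear(2 * in_dim, out_dim)

    def forward(self, node_feats, neighbor_feats):
        # node_feats:     (n_nodes, in_dim)
        # neighbor_feats: (n_nodes, n_neighbors, in_dim)
        agg = neighbor_feats.mean(dim=1)      # aggregate directly connected neighbors
        return F.relu(self.linear(torch.cat([node_feats, agg], dim=-1)))

# Inductive inference for a new patient (all tensors here are hypothetical):
layer = NeighborAggregator()
symptom_embs = torch.randn(1, 5, 64)        # embeddings of 5 symptoms in the new EMR
patient_init = symptom_embs.mean(dim=1)     # simple initialisation for the unseen patient node
patient_emb = layer(patient_init, symptom_embs)

disease_embs = torch.randn(100, 64)             # placeholder disease-node embeddings
disease_scores = patient_emb @ disease_embs.T   # rank candidate diseases for the new patient
```

Because the patient embedding is computed purely from its neighboring symptom nodes, no historical record of that patient is required, which is what makes the inference inductive.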
Recent developments in machine learning algorithms have enabled models to exhibit impressive performance in healthcare tasks using electronic health record (EHR) data. However, the heterogeneous nature and sparsity of EHR data remain challenging. In this work, we present a model that utilizes heterogeneous data and addresses sparsity by representing diagnosis, procedure, and medication codes with temporal Hierarchical Clinical Embeddings combined with Topic modeling (HCET) on clinical notes. HCET aggregates various categories of EHR data and learns the inherent structure based on hospital visits for an individual patient. We demonstrate the potential of the approach on the task of predicting depression at various time points prior to a clinical diagnosis. We found that HCET outperformed all baseline methods, with a highest improvement of 0.07 in precision-recall area under the curve (PRAUC). Furthermore, applying attention weights across EHR data modalities significantly improved performance as well as the model's interpretability by revealing the relative weight of each data modality. Our results demonstrate the model's ability to utilize heterogeneous EHR information to predict depression, which may have future implications for screening and early detection.

The increasing penetration of wearable and implantable devices necessitates energy-efficient and robust ways of connecting them to each other and to the cloud. However, the wireless channel around the human body poses unique challenges, such as high and variable path loss caused by frequent changes in the relative node positions as well as in the surrounding environment. An adaptive wireless body area network (WBAN) scheme is presented that reconfigures the network by learning from body kinematics and biosignals. It has very low overhead, since these signals are already captured by the WBAN sensor nodes to support their basic functionality. Periodic channel fluctuations in activities such as walking can be exploited by reusing accelerometer data and scheduling packet transmissions at optimal times. Network states can be predicted from changes in the observed biosignals in order to reconfigure the network parameters in real time. A realistic body channel emulator that evaluates the path loss for everyday human activities was developed to assess the efficacy of the proposed techniques.
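How accelerometer data might be reused to time packet transmissions within the gait cycle can be sketched as follows. The sampling rate, the autocorrelation-based period estimate and the notion of a single "best phase" are assumptions introduced for this illustration, not details of the scheme summarised above.

```python
import numpy as np

def estimate_gait_period(accel_mag, fs=100.0):
    """Estimate the dominant stride period (seconds) from the magnitude of
    the accelerometer signal via its autocorrelation (fs = sample rate, Hz)."""
    x = accel_mag - accel_mag.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]   # one-sided autocorrelation
    lo, hi = int(0.4 * fs), int(2.0 * fs)               # search strides between 0.4 s and 2 s
    lag = lo + int(np.argmax(ac[lo:hi]))
    return lag / fs

def next_transmit_times(t_now, period, best_phase, horizon=5):
    """Schedule the next few transmissions at the phase of the gait cycle where
    the (separately measured) path loss has been lowest."""
    k = np.ceil((t_now - best_phase) / period)
    return best_phase + (k + np.arange(horizon)) * period

# Synthetic walking example: 1.1 s stride sampled at 100 Hz (hypothetical values)
fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
accel = 1.0 + 0.5 * np.sin(2 * np.pi * t / 1.1) + 0.05 * np.random.randn(t.size)
period = estimate_gait_period(accel, fs)
print(next_transmit_times(t_now=10.0, period=period, best_phase=0.2))
```

The point of the sketch is only that the periodicity already present in the accelerometer stream suffices to place transmissions at recurring, channel-favourable instants without additional sensing overhead.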

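Returning to the HCET depression-prediction model described above, the attention over EHR data modalities that exposes each modality's relative weight can also be illustrated with a small sketch. The embedding size, the number of modalities and the softmax scoring below are assumptions made for the example, not the published architecture.

```python
import torch
import torch.nn as nn

class ModalityAttention(nn.Module):
    """Toy attention over per-visit modality embeddings (e.g. diagnoses,
    medications, note topics). The softmax weights double as an
    interpretability signal showing how much each modality contributed."""

    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, modality_embs):
        # modality_embs: (batch, n_modalities, dim)
        weights = torch.softmax(self.score(modality_embs).squeeze(-1), dim=-1)
        fused = (weights.unsqueeze(-1) * modality_embs).sum(dim=1)
        return fused, weights       # weights sum to 1 across modalities

# Hypothetical usage: three modality embeddings for one visit
attend = ModalityAttention()
fused, w = attend(torch.randn(1, 3, 64))
print(w)   # relative contribution of each modality for this visit
```

Reading off `w` per visit is the kind of modality-level interpretability that the abstract attributes to its attention weights.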
Article authors: Haslundgilliam8607 (Dickinson Haley)