Karlssonkanstrup3445


Mental health has become a growing concern in the medical field, yet it remains difficult to study due to both privacy concerns and the lack of objectively quantifiable measurements (e.g., lab tests, physical exams). Instead, the data available for mental health is largely based on subjective accounts of a patient's experience and is therefore typically expressed exclusively in text. An important source of such data is content provided directly by patients online, including many forms of social media. In this work, we utilize the datasets provided by the CLPsych shared tasks in 2016 and 2017, derived from posts on the ReachOut online forum that have been manually classified according to mental health severity. We implemented an automated severity labeling system using different machine and deep learning algorithms. Our approach combines both supervised and semi-supervised embedding methods using corpora from ReachOut (both labeled and unlabeled) and WebMD (unlabeled). Metadata, syntactic, semantic, and embedding features were used to classify the posts into four categories (green, amber, red, and crisis). The developed systems outperformed other state-of-the-art systems developed on the ReachOut dataset and obtained maximum micro-averaged F-scores of 0.86 and 0.80 on the CLPsych 2016 and 2017 test datasets, respectively, using the above features.

Diabetes mellitus is the putative cause of a number of pathologies occurring in the bony and soft tissues of the maxillofacial region and is known to exacerbate other oral diseases such as periodontitis. We present the first use of clinical panoramic radiographs for a secondary analysis of disease, with a focus on identifying hotspots in the maxillofacial region that are associated with diabetes. We developed a curated data set using Consensus Landmark Points (CLPs) and used that data to develop an analysis pipeline. This pipeline entailed automatic data cleansing, registration, and intensity normalization. The pipeline was used to process 7280 uncurated images that were subsequently analyzed using pixel-wise methods for a case/control study of patients with a history of diabetes. We detected statistically significant clusters of pixels that demarcated anatomical hotspots specific to the diabetic patients.

In this work, we aim to enhance the reliability of health information technology (HIT) systems by detecting plausible HIT hazards in clinical order transactions. In the absence of well-defined event logs in corporate data warehouses, our proposed approach identifies relevant timestamped data fields that could indicate transactions in the clinical order life cycle, generating raw event sequences. Subsequently, we adopt the state transitions of the OASIS Human Task standard to map the raw event sequences and simplify the complex process that clinical radiology orders go through. We describe how this approach makes it possible to investigate areas of improvement and potential hazards in HIT systems using process mining. The discussion concludes with a use case and opportunities for future applications.
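As a rough illustration of the four-class severity labeling described in the CLPsych abstract above, the following sketch trains a TF-IDF bag-of-words classifier and reports a micro-averaged F-score. The toy posts and labels are invented, and TF-IDF stands in for the richer metadata, syntactic, semantic, and embedding features used by the actual system.

```python
# Minimal sketch of a four-class post-severity classifier in the spirit of the
# CLPsych setup above. Posts and labels are invented for illustration; the
# original system used ReachOut data and a much richer feature set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

posts = [
    "Feeling great after talking to my counsellor today",
    "Having a rough week but coping okay",
    "I can't handle this anymore, everything is falling apart",
    "I am in crisis and need help right now",
]
labels = ["green", "amber", "red", "crisis"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(posts, labels)

preds = clf.predict(posts)
print("micro-averaged F1:", f1_score(labels, preds, average="micro"))
```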
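The pixel-wise case/control comparison from the panoramic-radiograph abstract can be sketched roughly as below, assuming the images have already been registered and intensity-normalized. Synthetic arrays stand in for the real radiographs, and the cluster-level statistics and multiple-comparison handling of the actual pipeline are omitted.

```python
# Rough sketch of a pixel-wise case/control comparison on registered,
# intensity-normalized images. Data are synthetic; a simulated "hotspot"
# is added to the case group for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
controls = rng.normal(100, 10, size=(50, 64, 64))   # 50 control images, 64x64
cases = rng.normal(100, 10, size=(50, 64, 64))      # 50 case images
cases[:, 20:30, 20:30] += 8                          # simulated hotspot region

# Two-sample t-test at every pixel position across the two groups
t, p = stats.ttest_ind(cases, controls, axis=0)
hotspot_mask = p < 0.001                             # uncorrected threshold, for illustration
print("significant pixels:", int(hotspot_mask.sum()))
```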
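The event-sequence construction step described in the HIT-hazard abstract can be sketched as follows: timestamped order fields are reshaped into an ordered event log and mapped to OASIS Human Task-style states. The column names and the state mapping here are illustrative assumptions, not the warehouse schema used by the authors.

```python
# Sketch: turn timestamped fields on radiology order rows into ordered event
# sequences, then map fields to OASIS Human Task-style states. Column names
# and the state mapping are assumptions for illustration.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [101, 102],
    "ordered_at": ["2023-01-01 08:00", "2023-01-01 08:15"],
    "scheduled_at": ["2023-01-01 09:00", None],
    "completed_at": ["2023-01-01 10:30", None],
})

state_map = {"ordered_at": "Created", "scheduled_at": "Reserved", "completed_at": "Completed"}

events = (
    orders.melt(id_vars="order_id", var_name="field", value_name="timestamp")
          .dropna(subset=["timestamp"])                       # keep only fields that actually fired
          .assign(timestamp=lambda d: pd.to_datetime(d["timestamp"]),
                  state=lambda d: d["field"].map(state_map))
          .sort_values(["order_id", "timestamp"])             # raw event sequence per order
)
print(events[["order_id", "state", "timestamp"]])
```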
Reflective writing is used by medical educators to identify challenges and promote inter-professional skills. These non-medical skills are central to leadership and career development, and are clinically relevant and vital to a trainee's success as a practicing physician. However, identification of actionable feedback from reflective writings can be challenging. In this work, we utilize a Natural Language Processing pipeline that incorporates a seeded Term Frequency-Inverse Document Frequency matrix along with sentence-level summarization, sentiment analysis, and clustering to organize sentences into groups, which can aid educators in assessing common challenges experienced by Acting Interns. Automated analysis of reflective writing is difficult due to its subjective nature; however, our method is able to identify known and new challenges such as issues accessing the electronic health system and adjusting to specialty differences. Medical educators can utilize these topics to identify areas needing attention in the medical curriculum and help students through this transitional time.

Polypharmacy is the use of drug combinations and is commonly used for treating complex and terminal diseases. Despite its effectiveness in many cases, it poses high risks of adverse side effects. Polypharmacy side-effects occur due to unwanted interactions of the combined drugs, and they can cause severe complications for patients, increasing morbidity and leading to new mortalities. The use of drug polypharmacy is still in its early stages; thus, knowledge of its probable side-effects is limited. This has encouraged multiple works to investigate machine learning techniques to efficiently and reliably predict adverse effects of drug combinations. In this context, the Decagon model is known to provide state-of-the-art results. It models polypharmacy side-effect data as a knowledge graph and formulates finding possible adverse effects as a link prediction task over the knowledge graph. The link prediction is solved using an embedding model based on graph convolutions. Despite its effectiveness, the Decagon approach still suffers from a high rate of false positives. In this work, we propose a new knowledge graph embedding technique that uses multi-part embedding vectors to predict polypharmacy side-effects. As in the Decagon model, we model polypharmacy side effects as a knowledge graph. However, we perform the link prediction task using an approach based on tensor decomposition. Our experimental evaluation shows that our approach outperforms the Decagon model by margins of 12% and 16% in terms of the area under the ROC and precision-recall curves, respectively.

Precision oncology research seeks to derive knowledge from existing data. Current work seeks to integrate clinical and genomic data across cancer centers to enable impactful secondary use. However, the reliability of integrated data depends on the data curation method used and its systematicity. In practice, data integration and mapping are often done manually, even though crucial data such as oncological diagnoses (DX) show varying levels of accuracy and specificity. We hypothesized that mapping of text-form cancer DX to a standardized terminology (OncoTree) could be automated using existing methods (e.g., natural language processing (NLP) modules and application programming interfaces [APIs]). We found that our best-performing pipeline prototype was effective but limited by API development constraints (it accurately mapped 96.2% of the textual DX dataset to the NCI Thesaurus (NCIt), and 44.2% through NCIt to OncoTree). These results suggest the pipeline model could be viable for automating data curation. Such techniques may become increasingly reliable with further development.
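A minimal sketch of the seeded TF-IDF and clustering steps from the reflective-writing abstract, under the assumption that "seeding" can be approximated by restricting the vocabulary to educator-chosen terms; the seed terms, sentences, and cluster count are invented, and the summarization and sentiment-analysis stages of the full pipeline are left out.

```python
# Sketch: seeded TF-IDF (vocabulary restricted to seed terms) plus k-means
# clustering of reflective-writing sentences. All inputs are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

seed_terms = ["ehr", "access", "specialty", "workflow", "feedback", "rotation"]
sentences = [
    "I struggled to get access to the EHR during my first week",
    "Adjusting to specialty differences in workflow was harder than expected",
    "Getting timely feedback on my notes helped me improve",
    "The EHR access delays made order entry stressful",
]

# Restricting the vocabulary to seed terms keeps clusters anchored to known themes
vectorizer = TfidfVectorizer(vocabulary=seed_terms, lowercase=True)
X = vectorizer.fit_transform(sentences)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for sentence, label in zip(sentences, kmeans.labels_):
    print(label, sentence)
```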
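To make the multi-part embedding idea from the polypharmacy abstract concrete, the sketch below scores (drug, side-effect, drug) triples with a ComplEx-style tensor-decomposition function. This is a generic stand-in rather than the authors' exact model, and the embeddings are random instead of trained.

```python
# Toy sketch of tensor-decomposition-style link scoring for
# (drug, side-effect, drug) triples using two-part (real/imaginary) embeddings.
# A ComplEx-style score is used here as a stand-in; embeddings are untrained.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
drugs = {"drug_a": 0, "drug_b": 1}
relations = {"nausea": 0, "hypotension": 1}

# Real and imaginary parts form the "multi-part" entity/relation representations
ent_re, ent_im = rng.normal(size=(2, dim)), rng.normal(size=(2, dim))
rel_re, rel_im = rng.normal(size=(2, dim)), rng.normal(size=(2, dim))

def score(h, r, t):
    """ComplEx-style trilinear score for a (head drug, side effect, tail drug) triple."""
    hr, hi = ent_re[h], ent_im[h]
    rr, ri = rel_re[r], rel_im[r]
    tr, ti = ent_re[t], ent_im[t]
    return float(np.sum(hr * rr * tr + hi * rr * ti + hr * ri * ti - hi * ri * tr))

print(score(drugs["drug_a"], relations["nausea"], drugs["drug_b"]))
```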
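A small local stand-in for the diagnosis-mapping step in the precision-oncology abstract: free-text DX strings are normalized and fuzzily matched against a hand-made OncoTree-style lookup using only the standard library. The real prototype relied on NLP modules and terminology APIs (NCIt); the terms and codes below are illustrative only.

```python
# Sketch: map free-text cancer diagnoses to an OncoTree-style code via
# normalization and stdlib fuzzy matching. The lookup table is hypothetical.
import difflib

oncotree_terms = {
    "lung adenocarcinoma": "LUAD",
    "invasive breast carcinoma": "BRCA",
    "colon adenocarcinoma": "COAD",
}

def map_dx(free_text: str, cutoff: float = 0.6):
    """Return (matched term, code) for the closest OncoTree-style term, or None."""
    query = free_text.strip().lower()
    match = difflib.get_close_matches(query, oncotree_terms, n=1, cutoff=cutoff)
    return (match[0], oncotree_terms[match[0]]) if match else None

print(map_dx("Adenocarcinoma of the lung"))
```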

Article authors: Karlssonkanstrup3445 (Epstein Mccarthy)