The aim of the present study was to develop and evaluate the usability of a telemedicine system for management and monitoring of patients with diabetic foot.
This study was conducted in four phases. In the first phase, the information needs and characteristics required to design the telemedicine system were identified through a literature review, and a two-stage Delphi survey of 15 experts then approved the identified information needs and characteristics. In the second phase, the prototype telemedicine system was designed. In the third phase, system usability was evaluated through semi-structured interviews. In the fourth phase, users' satisfaction with the designed system was analyzed.
Out of 115 information needs and required characteristics, 95 were considered in the system design. Eight main pages were designed to enable patient-physician and physician-physician interactions, monitor the patient and control the disease process, provide medical consultations, and prescribe medications. After examining the medical history or images and videos, the physician can prescribe the necessary medications and laboratory tests or provide other recommendations.
E-detailing methods have steadily evolved toward more contactless and interactive channels, which received considerable attention during the coronavirus disease 2019 (COVID-19) crisis. Based on the technology acceptance model, this study attempted to identify how medical representatives' perceptions, attitudes, and individual innovativeness affected their intentions to adopt new e-detailing devices utilizing information and communication technology.
The subjects of the current study were medical representatives at three major multinational or domestic pharmaceutical companies operating in South Korea. In total, 300 questionnaires were distributed and 221 were returned. The survey elicited information on respondents' perceived ease of use (PEOU), perceived usefulness (PU), personal innovativeness (PI), and user acceptance (UA) of remote e-detailing technology, in addition to demographic information and occupational characteristics. Structural equation models were fitted to the data. The results suggest that medical representatives may play the role of early adopters of remote e-detailing if they find this technology to be more useful.
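As a rough illustration of how such a structural model can be fitted, the sketch below uses the Python semopy package. The item names (peou1 through ua3), the data file, and the specific paths among the constructs are assumptions for illustration, not the authors' specification.

```python
import pandas as pd
from semopy import Model  # pip install semopy

# Hypothetical TAM specification: PEOU, PU, and PI predicting user acceptance (UA).
# Item names (peou1 ... ua3) and the data file are placeholders, not the study's.
DESC = """
PEOU =~ peou1 + peou2 + peou3
PU   =~ pu1 + pu2 + pu3
PI   =~ pi1 + pi2 + pi3
UA   =~ ua1 + ua2 + ua3
PU ~ PEOU + PI
UA ~ PU + PEOU + PI
"""

df = pd.read_csv("survey_responses.csv")  # one row per respondent
model = Model(DESC)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, p-values
```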
This study developed and compared the performance of three widely used predictive models, namely logistic regression (LR), artificial neural network (ANN), and decision tree (DT) models, to predict diabetes mellitus using the socio-demographic, lifestyle, and physical attributes of a population of Nigerians.
We developed three predictive models using 10 input variables. Data preprocessing steps included the removal of missing values and outliers, min-max normalization, and feature extraction using principal component analysis. Data training and validation were accomplished using 10-fold cross-validation. Accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUROC) were used as performance evaluation metrics. Analysis and model development were performed in R version 3.6.1.
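A minimal sketch of this preprocessing and validation pipeline, written in Python with scikit-learn rather than the R used in the study. The file name, label column, 3-SD outlier rule, and 95% PCA variance threshold are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_validate

# Hypothetical file and label column; the study's actual data are not shown.
df = pd.read_csv("diabetes_survey.csv").dropna()  # remove missing values

# Simple z-score rule for outlier removal (the 3 SD cutoff is an assumption).
num = df.select_dtypes("number")
df = df[(np.abs((num - num.mean()) / num.std()) < 3).all(axis=1)]

X, y = df.drop(columns="diabetes"), df["diabetes"]

pipe = Pipeline([
    ("scale", MinMaxScaler()),        # min-max normalization
    ("pca", PCA(n_components=0.95)),  # PCA feature extraction (95% variance assumed)
    ("clf", LogisticRegression(max_iter=1000)),  # LR; ANN/DT can be swapped in here
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_validate(pipe, X, y, cv=cv, scoring=["accuracy", "recall", "roc_auc"])
print({k: v.mean().round(3) for k, v in scores.items() if k.startswith("test_")})
```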
The mean age of the participants was 50.52 ± 16.14 years. The classification accuracy, sensitivity, specificity, PPV, and NPV for LR were, respectively, 81.31%, 84.32%, 77.24%, 72.75%, and 82.49%. Those for ANN were 98.64%, 98.37%, 99.00%, 98.61%, and 98.83%, and those for DT were 99.05%, 99.76%, 98.08%, 98.77%, and 99.82%, respectively. The best-performing and poorest-performing classifiers were DT and LR, with 99.05% and 81.31% accuracy, respectively. Similarly, the DT algorithm achieved the best AUC value (0.992) compared to ANN (0.976) and LR (0.892).
Our study demonstrated that DT, LR, and ANN models can be used effectively for the prediction of diabetes mellitus in the Nigerian population based on certain risk factors. An overall comparative analysis of the models showed that the DT model performed better than LR and ANN.
A primary brain tumor starts to grow from brain cells themselves, arising from errors in the DNA of normal cells. Therefore, this study was carried out to analyze the two-dimensional (2D) texture, morphology, and statistical features of brain tumors and to perform classification using artificial intelligence (AI) techniques.
AI techniques can help radiologists diagnose primary brain tumors without any invasive measurements. In this paper, we focused on deep learning (DL) and machine learning (ML) techniques for the texture, morphological, and statistical feature classification of three tumor types (namely, glioma, meningioma, and pituitary tumors). T1-weighted magnetic resonance imaging (MRI) 2D scans were used for analysis and classification (multiclass and binary). A total of 102 features were calculated for each tumor, and the 20 most significant features were selected using a three-step feature selection method comprising removal of duplicate features, Pearson correlation filtering, and recursive feature elimination.
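A sketch of a three-step selection pipeline of this kind follows, assuming a |r| > 0.9 Pearson cutoff and a random-forest-based ranking estimator for the elimination step; the paper does not specify these choices, and the file and column names are placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Hypothetical file: one row per tumor, 102 feature columns plus a 'label' column.
df = pd.read_csv("tumor_features.csv")
y = df.pop("label")
X = df

# Step 1: remove duplicate feature columns.
X = X.loc[:, ~X.T.duplicated()]

# Step 2: drop one feature of each highly correlated pair
# (the |r| > 0.9 Pearson cutoff is an assumed threshold).
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
X = X.drop(columns=[c for c in upper.columns if (upper[c] > 0.9).any()])

# Step 3: recursive feature elimination down to the 20 most significant features.
rfe = RFE(RandomForestClassifier(random_state=0), n_features_to_select=20)
rfe.fit(X, y)
print(list(X.columns[rfe.support_]))
```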
Among the predicted results of the multiclass and binary classifications, a long short-term memory (LSTM) binary classifier (glioma vs. meningioma) showed the best performance, with an average accuracy, recall, precision, F1-score, and kappa coefficient of 97.7%, 97.2%, 97.5%, 97.0%, and 94.7%, respectively.
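For illustration, a minimal Keras LSTM binary classifier over the 20 selected features. Feeding each feature vector to the LSTM as a 20-step sequence of scalars is one plausible arrangement, and the layer sizes and training settings are assumptions rather than the paper's configuration.

```python
import numpy as np
import tensorflow as tf

# Treat each 20-feature vector as a 20-step sequence of scalars for the LSTM;
# this arrangement, the layer sizes, and the training settings are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 0 = glioma, 1 = meningioma
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.rand(200, 20, 1)         # placeholder feature data
y = np.random.randint(0, 2, size=200)  # placeholder labels
model.fit(X, y, epochs=5, batch_size=16, validation_split=0.2)
```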
The early diagnosis of primary brain tumors is very important because it can be the key to effective treatment. Therefore, this research presents a method for early diagnoses by effectively classifying three types of primary brain tumors.
This study analyzed the effects of computerization of medical information systems and a hospital payment scheme on medical care outcomes. Specifically, we examined the effects of Electronic Medical Records (EMRs) and a diagnosis procedure combination/per-diem payment scheme (DPC/PDPS) on the average length of hospital stay (ALOS).
Post-intervention changes in the monthly ALOS were measured using an interrupted time-series analysis.
The level changes observed in the monthly ALOS immediately post-DPC/PDPS were -1.942 (95% confidence interval [CI], -2.856 to -1.028), -1.885 (95% CI, -3.176 to -0.593), -1.581 (95% CI, -3.081 to -0.082), and -2.461 (95% CI, -3.817 to -1.105) days for all ages, <50, 50-64, and ≥65 years, respectively. During the post-DPC/PDPS period, trends of 0.107 (95% CI, 0.069 to 0.144), 0.048 (95% CI, -0.006 to 0.101), 0.183 (95% CI, 0.122 to 0.245), and 0.110 (95% CI, 0.054 to 0.167) days/month, respectively, were observed. During the post-EMR period, a downward trend of -0.053 (95% CI, -0.080 to …) days/month was observed, suggesting that the EMR contributed to sustainably reducing the ALOS.
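The level and trend changes above come from an interrupted time-series (segmented regression) model. A minimal Python sketch with statsmodels follows; the file name, column names, and intervention months are placeholders, not the study's actual values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly series with an 'alos' column; intervention months assumed.
df = pd.read_csv("monthly_alos.csv")
df["t"] = np.arange(len(df))                # months since study start
df["dpc"] = (df["t"] >= 24).astype(int)     # 1 after DPC/PDPS introduction
df["t_dpc"] = (df["t"] - 24).clip(lower=0)  # months since DPC/PDPS
df["emr"] = (df["t"] >= 60).astype(int)     # 1 after EMR introduction
df["t_emr"] = (df["t"] - 60).clip(lower=0)  # months since EMR

# 'dpc'/'emr' estimate level changes; 't_dpc'/'t_emr' estimate trend changes.
fit = smf.ols("alos ~ t + dpc + t_dpc + emr + t_emr", data=df).fit()
print(fit.summary())
```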
The aim of this study was to use discrete event simulation (DES) to model the impact of two universal suicide risk screening scenarios (emergency department [ED] and hospital-wide) on mean length of stay (LOS), wait times, and overflow of our secure patient care unit for patients being evaluated for a behavioral health complaint (BHC) in the ED of a large, academic children's hospital.
We developed a conceptual model of BHC patient flow through the ED, incorporating anticipated system changes with both universal suicide risk screening scenarios. Retrospective site-specific patient tracking data from 2017 were used to generate model parameters and validate model output metrics with a random 50/50 split for derivation and validation data.
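To illustrate the modeling approach, here is a minimal discrete event simulation of secure-unit patient flow in Python with SimPy. The bed capacity, arrival rate, and evaluation time are invented placeholders, not the site-specific parameters estimated from the tracking data.

```python
import random
import simpy  # pip install simpy

SECURE_BEDS = 4        # assumed secure-unit capacity
ARRIVAL_MEAN_H = 6.0   # assumed mean hours between BHC arrivals
EVAL_MEAN_H = 5.0      # assumed mean evaluation length in hours
overflow_hours = 0.0

def patient(env, unit):
    """A BHC patient waits for a secure bed, then occupies it for the evaluation."""
    global overflow_hours
    arrival = env.now
    with unit.request() as bed:
        yield bed
        wait = env.now - arrival
        if wait > 0:                  # the patient boarded outside the unit
            overflow_hours += wait
        yield env.timeout(random.expovariate(1 / EVAL_MEAN_H))

def arrivals(env, unit):
    while True:
        yield env.timeout(random.expovariate(1 / ARRIVAL_MEAN_H))
        env.process(patient(env, unit))

env = simpy.Environment()
unit = simpy.Resource(env, capacity=SECURE_BEDS)
env.process(arrivals(env, unit))
env.run(until=365 * 24)               # one simulated year, in hours
print(f"Total overflow boarding time: {overflow_hours:.1f} hours")
```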
The model predicted small increases (less than 1 hour) in LOS and wait times for BHC patients in both universal screening scenarios. However, the number of days per year on which the ED experienced secure unit overflow increased (existing system, 52.9 days [95% CI, 51.5-54.3 days]; ED screening, 94.4 days [95% CI, 92.6-96.2 days]; hospital-wide screening, 276.9 days [95% CI, 274.8-279.0 days]).
The DES model predicted that implementation of either universal suicide risk screening scenario would not severely impact LOS or wait times for BHC patients in our ED. However, universal screening would greatly stress our existing ED capacity to care for BHC patients in secure, dedicated patient areas by creating more overflow.
De-identifying protected health information (PHI) in medical documents is important, and a prerequisite to de-identification is the identification of PHI entity names in clinical documents. This study aimed to compare the performance of three pre-training models that have recently attracted significant attention and to determine which model is most suitable for PHI recognition.
We compared the PHI recognition performance of deep learning models using the i2b2 2014 dataset. We used three pre-training models, namely bidirectional encoder representations from transformers (BERT), a robustly optimized BERT pre-training approach (RoBERTa), and XLNet (a model built on Transformer-XL), to detect PHI. After tokenization, the dataset was labeled using an inside-outside-beginning (IOB) tagging scheme and WordPiece-tokenized for input to these models, and the PHI recognition performance of BERT, RoBERTa, and XLNet was then investigated.
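The sketch below illustrates IOB labeling and WordPiece tokenization with the Hugging Face transformers library. The example sentence and tag set are invented, and propagating each word's tag to all of its sub-tokens is one common convention rather than the paper's exact procedure.

```python
from transformers import BertTokenizerFast  # pip install transformers

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")

# Invented example: pre-split words with IOB PHI tags.
words = ["Dr.", "Jane", "Doe", "was", "seen", "on", "01/02/2014"]
tags = ["O", "B-NAME", "I-NAME", "O", "O", "O", "B-DATE"]
label2id = {t: i for i, t in enumerate(sorted(set(tags)))}

enc = tokenizer(words, is_split_into_words=True)

# Propagate each word's tag to all of its WordPiece sub-tokens;
# special tokens ([CLS], [SEP]) get the ignore index -100.
aligned = [-100 if wid is None else label2id[tags[wid]] for wid in enc.word_ids()]

print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
print(aligned)
```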
A comparison of the PHI recognition performance of the three models confirmed that XLNet achieved a superior F1-score of 96.29%. In addition, in the entity-level performance evaluation, RoBERTa and XLNet showed a 30% improvement over BERT.
Among the pre-training models used in this study, XLNet exhibited superior performance because its word representations were well constructed using the two-stream self-attention mechanism. In addition, compared to BERT, RoBERTa and XLNet showed superior performance, indicating that they were more effective in grasping context.
Smart hospitals involve the application of recent information and communications technology (ICT) innovations to medical services; however, the concept of a smart hospital has not been rigorously defined. In this study, we aimed to derive the definition and service types of smart hospitals and investigate cases of each type.
A literature review was conducted regarding the background and technical characteristics of smart hospitals. On this basis, we conducted a focus group interview with experts in hospital information systems, and ultimately derived eight smart hospital service types.
Smart hospital services can be classified into the following types: services based on location recognition and tracking technology that measure and monitor the location of an object via short-range communication; high-speed communication network-based services built on new wireless communication technology; Internet of Things-based services that connect objects embedded with sensors and communication functions to the internet; mobile health services delivered through devices such as mobile phones, tablets, and wearables; artificial intelligence-based services for the diagnosis and prediction of diseases; robot services provided on behalf of humans in various medical fields; extended reality services that apply hyper-realistic immersive technology to medical practice; and telehealth services using ICT.