Hollisrooney6047


The feasibility of electroencephalography (EEG) analysis for evaluating the mental workload of gaming was studied by carrying out a proof-of-concept experiment on a set of EEG recordings, using a bespoke tool developed for the purpose. The EEG recordings (20 in total) used in the experiment had been acquired by groups of students and staff of Tampere University during n-back gaming sessions, as part of course projects. The ratio of theta and alpha power, calculated over EEG signal segments time-locked to game events, was selected as the EEG metric for mental load evaluation. In addition, the phase locking value (PLV) was calculated for all pairs of EEG channels to assess the change in phase synchronization with increasing game difficulty (a minimal sketch of both metrics is given after this passage). The Wilcoxon rank-sum test was used to compare the metrics between the levels of the game (from 1-back to 4-back). The test results revealed that the theta-alpha power ratio calculated from the frontal derivations Fp1 and Fp2 served as a reliable indicator for evaluating and comparing mental load. Phase locking between EEG derivations was also found to become stronger with increasing game difficulty, especially for channel pairs whose electrodes were located in opposite hemispheres.

Mitral valve prolapse (MVP) is a cardiovascular valve abnormality that occurs due to the stretching of the mitral valve leaflets and develops in around 2 percent of the population. MVP is usually detected via auscultation and diagnosed with an echocardiogram, which is an expensive procedure. The characteristic auscultatory finding in MVP is a mid-to-late systolic click, usually followed by a high-pitched systolic murmur. These can be easily detected on a phonocardiogram (PCG), which is a graphical representation of the auscultatory signal. In this paper, we propose a method to automatically identify patterns in the PCG that can help in diagnosing MVP as well as in monitoring its progression into mitral regurgitation. In the proposed methodology, the systolic part, which is the region of interest here, is isolated by preprocessing and by thresholding the Teager-Kaiser energy envelope of the signal. Scalogram images of the systolic part are obtained by applying the continuous wavelet transform, and these scalograms are used to train a convolutional neural network (CNN). A two-layer CNN could identify the event patterns with nearly 100% accuracy on test datasets of varying sizes (20%-40% of the entire data). The proposed method shows potential for the quick screening of MVP patients.

Stroke is one of the main causes of disability in human beings, and when the occipital lobe is affected, this leads to partial vision loss (homonymous hemianopia). To understand the brain mechanisms of vision loss and recovery, graph theory-based brain functional connectivity network (FCN) analysis was recently introduced. However, few brain network studies have examined whether the strength of the damaged FCN can predict the extent of functional impairment. We characterized the brain FCN using deep neural network analysis to describe multiscale brain networks and explore their corresponding physiological patterns. In a group of 24 patients and 24 controls, a bi-directional long short-term memory (Bi-LSTM) network was evaluated to reveal its efficiency in learning cortical network patterns, compared with other traditional algorithms.
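Referring back to the n-back workload metrics above, the following is a minimal sketch of how the theta-alpha power ratio and the phase locking value can be computed with NumPy/SciPy. It is not the bespoke tool used in the study; the function names, filter order, and band limits are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, welch
from scipy.stats import ranksums

def band_power(x, fs, lo, hi):
    """Power of a 1-D signal in the [lo, hi] Hz band, estimated from the Welch PSD."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])

def theta_alpha_ratio(x, fs):
    """Theta (4-8 Hz) to alpha (8-13 Hz) power ratio of one event-locked segment."""
    return band_power(x, fs, 4, 8) / band_power(x, fs, 8, 13)

def plv(x, y, fs, lo=4, hi=13):
    """Phase locking value between two channels, band-limited with a Butterworth filter."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Per-level comparison as in the study: collect the ratio for every event-locked
# Fp1/Fp2 segment at two game levels and apply the Wilcoxon rank-sum test, e.g.
# stat, p = ranksums(ratios_1back, ratios_4back)
```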
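For the Bi-LSTM classifier just described, the text does not give architecture details; the sketch below shows one plausible PyTorch formulation in which a sequence of per-window functional-connectivity features is fed to a bidirectional LSTM and the final states of both directions drive a patient-versus-control decision. All layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # forward + backward states

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)             # out: (batch, time, 2 * hidden)
        return self.head(out[:, -1, :])   # logits from the last time step

# Example: 16 subjects, 50 time windows, 128 assumed connectivity features each
logits = BiLSTMClassifier(n_features=128)(torch.randn(16, 50, 128))
```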
Bi-LSTM achieved the best balanced overall accuracy of 73%, with a sensitivity of 70% and a specificity of 75%, in the low alpha band. This demonstrates that bi-directional learning can capture the brain network feature representation of both hemispheres. It also shows that brain damage leads to reorganized FCN patterns with a greater number of functional connections of intermediate density in the high alpha band. Future studies should explore how this understanding of the brain FCN can be used for clinical diagnostics and rehabilitation.

Osteoporosis is a metabolic osteopathy syndrome, and its incidence increases significantly with age. Bone quantitative ultrasound (QUS) is currently considered a potential method for screening and diagnosing osteoporosis; however, its diagnostic accuracy is quite low. By contrast, deep learning based methods have shown great power in extracting the most discriminative features from complex data. To improve osteoporosis diagnostic accuracy and take advantage of QUS, we devise a deep learning method based on the ultrasound radio frequency (RF) signal. Specifically, we construct a multi-channel convolutional neural network (MCNN) combined with a sliding-window scheme, which also increases the amount of data. Using speed of sound (SOS), the quantitative results of our preliminary study indicate that the proposed osteoporosis diagnosis method outperforms conventional ultrasound methods and may assist clinicians in osteoporosis screening.

The use of a large and diversified ground-truth synthetic fNIRS dataset enables researchers to objectively validate and compare data analysis procedures. In this work, we describe each step of the synthetic data generation workflow and provide tools to generate the dataset.

This study presents the implementation of a within-subject classification method, based on linear discriminant analysis (LDA) and support vector machines (SVM), for the classification of hemodynamic responses. Using a synthetic dataset that closely resembles real experimental infant functional near-infrared spectroscopy (fNIRS) data, the impact of different noise levels and different HRF amplitudes on the classification performance of the two classifiers is quantitatively investigated.

Individuals with autism spectrum disorder (ASD) are known to have significantly limited social interaction abilities, which often manifest in non-verbal cues of communication such as facial expressions and atypical eye-gaze responses. While prior works leveraged the role of pupil response for screening ASD, little work has been carried out on the influence of emotion stimuli on pupil response for ASD screening. In this paper, we design, develop, and evaluate a lightweight LSTM (long short-term memory) model that captures pupil responses (pupil diameter, fixation duration, and fixation location) during social interaction with a virtual agent and detects ASD sessions based on short interactions (a sketch of such a model appears after this passage). Our findings demonstrate that all the pupil responses vary significantly in ASD sessions in response to the different emotion stimuli applied (angry, happy, neutral). These findings support ASD screening with an average accuracy of 77%, and the accuracy improves further (>80%) for the angry and happy emotion stimuli.

Tinnitus is characterized by the perception of a sound without any physical source causing the symptom.
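Returning to the ASD screening model described above: the text specifies a lightweight LSTM over pupil responses but not its exact layout, so the following PyTorch sketch is only an assumed formulation, with an illustrative feature count and hidden size (fixation location is treated as x/y coordinates).

```python
import torch
import torch.nn as nn

class PupilLSTM(nn.Module):
    """Classifies a session from a sequence of pupil features
    (diameter, fixation duration, fixation x, fixation y)."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # ASD vs. non-ASD session

    def forward(self, x):            # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)     # h: (1, batch, hidden)
        return self.head(h[-1])      # session-level logits

# Example: a batch of 8 short interaction sessions, 200 time steps each
logits = PupilLSTM()(torch.randn(8, 200, 4))
```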
Symptom profiles of tinnitus patients are characterized by large heterogeneity, which is a major obstacle to developing general treatments for this chronic disorder. As tinnitus patients often report severe constraints in their daily life, the lack of general treatments poses such a challenge that patients crave any kind of promising method to cope with their tinnitus, even if it is not based on evidence. Another drawback is the lack of objective measurements to determine the individual symptoms of patients. Many data sources are therefore being investigated to learn more about the heterogeneity of tinnitus patients, in order to develop methods that measure the individual situation of patients more objectively. As research assumes that tinnitus is caused by processes in the brain, electroencephalography (EEG) data are heavily investigated by researchers. Following this, we address the question of whether EEG data can be used to classify tinnitus using a deep neural network. For this purpose, we analyzed 16,780 raw EEG samples from 42 subjects (divided into tinnitus patients and a control group), with a duration of one second per sample. Four different procedures for automated preprocessing (with or without noise reduction, combined with down-sampling or up-sampling) were used and compared. Subsequently, a neural network was trained to classify whether a sample belongs to a tinnitus patient or to the control group. We obtain a maximum test-set accuracy of 75.6% using noise reduction and down-sampling. Our findings highlight the potential of deep learning approaches to detect EEG patterns of tinnitus patients, which are difficult for humans to recognize.

Image decoding using the electroencephalogram (EEG) has become a new topic for brain-computer interface (BCI) studies in recent years. Previous studies often tried to decode EEG signals modulated by pictures of complex objects; however, it is still unclear how a simple image presented at different positions and orientations influences the EEG signal. To this end, this study used the same white bar in eight different spatial configurations as visual stimuli. A convolutional neural network (CNN) combined with long short-term memory (LSTM) was employed to decode the corresponding EEG signals (an assumed sketch of such a model is given after this passage). Four subjects were recruited for this study. The highest binary classification accuracies reached 97.2%, 95.7%, 90.2%, and 88.3% for the four subjects, respectively, and almost all subjects achieved more than 70% for 4-class classification. The results demonstrate that basic graphic shapes are decodable from EEG signals, which holds promise for image decoding in EEG-based BCIs.

Traditional marketing research tools (personal depth interviews, surveys, FGDs, etc.) are cost-prohibitive and often criticized for not extracting true consumer preferences. Neuromarketing tools promise to overcome such limitations. In this study, we propose a framework, MarketBrain, to predict consumer preferences. In our experiment, we administered marketing stimuli (five products with endorsements), collected EEG signals with an EMOTIV EPOC+ headset, and used signal processing and classification algorithms to develop the prediction system. The wavelet packet transform was used to extract frequency bands (δ, θ, α, β1, β2, γ), and statistical features were then extracted for classification (sketched below). Among the classifiers, the support vector machine (SVM) achieved the best accuracy (96.01 ± 0.71) using 5-fold cross-validation.
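For the CNN + LSTM image-decoding model referenced above, a hedged PyTorch sketch is shown below; the channel count, kernel sizes, and eight-class output are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class CNNLSTMDecoder(nn.Module):
    """Temporal CNN over single-trial EEG epochs followed by an LSTM over the
    resulting feature sequence; the final hidden state is classified."""
    def __init__(self, n_channels=32, n_classes=8, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, n_channels, samples)
        feats = self.conv(x)               # (batch, 16, samples // 4)
        feats = feats.permute(0, 2, 1)     # LSTM expects (batch, time, features)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])

logits = CNNLSTMDecoder()(torch.randn(4, 32, 500))  # 4 trials, 32 channels, 500 samples
```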
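The MarketBrain pipeline just described combines wavelet packet features with an SVM; the sketch below shows one way such a pipeline could be assembled with pywt and scikit-learn. The wavelet, decomposition level, and feature set are assumptions, and mapping packet nodes to the δ-γ bands would depend on the sampling rate.

```python
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wpt_features(trial, wavelet="db4", level=5):
    """Mean, std and energy of each frequency-ordered wavelet-packet node
    of a single-channel EEG trial."""
    wp = pywt.WaveletPacket(data=trial, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="freq"):
        c = node.data
        feats += [c.mean(), c.std(), np.sum(c ** 2)]
    return np.array(feats)

def classify(epochs, labels):
    """epochs: (n_trials, n_samples) array; labels: preference label per trial."""
    X = np.vstack([wpt_features(t) for t in epochs])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, labels, cv=5)  # 5-fold CV, as in the study
```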
The results also suggested that the specific target consumers and the endorser's appearance affect the prediction of preference. It is thus evident that EEG-based neuromarketing tools can help brands and businesses effectively predict future consumer preferences, which may lead to the development of intelligent market-driving systems for neuromarketing applications.

This study investigated the effects of different centers of mass (COM) of a grasping device and of visual time delay on the information interaction between brain regions during a five-finger grasping process. Nine healthy right-handed subjects used five fingers to grasp a special device in a virtual reality (VR) environment. Two independent variables were set in the experiment: the COM of the grasping device and the visual delay time. A 50 g mass was placed randomly at one of five different positions on the base of the grasping device, and the three levels of visual delay time appeared in random order. Kinematic, dynamic, and electroencephalogram (EEG) signals were recorded during the experiment. The brain network was constructed based on a multiplex horizontal visibility graph (MHVG); interlayer mutual information (MI) and the phase locking value (PLV) were calculated to quantify the network, while the clustering coefficient (C), shortest path length (L), and overall network efficiency (E) were selected to quantify the network characteristics (a simplified single-layer sketch follows this paragraph). Statistical results show that when the mass was located on the radial side, C and E during the load phase of grasping were significantly higher than for the proximal, ulnar, and medial sides, and L was significantly lower than for the proximal and radial sides.
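The grasping study quantifies a multiplex horizontal visibility graph; as a simplified, single-layer illustration, the sketch below builds a horizontal visibility graph from one EEG segment with networkx and computes the three measures mentioned (C, L, E). The multiplex and interlayer parts (MI, PLV across layers) are omitted here.

```python
import networkx as nx
import numpy as np

def horizontal_visibility_graph(x):
    """Nodes are samples of x; i and j are linked if every sample strictly between
    them is lower than both x[i] and x[j] (the horizontal visibility criterion)."""
    g = nx.Graph()
    g.add_nodes_from(range(len(x)))
    for i in range(len(x) - 1):
        highest_between = -np.inf
        for j in range(i + 1, len(x)):
            if highest_between < min(x[i], x[j]):
                g.add_edge(i, j)
            highest_between = max(highest_between, x[j])
            if x[j] >= x[i]:          # no later sample can still "see" node i
                break
    return g

def network_measures(g):
    """Clustering coefficient, shortest path length, and overall (global) efficiency."""
    return {
        "C": nx.average_clustering(g),
        "L": nx.average_shortest_path_length(g),  # an HVG is always connected
        "E": nx.global_efficiency(g),
    }

# Example: measures for a single EEG segment (one layer of the multiplex graph)
measures = network_measures(horizontal_visibility_graph(np.random.randn(256)))
```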

Article authors: Hollisrooney6047 (MacKay Hess)