Livingstonjunker0261

From Iurium Wiki

Revision as of 15:41, 6 November 2024 by Livingstonjunker0261 (talk | contribs) (New page created with text "The study reports the performance of Parkinson's disease (PD) patients to operate Motor-Imagery based Brain-Computer Interface (MI-BCI) and compares three…")

The study reports on the ability of Parkinson's disease (PD) patients to operate a Motor-Imagery-based Brain-Computer Interface (MI-BCI) and compares three selected pre-processing and classification approaches. The experiment was conducted on 7 PD patients who performed a total of 14 MI-BCI sessions targeting the lower extremities. EEG was recorded during the initial calibration phase of each session, and the specific BCI models were produced using Spectrally weighted Common Spatial Patterns (SpecCSP), Source Power Comodulation (SPoC) and Filter-Bank Common Spatial Patterns (FBCSP). The results showed that FBCSP outperformed SPoC in terms of accuracy, and outperformed both SPoC and SpecCSP in terms of false-positive ratio. The study also demonstrates that PD patients were capable of operating an MI-BCI, although with lower accuracy.

To explore the effect of low-frequency stimulation on pupil size and the electroencephalogram (EEG), we presented subjects with a 1-6 Hz black-and-white alternating flickering stimulus and compared pupil size and visual evoked potentials (VEPs) in terms of signal-to-noise ratio (SNR) and classification performance. The results showed that the SNR of the pupillary response peaked at 1 Hz (17.19 ± 0.10 dB), where 100% accuracy was obtained with a data length of 1 s, while performance was poor at stimulation frequencies above 3 Hz. In contrast, the SNR of the VEPs peaked at 6 Hz (18.57 ± 0.37 dB), and 100% accuracy was reached at every stimulus frequency with a minimum data length of 1.5 s. This study lays a theoretical foundation for the further implementation of a hybrid brain-computer interface (BCI) that integrates pupillometry and EEG.

Studies have shown the possibility of using brain signals that are automatically generated while observing a navigation task as feedback for semi-autonomous control of a robot. This allows the robot to learn quasi-optimal routes to intended targets.
We combined the subclassification of two different types of navigational errors with the subclassification of two different types of correct navigational actions to create a 4-way classification strategy, providing detailed information about the type of action the robot performed. We used a two-stage stepwise linear discriminant analysis approach and tested it on brain signals from 8 and 14 participants observing two robot navigation tasks. Classification results were significantly above chance level, with mean overall accuracies of 44.3% and 36.0% for the two datasets. As a proof of concept, we have shown that fine-grained, 4-way classification of robot navigational actions is possible based on the electroencephalogram responses of participants who only had to observe the task. This study provides the next step towards comprehensive implicit brain-machine communication and towards an efficient semi-autonomous brain-computer interface.

In the design of brain-machine interfaces (BMI), because the number of electrodes that still yield neural spike signals slowly declines over time, it is important to be able to decode with fewer units. We trained a monkey to control a cursor smoothly in a two-dimensional (2D) center-out task using spiking activities from only two units (direct units). At the same time, we studied how the direct units changed their preferred-direction tuning during BMI training and explored the underlying mechanism by which the monkey learned to control the cursor with these neural signals. In this study, we observed that both direct units slowly changed their preferred directions during BMI learning. Although the initial angles between the preferred directions of the 3 pairs of units differed, the angle between the preferred directions approached 90 degrees by the end of training. Our results imply that BMI learning made the two units independent of each other.
To our knowledge, this is the first demonstration that only two units can be used to control 2D cursor movements. Moreover, the orthogonalization of the two units' activities driven by BMI learning implies that the plasticity of the motor cortex can provide an efficient strategy for motor control.

The success of deep learning (DL) methods in the Brain-Computer Interface (BCI) field for classification of electroencephalographic (EEG) recordings has been restricted by the lack of large datasets. Privacy concerns associated with EEG signals limit the possibility of constructing a large EEG-BCI dataset by conglomerating multiple small ones for jointly training machine learning models. Hence, in this paper, we propose a novel privacy-preserving DL architecture named federated transfer learning (FTL) for EEG classification, based on the federated learning framework. Working with the single-trial covariance matrix, the proposed architecture extracts common discriminative information from multi-subject EEG data with the help of domain adaptation techniques. We evaluate the performance of the proposed architecture on the PhysioNet dataset for 2-class motor imagery classification. While avoiding actual data sharing, our FTL approach achieves 2% higher classification accuracy in a subject-adaptive analysis. Also, in the absence of multi-subject data, our architecture provides 6% better accuracy than other state-of-the-art DL architectures.

The concept of 'presence' in the context of virtual reality (VR) refers to the experience of being in the virtual environment even when one is physically situated in the real world. It is therefore a key parameter for assessing a VR system, on the basis of which improvements can be made.
To overcome the limitations of existing methods based on standard questionnaires and behavioral analysis, this study investigates the suitability of users' biosignals for deriving an objective measure of presence. The approach includes experiments conducted on 20 users, recording EEG, ECG and electrodermal activity (EDA) signals while they experienced custom-designed VR scenarios with factors contributing to presence suppressed and unsuppressed. Mutual-information-based feature selection, followed by paired t-tests to identify significant variations in biosignal features when each factor of presence was suppressed, revealed significant (p < 0.05) differences in the mean values of EEG signal power and coherence within the alpha, beta and gamma bands, distributed across specific regions of the brain.
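
The SNR figures in decibels quoted above (e.g. 17.19 dB for the pupillary response at 1 Hz) can in principle be reproduced by comparing power at the stimulation frequency against power at neighbouring frequencies. The following is a minimal sketch, assuming SNR is defined as the ratio of power at the target frequency to the mean power at a chosen set of off-target frequencies; the exact noise-band definition used in the studies above may differ, and the function names here are illustrative only.

```python
import math

def goertzel_power(x, fs, f):
    """Power of signal x (sampled at fs Hz) at frequency f, via the Goertzel algorithm.

    Evaluates a single DFT bin, so f is rounded to the nearest bin of len(x) samples.
    """
    n = len(x)
    k = int(round(f * n / fs))
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # |X[k]|^2 recovered from the final two filter states
    return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2

def snr_db(x, fs, f_target, noise_freqs):
    """SNR in dB: power at the target frequency vs. mean power at off-target frequencies."""
    p_signal = goertzel_power(x, fs, f_target)
    p_noise = sum(goertzel_power(x, fs, f) for f in noise_freqs) / len(noise_freqs)
    return 10.0 * math.log10(p_signal / max(p_noise, 1e-12))

# Example: 1 s of a 1 Hz response (amplitude 1) plus a weak 7 Hz component (amplitude 0.1)
fs = 250
x = [math.sin(2 * math.pi * 1.0 * t / fs) + 0.1 * math.sin(2 * math.pi * 7.0 * t / fs)
     for t in range(fs)]
snr = snr_db(x, fs, 1.0, [3.0, 5.0, 7.0, 9.0])
```

With a power ratio of 100 between the two components, using only the 7 Hz bin as the noise estimate gives 10·log10(100) = 20 dB; averaging over mostly empty neighbouring bins raises the estimate, which is why the choice of noise band matters when comparing SNR values across studies.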

Article authors: Livingstonjunker0261 (Wentworth Whitfield)