Baunsalomonsen7159


Results from simulations using synthetic data and human data from a BCI study show that the enhanced adaptive stimulus selection algorithm can improve spelling speeds relative to conventional BCI stimulus presentation paradigms. Clinical relevance: Increased communication rates with our enhanced adaptive stimulus selection algorithm can potentially facilitate the translation of BCIs into viable communication alternatives for individuals with severe neuromuscular limitations.

Attention, a multi-faceted cognitive process, is essential in our daily lives. Visual attention can be measured using an EEG brain-computer interface to detect different levels of attention in gaming, performance training, and clinical applications. In attention calibration, a Flanker task is used to capture EEG data for the attentive class; for the inattentive class, the subject is instructed not to focus on any specific position on the screen. Attention levels are then classified with a binary classifier trained on these surrogate ground-truth classes. However, subjects may not remain in the desired attention conditions when performing repetitive, boring activities over a long experiment. In this paper we propose attention calibration protocols that use a simultaneous visual search with an audio directional-change paradigm and static white noise as the 'attentive' and 'inattentive' conditions, respectively. To compare the performance of the proposed calibrations against baselines, we collected data from sixteen healthy subjects. For a fair comparison of classification performance, we used six basic EEG band-power features with a standard binary classifier. With the new calibration protocol, we achieved a mean subject accuracy of 74.37 ± 6.56%, about 3.73 ± 2.49% higher than the baseline, although the differences were not statistically significant. According to post-experiment survey results, the new calibrations are more effective at inducing the desired perceived attention levels. Based on these promising results, we will improve the calibration protocols with reliable attention classifier modeling to enable better attention recognition.

Alzheimer's disease (AD) is the most prevalent neurodegenerative disorder and the most common form of dementia in the elderly. Because genetics is an important clinical risk factor for AD, genomic studies such as genome-wide association studies (GWAS) have been widely applied to AD. However, a main shortcoming of the GWAS method is that much of the heritability remains unexplained, which results in low classification or prediction ability when using GWAS analysis alone. Therefore, this paper proposes a novel deep learning genomics approach and applies it to discriminate AD patients from healthy control (HC) subjects. In this study, we selected genotype data of 988 subjects enrolled in the ADNI, including 622 AD patients and 366 HC subjects. The proposed deep learning genomics (DLG) approach is composed of three steps: quality control, SNP genotype coding, and classification. A ResNet framework was used as the DLG model in this study. In the comparative GWAS analysis, APOE ε4 status and the normalized theta values of the significant SNP loci were used as predictors for genetic classification with a support vector machine (SVM). All data were divided into a training & validation group and a test group, and 5-fold cross-validation was repeated 500 times. Finally, we compared the classification results of the DLG model and the traditional GWAS analysis.
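As a concrete illustration of the comparative GWAS baseline described above, the sketch below evaluates an SVM on APOE ε4 status plus normalized SNP theta values with repeated stratified 5-fold cross-validation. The feature matrix, labels, and parameter choices are placeholders for illustration only, not the authors' actual pipeline.

```python
# Minimal sketch of the comparative GWAS baseline: an SVM classifier on
# APOE e4 status plus normalized theta values of significant SNP loci,
# evaluated with repeated 5-fold cross-validation (features and data loading
# are illustrative placeholders, not the authors' pipeline).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_svm_baseline(X, y, n_repeats=500):
    """X: subjects x features (APOE e4 dose + SNP theta values); y: 1 = AD, 0 = HC."""
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=n_repeats, random_state=0)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy", n_jobs=-1)
    return scores.mean(), scores.std()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_demo = rng.normal(size=(200, 12))    # placeholder genotype-derived features
    y_demo = rng.integers(0, 2, size=200)  # placeholder AD/HC labels
    print(evaluate_svm_baseline(X_demo, y_demo, n_repeats=5))
```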
As a result, in the test group the accuracy, sensitivity, and specificity of classification were 71.38% ± 0.63%, 63.13% ± 2.87%, and 85.59% ± 6.66% for the traditional GWAS analysis, versus 92.65% ± 4.80%, 85.00% ± 16.25%, and 97.10% ± 4.38% for the DLG model. Hence, the DLG model achieves higher accuracy and sensitivity when applied to AD. More importantly, we discovered several novel genetic biomarkers of AD, including rs6311 and rs6313 in HTR2A, and rs690705 in RFC3. The roles of these novel loci in AD should be explored in future studies.

Transcorneal electrical stimulation (TES) is a noninvasive approach for activating the retina and its downstream components by applying electric current to the cornea. Although previous studies have demonstrated the clinical relevance of TES for modulating neurons, with improvements in visual evoked potentials (VEPs) and electroretinograms (ERGs), there are still large gaps in our knowledge of its effect on brain structures. To determine the short-term impact as well as the aftereffects of TES on neural oscillatory power in retinal degeneration mice, we performed electrocorticography (ECoG) recordings in the prefrontal and primary visual cortices at different stages of prolonged TES (pTES): a transient stage, following prolonged stimulation (post-stimulation stage 1), and long after the end of retinal stimulation (post-stimulation stage 2), under varying stimulation current amplitudes (400 µA, 500 µA, and 600 µA). The results revealed asymmetric differences between short-term and long-term pTES in the power and activity of cortical oscillations, for example an increase in the activity of oscillations that have been reported to inhibit irrelevant neural processes and enable the brain to focus on more relevant ones, thus inducing better coordination in the cortex.

This paper presents an ultra-low-power mixed-signal neural data acquisition (MSN-DAQ) system that enables a novel low-power hybrid-domain neural decoding architecture for implantable brain-machine interfaces with high channel counts. Implemented in 180 nm CMOS technology, the 32-channel custom chip operates at a 1 V supply voltage and achieves excellent performance, including 1.07 µW/channel, 2.37/5.62 NEF/PEF, and an 88 dB common-mode rejection ratio (CMRR), with a significant back-end power-saving advantage compared to prior works. The fabricated prototype was further evaluated with in vivo human tests at the bedside, and its performance closely follows that of a commercial recording system.

Brain decoding enables humans to interact with an external machine or robot to assist a patient's rehabilitation. The brain's generic object recognition ability can be decoded through multiple neuroimaging modalities such as functional magnetic resonance imaging (fMRI). On the other hand, an external machine may misrecognize objects in distorted, noisy, or blurred images caused by many factors, which deteriorates the performance of brain-machine interaction. To build a better machine, the generalization capability of the human brain is transferred to the classifier to enhance classification accuracy on distorted images. Since the homology between human and machine vision has been demonstrated, an enhanced object recognition method is proposed that decodes neural activity features of fMRI signals into feature units of convolutional neural network layers, integrating brain activity into the classifier to increase classification accuracy.
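One common way to realize such a decoding step is to fit a linear decoder from fMRI voxel responses to CNN-layer feature units. The sketch below uses ridge regression for this purpose; the array shapes, variable names, and regularization value are assumptions chosen for illustration rather than the method reported in the paper.

```python
# Illustrative sketch of decoding fMRI voxel patterns into CNN feature units
# with multi-output ridge regression. All shapes and names are placeholders.
import numpy as np
from sklearn.linear_model import Ridge

def fit_feature_decoder(fmri_train, cnn_feat_train, alpha=1.0):
    """fmri_train: (n_samples, n_voxels); cnn_feat_train: (n_samples, n_units)."""
    decoder = Ridge(alpha=alpha)
    decoder.fit(fmri_train, cnn_feat_train)  # one linear map per feature unit
    return decoder

def decode_features(decoder, fmri_test):
    """Predict CNN-layer feature units for held-out fMRI responses."""
    return decoder.predict(fmri_test)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fmri = rng.normal(size=(120, 500))   # placeholder voxel responses
    feats = rng.normal(size=(120, 64))   # placeholder CNN layer activations
    dec = fit_feature_decoder(fmri[:100], feats[:100])
    pred = decode_features(dec, fmri[100:])
    print(pred.shape)                    # (20, 64) decoded feature units
```

The decoded feature units can then be combined with the image-derived features of the classifier, which is how brain activity is integrated into the recognition pipeline described above.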
Experimental results show that the proposed method enhances the generalization capability of distorted object recognition.

Alzheimer's disease (AD) is the most prevalent neurodegenerative disorder and is considerably determined by genetic factors. Fluorodeoxyglucose positron emission tomography (FDG-PET) can reflect the functional state of glucose metabolism in the brain, and radiomic features of FDG-PET are considered important imaging markers in AD. However, radiomic features are not highly interpretable and, in particular, lack an explanation of the underlying biological and molecular mechanisms. Therefore, this study used radiogenomics analysis to explore prognostic metabolic imaging markers by associating radiomic features with genetic data. We used the FDG-PET images and genotype data of 389 subjects (Cohort B) enrolled in the ADNI, including 109 AD patients, 134 healthy controls (HCs), 72 MCI non-converters (MCI-nc), and 74 MCI converters (MCI-c). Firstly, we performed a genome-wide association study (GWAS) on the genotype data of 998 subjects (Cohort A), including 632 AD patients and 366 HCs after quality control (QC) steps, to identify susceptibility loci as gene features. Secondly, radiomic features were extracted from the preprocessed PET images. Thirdly, a two-sample t-test, a rank-sum test, and the F-score were used as the feature selection step to select effective radiomic features. Fourthly, a support vector machine (SVM) was used to test the ability of the radiomic features to classify HC, MCI, and AD subjects. Finally, we performed Spearman correlation analysis between the genetic data and the radiomic features. As a result, we identified rs429358 and rs2075650 as genome-wide significant signals. The radiomic approach achieved good classification ability, and two prognostic FDG-PET radiomic features in the amygdala were shown to be correlated with the genetic data.

The "screening" trend of modern society has placed a progressively increasing burden on the human visual system, and visual fatigue problems are attracting growing attention. Subjective testing is currently the most widely used measure of visual fatigue; however, its low accuracy has hindered further improvement. Motivated by the idea of weighted scoring, this study investigated the effects of two weighted scales for measuring visual fatigue in screening tasks. Specifically, a questionnaire with 10 items collected from classic scales was administered together with eye-tracking testing in two typical screen visual fatigue experiments, i.e., searching and watching. The subjective scores were then factor-analyzed into three subscales before linear regression analyses were performed, with two previously validated eye-tracking parameters, i.e., fixation frequency and saccade amplitude, as the dependent variables. Finally, two weighted scales were obtained for assessing visual fatigue of varying levels, demonstrating the potential to improve the testing accuracy of visual fatigue through calibration against objective measurements.

We developed a virtual reality (VR)-based gait training system that inpatients can use to train their gait function in a simulated home environment, in order to reduce the risk of falling after discharge. The proposed system simulates the home environment on a head-mounted display, in which the user can walk around freely. The system provides visual feedback in the event of a collision with an indoor object such as a wall or furniture, prompting the user to modify his or her gait pattern.
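The collision-triggered feedback just described can be sketched with a simple geometric check: the user's tracked position is tested against bounding boxes of room objects, and a hit triggers the visual cue. The data structures and geometry below are illustrative assumptions, not the authors' head-mounted-display implementation.

```python
# Minimal sketch of a collision check that could drive visual feedback in a
# simulated room: the user's position is tested against axis-aligned bounding
# boxes of walls and furniture. All names and geometry are illustrative.
from dataclasses import dataclass

@dataclass
class Box:
    name: str
    xmin: float
    xmax: float
    zmin: float
    zmax: float

def check_collision(user_x, user_z, obstacles, margin=0.2):
    """Return the first obstacle whose box (inflated by margin) contains the user."""
    for box in obstacles:
        if (box.xmin - margin <= user_x <= box.xmax + margin and
                box.zmin - margin <= user_z <= box.zmax + margin):
            return box
    return None

if __name__ == "__main__":
    room = [Box("table", 1.0, 2.0, 1.0, 1.8), Box("wall", -0.1, 0.0, 0.0, 5.0)]
    hit = check_collision(1.1, 1.5, room)
    if hit is not None:
        print(f"Collision with {hit.name}: trigger visual feedback")  # e.g., flash an overlay
```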
We first applied the system to healthy young adults and confirmed the usefulness of visual feedback in reducing the walking time and the number of collisions in the simulated room environment. Further, we applied the system to an inpatient with stroke and lower limb paralysis. The patient performed gait training based on a scenario of daily activity using a VR environment that mimicked his house. Five days of training significantly improved the gait and balance functions of the patient. These results suggest that the proposed system fosters attention to the surrounding environment and improves gait function in both healthy participants and patients with stroke. Clinical relevance: This study establishes the importance of visual feedback for VR-based gait training. Additionally, it provides a novel application of VR for gait and balance training in patients with stroke and lower limb paralysis.

Article authors: Baunsalomonsen7159 (Husted Floyd)