Chengkaya4195

The proposed approach of using tactile ERD from the sensory cortex provides an effective way of reducing the calibration time in a somatosensory BCI system.

Tactile stimulation would be especially useful before BCI usage, avoiding excessive fatigue when the mental task is difficult to perform. The tactile ERD approach may find BCI applications for patients or users with preserved afferent pathways.

Melanoma is a fatal skin cancer that is curable, with a dramatically higher survival rate, when diagnosed at an early stage. Learning-based methods hold significant promise for the detection of melanoma from dermoscopic images. However, since melanoma is a rare disease, existing databases of skin lesions predominantly contain highly imbalanced numbers of benign versus malignant samples. In turn, this imbalance introduces substantial bias in classification models due to the statistical dominance of the majority class. To address this issue, we introduce a deep clustering approach based on the latent-space embedding of dermoscopic images. Clustering is achieved using a novel center-oriented margin-free triplet loss (COM-Triplet) enforced on image embeddings from a convolutional neural network backbone. The proposed method aims to form maximally separated cluster centers, as opposed to minimizing classification error, so it is less sensitive to class imbalance. To avoid the need for labeled data, we further propose to implement COM-Triplet based on pseudo-labels generated by a Gaussian mixture model (GMM); a hedged code sketch of this idea appears after the mind-wandering abstract below. Comprehensive experiments show that deep clustering with the COM-Triplet loss outperforms clustering with the standard triplet loss and competing classifiers in both supervised and unsupervised settings.

Ophthalmologists use fundus images to screen for and diagnose eye diseases. However, differences in equipment and between ophthalmologists introduce large variations in fundus image quality. Low-quality (LQ), degraded fundus images easily lead to uncertainty in clinical screening and generally increase the risk of misdiagnosis. Thus, restoration of real fundus images is worth studying. Unfortunately, no real clinical benchmark has been established for this task so far. In this paper, we investigate the real clinical fundus image restoration problem. First, we establish a clinical dataset, Real Fundus (RF), including 120 low- and high-quality (HQ) image pairs. Then we propose a novel Transformer-based Generative Adversarial Network (RFormer) to restore the real degradation of clinical fundus images. The key component in our network is the Window-based Self-Attention Block (WSAB), which captures non-local self-similarity and long-range dependencies; a window-attention sketch is given further below. To produce more visually pleasing results, a Transformer-based discriminator is introduced. Extensive experiments on our clinical benchmark show that the proposed RFormer significantly outperforms state-of-the-art (SOTA) methods. In addition, experiments on downstream tasks such as vessel segmentation and optic disc/cup detection demonstrate that the proposed RFormer benefits clinical fundus image analysis and applications.

Mind-wandering (MW), which is usually defined as a lapse of attention, has negative effects on our daily lives. Therefore, detecting when MW occurs can help prevent those negative outcomes. In this work, we first collected a multi-modal Sustained Attention to Response Task (MM-SART) database for MW detection. Data from eighty-two participants were collected in our dataset.
For each participant, we collected 32-channel electroencephalogram (EEG) signals, photoplethysmography (PPG) signals, galvanic skin response (GSR) signals, eye-tracker signals, and several questionnaires for detailed analyses. We then propose an effective MW detection system based on the collected EEG signals. To explore the non-linear characteristics of the EEG signals, we utilize entropy-based features. The experimental results show that an AUC score of 0.712 can be reached using a random forest (RF) classifier with leave-one-subject-out cross-validation. Moreover, to lower the overall computational complexity of the MW detection system, we propose correlation importance feature elimination (CIFE) along with AUC-based channel selection. By using the two most significant EEG channels, the training time of the classifier can be reduced by 44.16%. By applying CIFE to the feature set, the AUC score can be further improved to 0.725, with only 14.6% of the selection time required by recursive feature elimination (RFE). Finally, the current work can be applied to educational scenarios, especially remote learning systems.
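The exact entropy measures and evaluation pipeline are not spelled out in the abstract above; the minimal Python sketch below illustrates one plausible reading, using per-channel spectral entropy as the feature and a random forest evaluated with leave-one-subject-out cross-validation. The array shapes, sampling rate, and helper names are assumptions for illustration only.

```python
# Hedged sketch: per-channel spectral entropy features + random forest with
# leave-one-subject-out cross-validation. Spectral entropy is one common choice;
# the abstract's exact entropy measures are not reproduced here.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score

def spectral_entropy(sig, fs=256):
    """Shannon entropy of the normalized Welch power spectrum of one signal."""
    _, psd = welch(sig, fs=fs, nperseg=min(len(sig), fs * 2))
    p = psd / psd.sum()
    return -np.sum(p * np.log2(p + 1e-12))

def entropy_features(epochs, fs=256):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels)."""
    return np.array([[spectral_entropy(ch, fs) for ch in trial] for trial in epochs])

def loso_auc(epochs, labels, subjects, fs=256):
    """Mean AUC over leave-one-subject-out folds (assumes both classes occur
    in every held-out subject's trials)."""
    X, y = entropy_features(epochs, fs), np.asarray(labels)
    scores = []
    for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[train], y[train])
        scores.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
    return float(np.mean(scores))
```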
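For the melanoma deep-clustering abstract further above, the exact COM-Triplet formulation is not given; the sketch below is one hedged interpretation of a center-oriented, margin-free triplet-style objective combined with GMM pseudo-labels. The function names and the softplus ranking term are illustrative assumptions, not the paper's definition.

```python
# Hedged sketch of a center-oriented, margin-free triplet-style loss with GMM
# pseudo-labels. An illustration of the idea, not the paper's COM-Triplet.
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

def gmm_pseudo_labels(embeddings, n_classes=2):
    """Fit a GMM on detached embeddings and return hard pseudo-labels."""
    z = embeddings.detach().cpu().numpy()
    labels = GaussianMixture(n_components=n_classes, random_state=0).fit_predict(z)
    return torch.as_tensor(labels, device=embeddings.device)

def center_triplet_loss(embeddings, labels, n_classes=2):
    """Pull each embedding toward its own class center and push it away from the
    nearest other center (no fixed margin). Assumes every class appears in the batch."""
    centers = torch.stack([embeddings[labels == c].mean(dim=0) for c in range(n_classes)])
    d = torch.cdist(embeddings, centers)                 # (batch, n_classes) distances
    idx = torch.arange(len(labels), device=embeddings.device)
    pos = d[idx, labels]                                  # distance to own center
    d_other = d.clone()
    d_other[idx, labels] = float("inf")
    neg = d_other.min(dim=1).values                       # distance to closest other center
    return F.softplus(pos - neg).mean()                   # smooth, margin-free ranking loss
```

In a training loop one would alternate between refitting the GMM on the current embeddings and minimizing this loss; that schedule, like the loss form itself, is an assumption here.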
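Similarly, the RFormer abstract describes a Window-based Self-Attention Block; the PyTorch sketch below shows generic self-attention restricted to non-overlapping spatial windows, which conveys the idea but is not the RFormer implementation (the module name, window size, and residual arrangement are assumptions).

```python
# Hedged sketch: self-attention restricted to non-overlapping ws x ws windows,
# in the spirit of the WSAB described above (not the RFormer implementation).
import torch
import torch.nn as nn

class WindowSelfAttention(nn.Module):
    def __init__(self, dim, window=8, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                        # x: (B, C, H, W); H, W divisible by window
        b, c, h, w = x.shape
        ws = self.window
        # partition the feature map into (B * num_windows, ws * ws, C) token sequences
        t = x.view(b, c, h // ws, ws, w // ws, ws).permute(0, 2, 4, 3, 5, 1)
        t = t.reshape(-1, ws * ws, c)
        q = self.norm(t)
        a, _ = self.attn(q, q, q)                # attention only within each window
        t = t + a                                # residual connection
        # reverse the window partition back to (B, C, H, W)
        t = t.view(-1, h // ws, w // ws, ws, ws, c).permute(0, 5, 1, 3, 2, 4)
        return t.reshape(b, c, h, w)

# Usage: WindowSelfAttention(dim=64)(torch.randn(1, 64, 32, 32))
```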

Hand movement decoding from electroencephalogram (EEG) signals is vital to the rehabilitation and assistance of patients with upper-limb impairments. Few existing studies on hand movement decoding from EEG signals consider distractions; in practice, however, patients can be distracted while using hand movement decoding systems in real life. In this paper, we aim to investigate the effects of cognitive distraction on movement decoding performance.

We first propose a decoding method for hand movement directions that is robust to cognitive distraction, which uses the Riemannian manifold to extract affine-invariant features from EEG signals and a Gaussian Naive Bayes classifier (named RM-GNBC). Then, we use experimental and simulated EEG data recorded with and without distraction to compare the decoding performance of three methods: the proposed method, tangent space linear discriminant analysis (TSLDA), and a baseline method.
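Assuming the third-party pyriemann package is available, one minimal way to realize such a pipeline is to estimate a spatial covariance matrix per trial, project it to the tangent space at the Riemannian mean (an affine-invariant representation), and classify the tangent vectors with Gaussian Naive Bayes. The paper's exact RM-GNBC pipeline may differ; this is a sketch of the general idea.

```python
# Hedged sketch of a Riemannian-manifold feature + Gaussian Naive Bayes pipeline,
# in the spirit of RM-GNBC. Requires the pyriemann package.
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

def build_rm_gnb():
    return make_pipeline(
        Covariances(estimator="oas"),    # SPD covariance matrix per EEG trial
        TangentSpace(metric="riemann"),  # affine-invariant tangent-space projection
        GaussianNB(),                    # Gaussian Naive Bayes on tangent vectors
    )

# X: (n_trials, n_channels, n_samples) EEG epochs, y: movement-direction labels
# clf = build_rm_gnb().fit(X_train, y_train); preds = clf.predict(X_test)
```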

The simulation and experimental results show that the Riemannian […] on other BCI paradigms.

Parkinson's disease (PD) is the second most prevalent neurodegenerative disorder in the world. A prompt diagnosis would enable clinical trials for disease-modifying neuroprotective therapies. Recent research efforts have unveiled imaging and blood markers with the potential to identify PD patients promptly; however, the idiopathic nature of PD makes these tests very hard to scale to the general population. To this end, we need an easily deployable tool that would enable screening for PD signs in the general population. In this work, we propose a new set of features based on keystroke dynamics, i.e., the times required to press and release keyboard keys during typing, and use them to detect PD in an ecologically valid data acquisition setup at the subjects' homes, without requiring any pre-defined task. We compare and contrast existing models presented in the literature and present a new model that applies a convolutional neural network to a new keystroke-dynamics representation: hold-time and flight-time series as a function of key type, together with asymmetry in the time series (a simplified sketch appears near the end of the page). We show that this model achieves an Area Under the Receiver Operating Characteristic curve ranging from 0.80 to 0.83 on a dataset of subjects who actively interacted with their computers for at least 5 months, and that it compares favorably against other state-of-the-art approaches previously tested on keystroke dynamics data acquired with mechanical keyboards.

Microwave-induced thermoacoustic (TA) imaging (MTAI), which exploits dielectric contrast to provide images with high contrast and spatial resolution, holds the potential to serve as an additional means of clinical diagnosis and treatment. However, conventional MTAI usually uses large and heavy metal antennas to radiate pulsed microwaves, making it challenging to image different target areas flexibly. In this work, we present the design and evaluation of a portable microwave-acoustic coaxial TA probe (51 mm × 63 mm × 138 mm) that can flexibly image the region of interest. The TA probe contains two miniaturized, symmetrically distributed Vivaldi antennas (7.5 g) and a 128-element linear ultrasonic transducer. By adjusting the geometry of the antennas and the ultrasonic transducer, the probe's acoustic field and microwave field can be made coaxial, which helps achieve homogeneous microwave illumination and high-sensitivity ultrasonic detection. The practical feasibility of the proposed probe was tested on an in vitro ewe breast and a healthy volunteer. The results demonstrate that the MTAI system with the proposed TA probe can visualize the anatomical structure of a breast tumor in the ewe breast and of a healthy volunteer's breast with resolution in the hundreds of microns (910 μm transverse, 780 μm axial), and that an excellent signal-to-noise ratio can be obtained in deep adipose tissue (10 dB at 6 cm in fat). The miniaturized portable TA probe takes a solid step forward in translating MTAI technology to clinical breast tumor diagnosis.

Dexterous manipulation with a multifingered hand is quite challenging in robotics. One remaining issue is how to achieve compliant behaviors. In this work, we propose a human-in-the-loop learning-control approach for acquiring compliant grasping and manipulation skills for a multifingered robot hand.
This approach takes the depth image of the human hand as input and generates the desired force commands for the robot. A markerless vision-based teleoperation system is used for the task demonstration, and an end-to-end neural network model (i.e., TeachNet) is trained to map the pose of the human hand to the joint angles of the robot hand in real time. To endow the robot hand with compliant, human-like behaviors, an adaptive force control strategy is designed to predict the desired force commands based on the pose difference between the robot hand and the human hand during the demonstration. The force controller is derived from a computational model of the biomimetic control strategy in human motor learning, which allows the control variables (impedance and feedforward force) to be adapted online during the execution of the reference joint angles; a simplified sketch of such adaptation is given below. The simultaneous adaptation of the impedance and feedforward profiles enables the robot to interact with the environment compliantly. Our approach has been verified in both simulation and real-world task scenarios based on a multifingered robot hand (the Shadow Hand), and has shown more reliable performance than the currently widely used position-control mode for obtaining compliant grasping and manipulation behaviors.

Change detection (CD) between heterogeneous images is an increasingly interesting topic in remote sensing. The different imaging mechanisms lead to the failure of homogeneous CD methods on heterogeneous images. To address this challenge, we propose a structure cycle consistency-based image regression method, which consists of two components: the exploration of a structure representation and the structure-based regression. We first construct a similarity-relationship-based graph to capture the structure information of an image; here, a k-selection strategy and an adaptively weighted distance metric are employed to connect each node with its truly similar neighbors (a simplified graph-construction sketch is given below). Then, we conduct the structure-based regression with this adaptively learned graph. More specifically, we transform one image to the domain of the other image via the structure cycle consistency, which yields three types of constraints: a forward transformation term, a cycle transformation term, and a sparse regularization term. Notably, this is not a traditional pixel-value-based image regression but an image structure regression […].
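As a rough stand-in for the structure representation described in the change-detection abstract above, the sketch below builds a k-nearest-neighbour similarity graph over non-overlapping image patches with locally scaled Gaussian edge weights. The patch size, the fixed k, and the self-tuning weighting are simplifications; the paper's k-selection strategy and adaptively weighted distance metric are not reproduced.

```python
# Hedged sketch: k-NN similarity graph over image patches with locally scaled
# (self-tuning) Gaussian edge weights. Intended for a modest number of patches,
# since all pairwise distances are formed explicitly.
import numpy as np

def patch_features(img, patch=5):
    """Flatten non-overlapping patch x patch blocks of a 2-D image into row vectors."""
    h, w = img.shape
    h, w = h - h % patch, w - w % patch
    blocks = img[:h, :w].reshape(h // patch, patch, w // patch, patch)
    return blocks.transpose(0, 2, 1, 3).reshape(-1, patch * patch)

def knn_graph(feats, k=10):
    """Directed k-NN adjacency; symmetrize afterwards if an undirected graph is needed."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                       # a node is not its own neighbour
    idx = np.argsort(d, axis=1)[:, :k]                # k most similar neighbours per node
    sigma = d[np.arange(len(d)), idx[:, -1]] + 1e-12  # local scale: distance to k-th neighbour
    W = np.zeros_like(d)
    rows = np.repeat(np.arange(len(d)), k)
    cols = idx.ravel()
    W[rows, cols] = np.exp(-(d[rows, cols] ** 2) / (sigma[rows] * sigma[cols]))
    return W
```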
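For the adaptive force control described in the robot-hand abstract above, the single-joint sketch below illustrates the general pattern of error-driven adaptation of stiffness, damping, and feedforward force; the gains, the combined-error signal, and the forgetting term are illustrative assumptions, not the controller used in the work.

```python
# Hedged 1-DoF sketch of adaptive impedance + feedforward force control: stiffness,
# damping, and feedforward force are adapted online from the tracking error, in the
# spirit of biomimetic motor-learning controllers. All gains are illustrative only.
import numpy as np

def adaptive_force_step(q, dq, q_ref, dq_ref, state,
                        alpha_k=5.0, alpha_d=0.5, alpha_f=2.0, gamma=0.01):
    """One control step; `state` holds the adapted stiffness K, damping D and
    feedforward force f, which persist across steps."""
    e, de = q_ref - q, dq_ref - dq                 # position / velocity tracking errors
    eps = e + 0.5 * de                             # combined tracking-error signal
    # error-driven adaptation with a slow forgetting term (gamma)
    state["K"] += alpha_k * abs(eps) - gamma * state["K"]
    state["D"] += alpha_d * abs(eps) - gamma * state["D"]
    state["f"] += alpha_f * eps - gamma * state["f"]
    u = state["K"] * e + state["D"] * de + state["f"]   # commanded joint force/torque
    return u, state

# state = {"K": 1.0, "D": 0.1, "f": 0.0}
# u, state = adaptive_force_step(q, dq, q_ref, dq_ref, state)
```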
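Finally, for the keystroke-dynamics abstract further above, the PyTorch sketch below shows a small 1-D CNN over fixed-length hold-time and flight-time sequences treated as two input channels. It is a simplified stand-in: the paper's architecture, key-type conditioning, and asymmetry features are not reproduced, and the layer sizes are assumptions.

```python
# Hedged sketch: a small 1-D CNN over hold-time and flight-time sequences
# (two input channels) producing a single PD-vs-control logit.
import torch
import torch.nn as nn

class KeystrokeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1),                 # global pooling over the time axis
        )
        self.head = nn.Linear(32, 1)                 # single logit: PD vs. control

    def forward(self, x):                            # x: (batch, 2, seq_len) in seconds
        return self.head(self.features(x).squeeze(-1))

# model = KeystrokeCNN()
# logits = model(torch.randn(8, 2, 200))             # probabilities via torch.sigmoid(logits)
```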

Article authors: Chengkaya4195 (Alston Pearce)