Brittnieves2049


Automatic seizure prediction promotes the development of closed-loop treatment systems for intractable epilepsy. In this study, a convolutional neural network (CNN) was combined with the directed transfer function (DTF) to build a patient-specific seizure prediction method that considers the information exchange between EEG channels from the perspective of whole-brain activity. First, intracranial electroencephalogram (iEEG) signals were segmented and information-flow features were computed with the DTF algorithm. These features were then arranged into channel-frequency maps indexed by channel pair and information-flow frequency. Finally, the maps were fed into the CNN model, and its outputs were post-processed with a moving-average filter to predict epileptic seizures. Under cross-validation, the proposed algorithm achieved an average sensitivity of 90.8% and an average false-prediction rate of 0.08 per hour. Compared with a random predictor and other existing algorithms tested on the Freiburg EEG dataset, the proposed method performed better for seizure prediction in all patients. These results demonstrate that the proposed algorithm can provide a robust seizure prediction solution by using deep learning to capture brain-network changes in the iEEG signals of epileptic patients.
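The pipeline described above (segment the iEEG, compute DTF features, arrange them into channel-frequency maps, classify with a CNN, smooth the outputs) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the MVAR model order, sampling rate, frequency grid, and helper names (fit_mvar, dtf, moving_average) are all chosen here for demonstration, and the CNN itself is omitted.

```python
import numpy as np

def fit_mvar(x, p):
    """Least-squares fit of an order-p multivariate autoregressive (MVAR) model.
    x: (n_channels, n_samples) iEEG segment. Returns coefficients A with shape
    (p, n, n), where A[k, i, j] weights channel j at lag k+1 when predicting i."""
    n, T = x.shape
    Y = x[:, p:]                                                  # regression targets
    Z = np.vstack([x[:, p - k:T - k] for k in range(1, p + 1)])   # lagged regressors
    A, *_ = np.linalg.lstsq(Z.T, Y.T, rcond=None)
    return A.T.reshape(n, p, n).transpose(1, 0, 2)

def dtf(A, fs, freqs):
    """Normalized DTF: gamma[f, i, j] is the information inflow from j to i at f."""
    p, n, _ = A.shape
    gamma = np.empty((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        # Spectral MVAR matrix: A(f) = I - sum_k A_k exp(-2*pi*i*f*k/fs)
        Af = np.eye(n, dtype=complex)
        for k in range(p):
            Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        H = np.linalg.inv(Af)                     # spectral transfer matrix
        power = np.abs(H) ** 2
        gamma[fi] = power / power.sum(axis=1, keepdims=True)  # row-normalized DTF^2
    return gamma

# One windowed iEEG segment -> a channel-frequency map for the CNN:
# rows are ordered channel pairs, columns are frequencies.
rng = np.random.default_rng(0)
segment = rng.standard_normal((6, 1024))          # e.g. 6 channels, 4 s at 256 Hz
gamma = dtf(fit_mvar(segment, p=5), fs=256, freqs=np.arange(1, 65))
cf_map = gamma.transpose(1, 2, 0).reshape(-1, gamma.shape[0])   # (pairs, freqs)

# Moving-average post-processing of the per-window CNN output probabilities.
def moving_average(probs, k=5):
    return np.convolve(probs, np.ones(k) / k, mode="valid")
```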
Several studies have demonstrated that functional magnetic resonance imaging (fMRI) signals in early visual cortex can be used to reconstruct 2-dimensional (2D) visual contents. However, it remains unknown how to reconstruct 3-dimensional (3D) visual stimuli from fMRI signals in visual cortex. 3D visual stimuli contain 2D visual features as well as depth information, and binocular disparity is an important cue for depth perception; reconstructing 3D visual stimuli from the fMRI signals of visual cortex is therefore more challenging than reconstructing 2D stimuli. This study aimed to reconstruct 3D visual images by constructing three decoding models (contrast-decoding, disparity-decoding, and contrast-disparity-decoding) and testing them with fMRI data from humans viewing 3D contrast images. The results revealed that 3D contrast stimuli can be reconstructed from visual cortex activity. The early visual regions (V1, V2) showed a predominant advantage in reconstructing the contrast in 3D images for the contrast-decoding model, while the dorsal visual regions (V3A, V7, and MT) showed a predominant advantage in decoding the disparity for the disparity-decoding model. The combination of the early and dorsal visual regions showed a predominant advantage in decoding both contrast and disparity for the contrast-disparity-decoding model. These results suggest that the contrast and disparity in 3D images are mainly represented in the early and dorsal visual regions, respectively, and that the two visual systems may interact with each other to decode 3D-contrast images.

A brainprint is a new type of biometric in the form of EEG, directly linked to intrinsic identity. Currently, most methods for brainprint recognition are based on traditional machine learning and focus on a single brain-cognition task. Owing to its ability to extract high-level features and latent dependencies, deep learning can overcome the limitation to specific tasks, but it requires numerous samples for model training. Brainprint recognition in realistic scenes, with many individuals and only a small number of samples per class, is therefore challenging for deep learning. This article proposes a Convolutional Tensor-Train Neural Network (CTNN) for multi-task brainprint recognition with a small number of training samples. First, local temporal and spatial features of the brainprint are extracted by a convolutional neural network (CNN) with a depthwise separable convolution mechanism. Afterwards, a TensorNet (TN) is implemented via low-rank representation to capture multilinear intercorrelations, integrating the local information into a global representation with very few parameters. The experimental results indicate that CTNN achieves recognition accuracy above 99% on all four datasets, exploits brainprints efficiently across multiple tasks, and scales well with the number of training samples. Additionally, the method provides an interpretable biomarker, showing that seven specific channels dominate the recognition tasks.

The widespread development of new ultrasound image formation techniques has created a need for a standardized methodology for comparing the resulting images. Traditional methods of evaluation use quantitative metrics to assess imaging performance in specific tasks such as point resolution or lesion detection. Quantitative evaluation is complicated by unconventional new methods and by non-linear transformations of the dynamic range of data and images. Transformation-independent image metrics have been proposed for quantifying task performance, but clinical ultrasound still relies heavily on visualization and qualitative assessment by expert observers. We propose the use of histogram matching to better assess differences across image formation methods, briefly demonstrate the technique using a set of sample beamforming methods, and discuss the implications of such image processing. We present variations of histogram matching and provide code to encourage application of this method within the imaging community.

Focused ultrasound (FUS) therapies induce therapeutic effects in localized tissues using either temperature elevations or mechanical stresses caused by an ultrasound wave. During an FUS therapy, it is crucial to continuously monitor the position of the FUS beam in order to correct for tissue motion and keep the focus within the target region. Toward the goal of real-time monitoring for FUS therapies, we have developed a method for real-time visualization of an FUS beam using ultrasonic backscatter. The intensity field of the FUS beam was reconstructed from the backscatter of an FUS pulse received by an imaging array and then overlaid onto a B-mode image captured with the same array, allowing the position and extent of the beam to be monitored in the context of the surrounding medium. Variations in the scattering properties of the medium were corrected in the FUS beam reconstruction by normalizing based on the echogenicity of the coaligned B-mode image. On average, normalizing by echogenicity reduced the mean square error between FUS beam reconstructions in nonhomogeneous regions of a phantom and baseline homogeneous regions by 21.
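As a rough illustration of the echogenicity correction described in the FUS abstract above, the sketch below divides a reconstructed FUS intensity map by a smoothed B-mode envelope power. The function name, the uniform smoothing window, and the regularization constant are assumptions made here for illustration; the authors' actual normalization scheme is not specified beyond the abstract.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def normalize_by_echogenicity(fus_intensity, bmode_envelope, size=9, eps=1e-6):
    """Correct an FUS backscatter reconstruction for local scattering strength.

    Bright scatterers return more energy regardless of the local beam intensity,
    so dividing by local echogenicity (here, the smoothed envelope power of the
    coaligned B-mode image) suppresses that bias before overlaying the beam map."""
    echogenicity = uniform_filter(np.asarray(bmode_envelope, float) ** 2, size=size)
    return fus_intensity / (echogenicity + eps)
```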

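Returning to the histogram-matching abstract above: the authors provide their own code, which is not reproduced here. The snippet below is a generic quantile-mapping implementation of histogram matching, included only to illustrate the basic operation of mapping one image's intensity distribution onto another's; scikit-image's skimage.exposure.match_histograms performs the same transformation.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source intensities so their empirical CDF matches the reference's."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # Invert the reference CDF at each source quantile.
    matched_vals = np.interp(s_cdf, r_cdf, r_vals)
    return matched_vals[s_idx].reshape(source.shape)
```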
Article authors: Brittnieves2049 (Lundgreen Locklear)