Frederickwhittaker1540

From Iurium Wiki

Image segmentation is one of the most essential biomedical image processing problems across imaging modalities, including microscopy and X-ray, in the Internet-of-Medical-Things (IoMT) domain. However, annotating biomedical images is knowledge-driven, time-consuming, and labor-intensive, making it difficult to obtain abundant labels at limited cost. Active learning strategies ease the burden of human annotation by querying only a subset of the training data for annotation. Despite the attention they have received, most active learning methods still incur high computational costs and use unlabeled data inefficiently. They also tend to ignore the intermediate knowledge within networks. In this work, we propose a deep active semi-supervised learning framework, DSAL, combining active learning and semi-supervised learning strategies. In DSAL, a new criterion based on the deep supervision mechanism is proposed to select informative samples with high uncertainties for strong labelers and samples with low uncertainties for weak labelers. This internal criterion leverages the disagreement among intermediate features within the deep network for active sample selection, which in turn reduces computational costs. We use the proposed criteria to select samples for strong and weak labelers, producing oracle labels and pseudo labels simultaneously at each active learning iteration in an ensemble learning manner, which can be examined on an IoMT platform. Extensive experiments on multiple medical image datasets demonstrate the superiority of the proposed method over state-of-the-art active learning methods.

Broad learning systems (BLSs) have attracted considerable attention due to their powerful ability in efficient discriminative learning. In this article, a modified BLS with reinforcement learning signal feedback (BLRLF) is proposed as an efficient method for improving the performance of the standard BLS.
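For context, a standard BLS maps its inputs through feature and enhancement nodes and then solves for the output weights in closed form by ridge regression; the incremental algorithms update this solution when nodes or samples are added. A minimal sketch of that closed-form solve (the function name, toy dimensions, and regularization value are illustrative, not from the paper):

```python
import numpy as np

def bls_output_weights(A, Y, lam=1e-3):
    """Closed-form ridge solution W = (A^T A + lam*I)^(-1) A^T Y,
    where A stacks the feature- and enhancement-node activations."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ Y)

# Toy example: random "node activations" A and targets Y = A @ W_true.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))      # 100 samples, 20 nodes
W_true = rng.standard_normal((20, 3))   # 3 output classes
Y = A @ W_true
W = bls_output_weights(A, Y, lam=1e-6)
print(np.allclose(W, W_true, atol=1e-3))  # -> True (tiny regularization)
```

Because the solve is a single linear-algebra step rather than gradient descent, training stays fast, which is the property BLRLF aims to retain while adding feedback-driven weight refinement.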
The main differences between our approach and BLS are as follows. First, we add weight optimization after adding new nodes or new training samples. Motivated by iterative weight optimization in convolutional neural networks (CNNs), we use the output of the network as feedback while employing value iteration (VI)-based adaptive dynamic programming (ADP) to compute near-optimal increments of the connection weights. Second, unlike the homogeneous incremental algorithms in the standard BLS, we integrate the broad expansion methods and use a heuristic search to let BLRLF optimize the network structure autonomously. Although training time increases somewhat compared with BLS, BLRLF retains the fast computation of the original. Finally, BLRLF is evaluated on popular benchmarks from the UC Irvine Machine Learning Repository and many other challenging datasets. The results show that BLRLF outperforms many state-of-the-art deep learning algorithms and shallow networks proposed in recent years.

Virtual environments (VE) and haptic interfaces (HI) are increasingly introduced as virtual prototyping tools to assess ergonomic features of workstations. These approaches are cost-effective and convenient, since working directly on the Digital Mock-Up in a VE is preferable to constructing a physical mock-up in a Real Environment (RE). However, they are usable only if the ergonomic conclusions drawn in the VE match those that would be drawn in the real world. This article evaluates the impact of visual and haptic rendering on the biomechanical fidelity of pick-and-place tasks. Fourteen subjects performed time-constrained pick-and-place tasks in RE and VE, with a real and a virtual, haptic-driven object, at three different speeds. Hand motion and upper-limb muscle activation were recorded.
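One of the kinematic indicators derived from such recordings, the length of the hand trajectory, can be computed from sampled marker positions as the sum of distances between consecutive samples. A generic sketch, not the authors' actual processing pipeline:

```python
def trajectory_length(points):
    """Total path length of a sampled 3-D trajectory:
    sum of Euclidean distances between consecutive samples."""
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return total

# A straight 1 m reach sampled in two steps has length 1.0.
print(trajectory_length([(0, 0, 0), (0.5, 0, 0), (1.0, 0, 0)]))  # -> 1.0
```

A longer path length for the same pick-and-place task is one way a VE condition can reveal less efficient movement than the real setup.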
A questionnaire subjectively assessed discomfort and immersion. The results revealed significant differences between the indicators measured in RE and VE and between the real and virtual objects. Objective and subjective measures indicated higher muscle activity and longer hand trajectories in VE and with the HI. Notably, no cross-effect between haptic and visual rendering was observed. These results confirm that such systems should be used with caution for ergonomic evaluation, especially when investigating postural and muscle quantities as discomfort indicators. The last contribution of the paper is an easily replicable experimental setup for assessing more systematically the biomechanical fidelity of virtual environments for ergonomics purposes.

Some evidence has shown that focal vibration (FV) plays an important role in mitigating spasticity. However, research on developing FV systems that mitigate spasticity effectively has seldom been reported. To relieve post-stroke spasticity, this paper proposes a new pneumatic FV system. An image processing approach, in which the edge of the vibration actuator was identified with the Canny edge detector, was used to quantify the system's parameters: the frequency, ranging from 44 Hz to 128 Hz, and the corresponding amplitude. A clinical experiment was carried out using one FV protocol of this system, with a frequency of 87 Hz and an amplitude of 0.28 mm. In the experiment, FV was applied over the muscle belly of the antagonist of the spastic muscle in twelve chronic stroke patients with spasticity. Spasticity was quantified by muscle compliance and the area under the curve for the muscle (AUC_muscle). The results demonstrated that, with the spastic muscle in flexion, AUC_muscle and muscle compliance increased significantly immediately after FV compared with before FV, indicating mitigation of the spasticity.
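An area-under-the-curve measure such as AUC_muscle is typically approximated from sampled curve values with the trapezoidal rule. A generic numerical sketch (the clinical definition of the underlying curve follows the paper, not this code):

```python
def auc_trapezoid(t, y):
    """Trapezoidal-rule area under a sampled curve y(t)."""
    area = 0.0
    for i in range(1, len(t)):
        area += 0.5 * (y[i] + y[i - 1]) * (t[i] - t[i - 1])
    return area

# Sanity check: y = t on [0, 1] has area 0.5.
print(auc_trapezoid([0.0, 0.5, 1.0], [0.0, 0.5, 1.0]))  # -> 0.5
```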
This study not only provides a potential tool to relieve post-stroke spasticity, but may also contribute to improving the sensory and motor function of patients with other neurological diseases, e.g., spinal cord injury, multiple sclerosis, Parkinson's disease, and dystonia.

With the rapid development of deep learning, more and more deep learning-based motor imagery electroencephalography (EEG) decoding methods have emerged in recent years. However, existing deep learning-based methods usually adopt only a classification-loss constraint, which rarely yields highly discriminative features and limits improvements in EEG decoding accuracy. In this paper, a discriminative feature learning strategy is proposed to improve the discrimination of features; it includes a central distance loss (CD-loss), a central vector shift strategy, and a central vector update process. First, the CD-loss is proposed to make samples of the same class converge to the corresponding central vector. Then, the central vector shift strategy extends the distance between different classes of samples in the feature space. Finally, the central vector update process is adopted to avoid non-convergence of the CD-loss and to weaken the influence of the initial central vectors on the final results. Overfitting is another severe challenge for deep learning-based EEG decoding methods. To deal with this problem, a data augmentation method based on a circular translation strategy is proposed to expand the experimental datasets without introducing extra noise or losing any information from the original data. To validate the effectiveness of the proposed method, we conduct experiments on two public motor imagery EEG datasets (BCI Competition IV datasets 2a and 2b).
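A circular-translation augmentation of the kind described above can be sketched as a circular shift of each trial along the time axis, which preserves every sample of the original signal. A minimal sketch, assuming trials of shape (channels, time); the shift amount is illustrative:

```python
import numpy as np

def circular_translate(trial, shift):
    """Circularly shift an EEG trial (channels x time) along time:
    samples wrap from the end to the start, so nothing is lost."""
    return np.roll(trial, shift, axis=1)

trial = np.arange(10).reshape(2, 5)   # 2 channels, 5 time points
aug = circular_translate(trial, 2)
print(aug[0])                          # -> [3 4 0 1 2]
# Every original value is still present in the augmented trial.
print(sorted(aug.ravel()) == sorted(trial.ravel()))  # -> True
```

Applying several distinct shifts to each trial multiplies the dataset size without adding noise, matching the stated goal of the strategy.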
The comparison with current state-of-the-art methods indicates that our method achieves the highest average accuracy and good stability on the two experimental datasets.

We present a neural optimization model trained with reinforcement learning to solve the coordinate ordering problem for sets of star glyphs. Given a set of star glyphs associated with multiple class labels, we propose to use shape context descriptors to measure the perceptual distance between pairs of glyphs, and to use the derived silhouette coefficient to measure the perceived class separability within the entire set. To find the optimal coordinate order for the given set, we train a neural network with reinforcement learning to reward orderings with high silhouette coefficients. The network consists of an encoder and a decoder with an attention mechanism. The encoder employs a recurrent neural network (RNN) to encode input shape and class information, while the decoder, together with the attention mechanism, employs another RNN to output a sequence with the new coordinate order. In addition, we introduce a neural network to efficiently estimate the similarity between shape context descriptors, which speeds up the computation of silhouette coefficients and thus the training of the axis ordering network. Two user studies demonstrate that the orders provided by our method are preferred by users for perceiving class separation. We tested our model under different settings to show its robustness and generalization ability, and demonstrate that it can order input sets with unseen data sizes, data dimensions, or numbers of classes. We also demonstrate that our model can be adapted to coordinate ordering of other types of plots, such as RadViz, by replacing the proposed shape-aware silhouette coefficient with the corresponding quality metric to guide network training.

When watching omnidirectional images (ODIs), subjects can access different viewports by moving their heads.
Therefore, it is necessary to predict subjects' head fixations on ODIs. Inspired by generative adversarial imitation learning (GAIL), this paper proposes a novel approach, named SalGAIL, to predict the saliency of head fixations on ODIs. First, we establish a dataset for attention on ODIs (AOI). In contrast to traditional datasets, our AOI dataset is large-scale, containing the head fixations of 30 subjects viewing 600 ODIs. Next, we mine our AOI dataset and discover three findings: (1) head fixations are consistent among subjects, and this consistency grows as the number of subjects increases; (2) head fixations exhibit a front-center bias (FCB); and (3) the magnitude of head movement is similar across subjects. Based on these findings, our SalGAIL approach applies deep reinforcement learning (DRL) to predict the head fixations of one subject, in which GAIL learns the reward for DRL rather than using a traditional human-designed reward. Then, multi-stream DRL is developed to yield the head fixations of different subjects, and the saliency map of an ODI is generated by convolving the predicted head fixations. Finally, experiments validate the effectiveness of our approach in predicting saliency maps of ODIs, performing significantly better than 11 state-of-the-art approaches. Our AOI dataset and the code of SalGAIL are available online at https://github.com/yanglixiaoshen/SalGAIL.

Due to the absence of a desirable objective for low-light image enhancement, previous data-driven methods may produce undesirable enhanced results, including amplified noise, degraded contrast, and biased colors. In this work, inspired by Retinex theory, we design an end-to-end signal prior-guided layer separation and data-driven mapping network with layer-specified constraints for single-image low-light enhancement.
A Sparse Gradient Minimization sub-Network (SGM-Net) is constructed to remove low-amplitude structures and preserve major edge information, which facilitates extracting paired illumination maps of low/normal-light images. After the learned decomposition, two sub-networks (Enhance-Net and Restore-Net) are used to predict the enhanced illumination and reflectance maps, respectively, which helps stretch the contrast of the illumination map and remove intensive noise in the reflectance map. All of these constraints, including the signal-structure regularization and the layer-specified losses, act together reciprocally, leading to reconstructions with good overall visual quality.
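The Retinex model underlying this design treats an image I as the pixel-wise product of a reflectance map R and an illumination map L; enhancement brightens the illumination component and recombines it with the reflectance. A minimal classical sketch of that idea (max-channel illumination estimate and gamma brightening), not the paper's learned sub-networks:

```python
import numpy as np

def retinex_enhance(img, gamma=0.5, eps=1e-6):
    """Classical Retinex-style enhancement for an HxWx3 image in [0, 1]:
    I = R * L, with L estimated as the per-pixel max over color channels,
    then brightened by a gamma curve and recombined with R."""
    L = img.max(axis=2, keepdims=True)   # illumination estimate
    R = img / (L + eps)                  # reflectance (chromaticity)
    L_enh = np.power(L, gamma)           # gamma < 1 lifts dark regions
    return np.clip(R * L_enh, 0.0, 1.0)

dark = np.full((2, 2, 3), 0.04)          # uniformly dark toy image
out = retinex_enhance(dark)
print(out.mean())                        # ~0.2: brightness clearly lifted
```

The learned pipeline in the paper replaces each of these hand-crafted steps (decomposition, illumination enhancement, reflectance restoration) with a dedicated sub-network under the layer-specified constraints.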

Article authors: Frederickwhittaker1540 (Albert Slot)