Bowlingjantzen5377


Experiments with vibration data from a production wind farm, provided by a company using a condition monitoring system (CMS), show that the presented WPD-MSCNN method is superior to a traditional CNN and a multiscale CNN (MSCNN) for fault diagnosis.

The automatic and accurate segmentation of prostate cancer from multi-modal magnetic resonance images is of prime importance for disease assessment and follow-up treatment planning. However, how to use multi-modal image features more efficiently remains a challenging problem in medical image segmentation. In this paper, we develop a cross-modal self-attention distillation network that fully exploits the encoded information of the intermediate layers from different modalities; the attention maps generated for the different modalities enable the model to transfer significant spatial information that contains more detail. Moreover, a novel spatially correlated feature fusion module is employed to learn more complementary correlation and non-linear information across the modality images. We evaluate our model with five-fold cross-validation on 358 biopsy-confirmed MRI images. Without bells and whistles, the proposed network achieves state-of-the-art performance in extensive experiments.

This article addresses the distributed cooperative control design for a class of sampled-data teleoperation systems with multiple slave mobile manipulators grasping an object in the presence of communication bandwidth limitations and time delays. Discrete-time information transmission with time-varying delays is assumed, and the Round-Robin (RR) scheduling protocol is used to regulate the data transmission from the multiple slaves to the master. The control task is to guarantee task-space position synchronization between the master and the grasped object, with the mobile bases held in a fixed formation. A fully distributed control strategy comprising neural-network-based task-space synchronization controllers and neural-network-based null-space formation controllers is proposed, where radial basis function (RBF) neural networks with adaptive estimation of the approximation errors are used to compensate for the dynamical uncertainties. The stability and the synchronization/formation properties of the single-master-multiple-slaves (SMMS) teleoperation system are analyzed, and the relationship among the control parameters, the upper bound of the time delays, and the maximum allowable sampling interval is established. Experiments are carried out to validate the effectiveness of the proposed control algorithm.
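To make the RBF compensation idea concrete, here is a minimal sketch (not the authors' controller) of a radial basis function network whose weights are adapted online from a tracking error so that it gradually approximates an unknown dynamics term; the center placement, kernel width, and adaptation gain are illustrative assumptions.

    import numpy as np

    class RBFCompensator:
        """Toy online RBF approximator for an unknown dynamics term (illustrative only)."""

        def __init__(self, n_centers=25, state_dim=2, gamma=5.0, learning_rate=0.05):
            # Gaussian centers spread over an assumed operating region [-1, 1]^state_dim.
            self.centers = np.random.uniform(-1.0, 1.0, size=(n_centers, state_dim))
            self.gamma = gamma                  # kernel width (assumed)
            self.weights = np.zeros(n_centers)  # adaptive output weights
            self.lr = learning_rate             # adaptation gain (assumed)

        def _features(self, x):
            # phi_i(x) = exp(-gamma * ||x - c_i||^2)
            dists = np.sum((self.centers - x) ** 2, axis=1)
            return np.exp(-self.gamma * dists)

        def predict(self, x):
            return self._features(x) @ self.weights

        def adapt(self, x, tracking_error):
            # Gradient-style weight update driven by the error signal.
            self.weights += self.lr * tracking_error * self._features(x)

    # Usage sketch: approximate f(x) = sin(x1) + 0.5*x2 from streaming samples.
    rbf = RBFCompensator()
    rng = np.random.default_rng(0)
    for _ in range(2000):
        x = rng.uniform(-1.0, 1.0, size=2)
        error = (np.sin(x[0]) + 0.5 * x[1]) - rbf.predict(x)
        rbf.adapt(x, error)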
Identifying independently moving objects is an essential task for dynamic scene understanding. However, traditional cameras used in dynamic scenes may suffer from motion blur or exposure artifacts due to their sampling principle. By contrast, event-based cameras are novel bio-inspired sensors that offer advantages in overcoming such limitations. They report pixel-wise intensity changes asynchronously, which enables them to acquire visual information at exactly the same rate as the scene dynamics. We develop a method to identify independently moving objects acquired with an event-based camera, that is, to solve the event-based motion segmentation problem. We cast the problem as an energy minimization one involving the fitting of multiple motion models. We jointly solve two sub-problems, namely event-cluster assignment (labeling) and motion model fitting, in an iterative manner by exploiting the structure of the input event data in the form of a spatio-temporal graph. Experiments on available datasets demonstrate the versatility of the method in scenes with different motion patterns and numbers of moving objects. The evaluation shows state-of-the-art results without having to predetermine the number of expected moving objects. We release the software and dataset under an open-source license to foster research in the emerging topic of event-based motion segmentation.

Efficient exploration of unknown environments is a fundamental precondition for modern autonomous mobile robot applications. Aiming to design robust and effective robotic exploration strategies suitable for complex real-world scenarios, the academic community has increasingly investigated the integration of robotics with reinforcement learning (RL) techniques. This survey provides a comprehensive review of recent research works that use RL to design unknown-environment exploration strategies for single robots and multirobot systems. The primary purpose of this study is to facilitate future research by compiling and analyzing the current state of works that link these two knowledge domains. The survey summarizes which RL algorithms are employed and how they compose the mobile robot exploration strategies proposed so far; how robotic exploration solutions address typical RL problems such as the exploration-exploitation dilemma, the curse of dimensionality, reward shaping, and slow learning convergence; and which experiments are performed and which software tools are used for learning and testing. Achieved progress is described, and a discussion of remaining limitations and future perspectives is presented.

In this article, we propose an efficient multiclass classification scheme based on sparse centroid classifiers. The proposed strategy exhibits linear complexity with respect to both the number of classes and the cardinality of the feature space. The classifier we introduce is based on binary space partitioning, performed by a decision tree in which the assignment rule at each node is defined via a sparse centroid classifier. We apply the presented strategy to the time series classification problem, showing experimentally that it achieves performance comparable to that of state-of-the-art methods, but with a significantly lower classification time. The proposed technique can therefore be an effective option in resource-constrained environments where classification time and computational cost are critical, or in scenarios where real-time classification is necessary.
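As a rough, simplified illustration of the tree-of-centroid-classifiers idea (not the paper's implementation, and omitting the sparsity of the centroids), the sketch below builds a binary tree that splits the remaining classes into two groups at each node and routes a sample toward the closer group centroid; the class-grouping heuristic and the Euclidean distance are assumptions.

    import numpy as np

    class CentroidNode:
        """Internal node: route a sample to the closer of two class-group centroids."""

        def __init__(self, classes):
            self.classes = list(classes)
            self.left = self.right = None
            self.left_centroid = self.right_centroid = None

        def fit(self, X, y):
            if len(self.classes) == 1:        # leaf: a single class remains
                return self
            half = len(self.classes) // 2     # naive class grouping (assumed heuristic)
            left_cls, right_cls = self.classes[:half], self.classes[half:]
            left_mask, right_mask = np.isin(y, left_cls), np.isin(y, right_cls)
            self.left_centroid = X[left_mask].mean(axis=0)
            self.right_centroid = X[right_mask].mean(axis=0)
            self.left = CentroidNode(left_cls).fit(X[left_mask], y[left_mask])
            self.right = CentroidNode(right_cls).fit(X[right_mask], y[right_mask])
            return self

        def predict_one(self, x):
            if len(self.classes) == 1:
                return self.classes[0]
            d_left = np.linalg.norm(x - self.left_centroid)
            d_right = np.linalg.norm(x - self.right_centroid)
            return (self.left if d_left <= d_right else self.right).predict_one(x)

    # Usage sketch on synthetic data with four classes.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 4, size=200)
    X = rng.normal(size=(200, 16)) + y[:, None]   # class-dependent shift
    tree = CentroidNode(classes=np.unique(y)).fit(X, y)
    print([tree.predict_one(x) for x in X[:5]])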
Recently, crowd counting using supervised learning has achieved remarkable improvements. Nevertheless, most counters rely on a large amount of manually labeled data. With the release of synthetic crowd data, a potential alternative is to transfer knowledge from synthetic data to real data without any manual labels. However, no existing method effectively suppresses the domain gap while producing elaborate density maps during the transfer. To remedy these problems, this article proposes a domain-adaptive crowd counting (DACC) framework, which consists of high-quality image translation and density map reconstruction. To be specific, the former focuses on translating synthetic data into realistic images, improving translation quality by segregating domain-shared and domain-independent features and designing a content-aware consistency loss. The latter aims at generating pseudo labels on real scenes to improve prediction quality. A final counter is then retrained using these pseudo labels. Adaptation experiments on six real-world datasets demonstrate that the proposed method outperforms state-of-the-art methods.

Comparing competing mathematical models of complex processes is a shared goal among many branches of science. The Bayesian probabilistic framework offers a principled way to perform model comparison and extract useful metrics for guiding decisions. However, many interesting models are intractable with standard Bayesian methods, as they lack a closed-form likelihood function or the likelihood is computationally too expensive to evaluate. In this work, we propose a novel method for performing Bayesian model comparison using specialized deep learning architectures. Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset. Moreover, it requires no hand-crafted summary statistics of the data and is designed to amortize the cost of simulation over multiple models, datasets, and dataset sizes. This makes the method especially effective in scenarios where model fit needs to be assessed for a large number of datasets, so that case-based inference is practically infeasible. Finally, we propose a novel way to measure epistemic uncertainty in model comparison problems. We demonstrate the utility of our method on toy examples and on simulated data from nontrivial models from cognitive science and single-cell neuroscience. We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work. We argue that our framework can enhance and enrich model-based analysis and inference in many fields dealing with computational models of natural processes. We further argue that the proposed measure of epistemic uncertainty provides a unique proxy for quantifying absolute evidence, even in a framework that assumes the true data-generating model is within a finite set of candidate models.
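The amortized, simulation-based workflow can be illustrated with a heavily simplified sketch that is not the proposed architecture: it uses fixed, hand-rolled summary statistics (which the method above explicitly avoids by learning them) and a logistic-regression stand-in for the deep network, but it shows the core pattern of simulating labeled datasets from each candidate model and training a classifier whose predicted probabilities approximate posterior model probabilities.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Two toy candidate models for a dataset of n i.i.d. observations:
    #   M0: Gaussian noise, M1: Laplace noise (both zero-mean, unit scale).
    def simulate(model_idx, n=100):
        return rng.normal(size=n) if model_idx == 0 else rng.laplace(size=n)

    def summarize(data):
        # Fixed summary statistics (the referenced method instead learns them).
        return np.array([data.mean(), data.std(), np.mean(np.abs(data)), np.mean(data ** 4)])

    # Amortization step: simulate many (dataset, model) pairs up front ...
    labels = rng.integers(0, 2, size=4000)
    features = np.stack([summarize(simulate(m)) for m in labels])

    # ... and train a classifier whose predicted probabilities approximate
    # posterior model probabilities under a uniform model prior.
    clf = LogisticRegression(max_iter=1000).fit(features, labels)

    # Inference for a new observed dataset is now a single forward pass.
    observed = rng.laplace(size=100)
    print(clf.predict_proba(summarize(observed)[None, :]))   # ~[P(M0|data), P(M1|data)]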
In this work, a bionic memristive circuit with emotional-evolution functions is proposed by mimicking the emotional circuitry of the limbic system; it can perform unconscious and conscious emotional evolution using the mechanisms of internal regulation and external stimulation, respectively. Two kinds of memristive models, volatile and non-volatile, play key roles in the evolution process. Internal regulation is mainly responsible for simulating the unconscious evolution process over time by exploiting the forgetting effect of the volatile memristor. External stimulation is mainly responsible for using the memristance plasticity of the non-volatile memristor to simulate evolutionary learning behavior under multi-modal inputs (such as visual, speech, and text signals), so as to realize conscious emotional evolution (a toy sketch of these two memristor behaviors is given at the end of this page). A two-dimensional (2D) emotional state space consisting of valence and arousal signals is adopted, and the evolution behaviors are performed on the basis of these valence and arousal signals in order to achieve continuous emotional evolution and express the evolved emotions intuitively. Thanks to the use of memristors, the proposed circuit realizes in-memory computing, which fundamentally avoids the memory-wall problem and constitutes a brain-inspired information processing architecture. PSPICE simulation results show that the proposed circuit establishes a nonlinear mapping between inputs and outputs and can carry out diversified emotional evolution based on the designed internal-regulation and external-stimulation evolution circuits.

Three cochlear implant (CI) sound coding strategies were combined in the same signal processing path and compared for speech intelligibility with vocoded Mandarin sentences. The three CI coding strategies, the biologically inspired hearing aid algorithm (BioAid), envelope enhancement (EE), and fundamental frequency modulation (F0mod), were combined with the advanced combination encoder (ACE) strategy. Hence, four singular coding strategies and four combinational coding strategies were derived. Mandarin sentences in speech-shaped noise were processed using these coding strategies. Speech understanding of the vocoded Mandarin sentences was evaluated using the short-time objective intelligibility (STOI) measure and subjective sentence recognition tests with normal-hearing listeners. For signal-to-noise ratios of 5 dB or above, the EE strategy had slightly higher average scores than ACE in both the STOI and listening tests. The addition of EE to BioAid slightly increased the mean scores for BioAid+EE, which was the combination strategy with the highest scores in both objective and subjective speech intelligibility.
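For readers who want to run a similar objective comparison, the sketch below scores a few processed versions of a signal against the clean reference with STOI. It assumes the third-party pystoi package and uses placeholder signals and noise levels in place of real vocoded CI outputs; it is not the study's evaluation pipeline.

    import numpy as np
    from pystoi import stoi   # third-party package assumed: pip install pystoi

    fs = 16000                                  # sampling rate (Hz), assumed
    t = np.arange(0, 2.0, 1.0 / fs)
    clean = np.sin(2 * np.pi * 220 * t)         # placeholder for a clean Mandarin sentence

    # Placeholder "coding strategy" outputs: in practice these would be the
    # vocoded ACE, EE, BioAid+EE, ... signals at the same sampling rate.
    rng = np.random.default_rng(0)
    processed = {
        "ACE":       clean + 0.30 * rng.normal(size=t.size),
        "EE":        clean + 0.25 * rng.normal(size=t.size),
        "BioAid+EE": clean + 0.20 * rng.normal(size=t.size),
    }

    for name, sig in processed.items():
        score = stoi(clean, sig, fs, extended=False)   # higher = more intelligible
        print(f"{name}: STOI = {score:.3f}")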

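Returning to the memristive emotional circuit described above, the toy model below contrasts the two device behaviors that drive the two evolution modes: a volatile memristor whose state relaxes back toward its resting value between stimuli (forgetting, i.e., unconscious evolution) and a non-volatile memristor whose state persists (learning from external stimulation). The state equation and parameters are illustrative assumptions, not the paper's device models.

    import numpy as np

    def simulate(volatile, steps=300, dt=1.0, tau=50.0, eta=0.02):
        """Toy memristor state trace in [0, 1], driven by a periodic pulse train."""
        trace = np.zeros(steps)
        x = 0.0
        for k in range(steps):
            stimulus = 1.0 if (k % 100) < 10 else 0.0   # short input pulses (assumed)
            x += eta * stimulus * (1.0 - x)             # stimulus-driven potentiation
            if volatile:
                x -= (dt / tau) * x                     # relaxation toward 0: forgetting
            x = min(max(x, 0.0), 1.0)
            trace[k] = x
        return trace

    volatile_trace = simulate(volatile=True)      # decays between pulses (unconscious evolution)
    nonvolatile_trace = simulate(volatile=False)  # retains its level (learned, conscious evolution)
    print(volatile_trace[-1], nonvolatile_trace[-1])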