Haslundmathiesen4627


The particle swarm optimizer (PSO) and the mobile robot swarm are two typical swarm techniques. Many applications have emerged separately from each of them, while the similarity between them is rarely considered. When the solution space is a physical region, a robot swarm can replace a particle swarm and locate the optimal solution by performing PSO. To do so, a mobile robot swarm should be able to explore an area as efficiently as a particle swarm and to keep working even when robots are in short supply or fail unexpectedly. Furthermore, the moving distances of the robots are highly constrained because energy and time can be costly. Motivated by these requirements, this article proposes a moving-distance-minimized PSO (MPSO) that minimizes the total moving distance of the robots in a swarm while performing optimization. The distances between the current robot positions and the particle positions of the next generation are used to derive paths for the robots such that the total distance traveled is minimized, and hence so are the energy and time needed for the swarm to locate the optima. Experiments on the 28 CEC2013 benchmark functions show the advantage of the proposed method over the standard PSO: the moving distance can be reduced by more than 66% and the makespan by nearly 70% while the same optimization quality is preserved.
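The core step described here, matching the robots' current positions to the particle positions of the next PSO generation so that the summed travel distance is minimal, is a linear assignment problem. The following minimal sketch combines a standard PSO velocity update with SciPy's assignment solver; the function and parameter names are illustrative assumptions, not taken from the paper.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def pso_step_with_min_travel(robot_pos, velocity, personal_best, global_best,
                             w=0.7, c1=1.5, c2=1.5):
    """One illustrative PSO step followed by a travel-minimizing assignment.

    robot_pos, velocity, personal_best: arrays of shape (n_robots, dim)
    global_best: array of shape (dim,)
    Returns, for each robot, the next-generation position it should move to.
    """
    n, dim = robot_pos.shape
    r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)

    # Standard PSO update produces the particle positions of the next generation.
    velocity = (w * velocity
                + c1 * r1 * (personal_best - robot_pos)
                + c2 * r2 * (global_best - robot_pos))
    next_particles = robot_pos + velocity

    # Assign each robot to one next-generation particle position so that the
    # total Euclidean travel distance is minimized (Hungarian algorithm).
    cost = cdist(robot_pos, next_particles)
    robot_idx, particle_idx = linear_sum_assignment(cost)

    targets = np.empty_like(robot_pos)
    targets[robot_idx] = next_particles[particle_idx]
    # A full implementation would also permute the velocity and personal-best
    # bookkeeping so that it follows the assigned particles, not the robots.
    return targets
</syntaxhighlight>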
Generative adversarial networks (GANs) are a class of generative models with two antagonistic neural networks: a generator and a discriminator. These two networks compete against each other through an adversarial process that can be modeled as a stochastic Nash equilibrium problem. Since the associated training process is challenging, it is fundamental to design reliable algorithms to compute an equilibrium. In this article, we propose a stochastic relaxed forward-backward (SRFB) algorithm for GANs and show convergence to an exact solution as the number of available data samples increases. We also show convergence of an averaged variant of the SRFB algorithm to a neighborhood of the solution when only a few samples are available. In both cases, convergence is guaranteed when the pseudogradient mapping of the game is monotone, which is among the weakest assumptions known in the literature. Moreover, we apply our algorithm to the image generation problem.

In this article, an optimized backstepping (OB) control scheme is proposed for a class of stochastic nonlinear strict-feedback systems with unknown dynamics by using a reinforcement learning (RL) strategy with an identifier-critic-actor architecture, where the identifier compensates for the unknown dynamics, the critic evaluates the control performance and provides feedback to the actor, and the actor performs the control action. The basic idea is that all virtual controls and the actual control of the backstepping design are taken as the optimized solutions of the corresponding subsystems, so that the entire backstepping control is optimized. Unlike the deterministic case, stochastic system control must account not only for the stochastic disturbance modeled by a Wiener process but also for the Hessian term in the stability analysis. If the backstepping control were developed on the basis of published RL optimization methods, it would be difficult to achieve because, on the one hand, the RL algorithms of these methods are very complex, since their critic and actor updating laws are derived from the negative gradient of the squared approximation error of the Hamilton-Jacobi-Bellman (HJB) equation; on the other hand, these methods require persistent excitation and known dynamics, where persistent excitation is needed to train the adaptive parameters sufficiently. In this research, both the critic and actor updating laws are derived from the negative gradient of a simple positive function obtained from a partial derivative of the HJB equation. As a result, the RL algorithm is significantly simplified, and the requirements of persistent excitation and known dynamics are removed. The proposed scheme is therefore a natural choice for stochastic optimization control. Finally, both theoretical analysis and simulation demonstrate that the proposed control achieves the desired system performance.

To achieve accurate and robust object detection in real-world scenarios, various forms of images are incorporated, such as color, thermal, and depth. However, multimodal data often suffer from the position shift problem, i.e., the image pair is not strictly aligned, so that one object has different positions in the different modalities. For deep learning methods, this problem makes it difficult to fuse multimodal features and complicates convolutional neural network (CNN) training. In this article, we propose a general multimodal detector named aligned region CNN (AR-CNN) to tackle the position shift problem. First, a region feature (RF) alignment module with an adjacent similarity constraint is designed to consistently predict the position shift between the two modalities and adaptively align the cross-modal RFs. Second, we propose a novel region-of-interest (RoI) jitter strategy to improve robustness to unexpected shift patterns. Third, we present a new multimodal feature fusion method that selects the more reliable feature and suppresses the less useful one via feature reweighting. In addition, by locating bounding boxes in both modalities and building their relationships, we provide a novel multimodal labeling named KAIST-Paired. Extensive experiments on 2-D and 3-D object detection and on RGB-T and RGB-D datasets demonstrate the effectiveness and robustness of our method.
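The feature reweighting idea in the third step can be pictured as a small gating module that scores each modality's region feature and rescales it before fusion. The sketch below is a generic PyTorch implementation of such reweighting under that assumption, not the exact AR-CNN module; the layer sizes and names are illustrative.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class ModalityReweightFusion(nn.Module):
    """Illustrative cross-modal fusion: predict one weight per modality from the
    pooled region features of both inputs, then fuse the reweighted features."""

    def __init__(self, channels):
        super().__init__()
        # Small gating head: pooled features of both modalities -> two weights.
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, 2),
        )

    def forward(self, feat_rgb, feat_thermal):
        # feat_*: (N, C, H, W) region features from the two modalities.
        pooled = torch.cat([feat_rgb.mean(dim=(2, 3)),
                            feat_thermal.mean(dim=(2, 3))], dim=1)
        weights = torch.softmax(self.gate(pooled), dim=1)  # (N, 2), sums to 1
        w_rgb, w_thermal = weights[:, 0], weights[:, 1]
        # Emphasize the more reliable modality and suppress the other one.
        fused = (w_rgb.view(-1, 1, 1, 1) * feat_rgb
                 + w_thermal.view(-1, 1, 1, 1) * feat_thermal)
        return fused
</syntaxhighlight>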
High-dimensional multilabel data have increasingly emerged in many application areas and suffer from two noteworthy issues: instances with high-dimensional features and large-scale label sets. Multilabel feature selection methods are widely studied to address these issues. Previous multilabel feature selection methods focus on exploring label correlations to guide the feature selection process, ignoring the impact of the latent feature structure on label correlations. In addition, an encouraging property regarding correlations between features and labels is that similar features tend to share similar labels. To this end, a latent structure shared (LSS) term is designed, which shares and preserves both the latent feature structure and the latent label structure. Furthermore, we employ the graph regularization technique to guarantee consistency between the original feature space and the latent feature structure space. Finally, we derive the shared latent feature and label structure feature selection (SSFS) method based on the constrained LSS term, and an effective optimization scheme with provable convergence is proposed to solve it. Better experimental results on benchmark datasets are achieved in terms of multiple evaluation criteria.

Exploration in environments with continuous control and sparse rewards remains a key challenge in reinforcement learning (RL). One approach to encouraging more systematic and efficient exploration relies on surprise as an intrinsic reward for the agent. We introduce a new definition of surprise and its RL implementation, named variational assorted surprise exploration (VASE). VASE uses a Bayesian neural network as a model of the environment dynamics and is trained using variational inference, alternately updating the accuracy of the agent's model and its policy. Our experiments show that in continuous-control sparse-reward environments, VASE outperforms other surprise-based exploration techniques.

Semisupervised learning has been widely applied to deep generative models such as the variational autoencoder. However, there is still limited work on noise-robust semisupervised deep generative models in which noise exists in both the data and the labels simultaneously, referred to as outliers and noisy labels, or compound noise. In this article, we propose a novel noise-robust semisupervised deep generative model that jointly tackles noisy labels and outliers in a unified robust semisupervised variational autoencoder randomized generative adversarial network (URSVAE-GAN). Specifically, we consider the uncertainty of the information in the input data in order to enhance the robustness of the variational encoder toward noisy data in our unified robust semisupervised variational autoencoder (URSVAE). Subsequently, to alleviate the detrimental effects of noisy labels, a denoising layer is naturally integrated into the semisupervised variational autoencoder so that the variational inference is conditioned on the corrected labels. Moreover, to enhance the robustness of the variational inference in the presence of outliers, the robust β-divergence measure is employed to derive a novel variational lower bound, which already achieves competitive performance. This further motivates the development of URSVAE-GAN, which collapses the decoder of URSVAE and the generator of a robust semisupervised generative adversarial network into one unit. By applying this end-to-end denoising scheme in the joint optimization, the experimental results demonstrate the superiority of the proposed framework on image classification and face recognition tasks in comparison with state-of-the-art approaches.

The non-Euclidean nature of graph structures poses interesting challenges when deep learning methods are applied. Graph convolutional networks (GCNs) can be regarded as one of the successful approaches to classification tasks on graph data, although the structure of this approach limits its performance. In this work, a novel representation learning approach is introduced based on spectral convolutions on graph-structured data in a semisupervised learning setting. Our proposed method, COnvOlving cLiques (COOL), is constructed as a neighborhood aggregation approach for learning node representations using established GCN architectures. The approach relies on aggregating local information by finding maximal cliques. Unlike existing graph neural networks, which follow a traditional neighborhood averaging scheme, COOL aggregates densely connected neighboring nodes of potentially differing locality. This leads to substantial improvements on multiple transductive node classification tasks.
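The clique-based aggregation idea can be made concrete with a small sketch: enumerate the maximal cliques containing each node and pool the features of all clique co-members instead of only the node's direct neighbors. This is a generic illustration of aggregation over maximal cliques using NetworkX and NumPy, not the COOL architecture itself, which builds on GCN layers.

<syntaxhighlight lang="python">
import networkx as nx
import numpy as np

def clique_aggregate(graph: nx.Graph, features: np.ndarray) -> np.ndarray:
    """Aggregate node features over maximal cliques instead of 1-hop neighborhoods.

    graph: undirected graph whose nodes are 0..n-1
    features: (n, d) node feature matrix
    Returns an (n, d) matrix where each node's new feature is a size-weighted
    average of the mean features of the maximal cliques containing that node.
    """
    n, d = features.shape
    agg = np.zeros((n, d))
    counts = np.zeros(n)

    # networkx.find_cliques enumerates the maximal cliques of the graph.
    for clique in nx.find_cliques(graph):
        clique_sum = features[clique].sum(axis=0)
        for node in clique:
            # Each node receives the summed features of its clique co-members.
            agg[node] += clique_sum
            counts[node] += len(clique)

    return agg / counts[:, None]

# Illustrative usage on a toy graph.
G = nx.karate_club_graph()
X = np.random.rand(G.number_of_nodes(), 8)
H = clique_aggregate(G, X)  # densely connected nodes pool information together
</syntaxhighlight>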
Ridge regression (RR) has been commonly used in machine learning but faces computational challenges in big-data applications. To meet these challenges, this article develops a highly parallel new algorithm, an accelerated maximally split alternating direction method of multipliers (A-MS-ADMM), for a class of generalized RR (GRR) that allows different regularization factors for different regression coefficients. Linear convergence of the new algorithm is established along with its convergence ratio. Optimal algorithm parameters for the GRR with a particular set of regularization factors are derived, and a selection scheme for the algorithm parameters for the GRR with general regularization factors is also discussed. The new algorithm is then applied to the training of single-layer feedforward neural networks. Experimental results on real-world benchmark datasets for regression and classification, together with comparisons with existing methods, demonstrate the fast convergence, low computational complexity, and high parallelism of the new algorithm.
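For reference, generalized ridge regression differs from standard RR only in that each coefficient gets its own regularization factor, so the normal equations use a diagonal matrix of factors instead of a single scalar. The sketch below shows this baseline closed-form solution in NumPy; the article's contribution is a parallel ADMM solver for the same objective, which this sketch does not reproduce.

<syntaxhighlight lang="python">
import numpy as np

def generalized_ridge(X, y, reg_factors):
    """Closed-form generalized ridge regression (GRR).

    Minimizes ||X w - y||^2 + sum_i reg_factors[i] * w[i]^2, i.e., ridge
    regression with a separate regularization factor per coefficient.
    """
    Lam = np.diag(reg_factors)                      # per-coefficient penalties
    return np.linalg.solve(X.T @ X + Lam, X.T @ y)  # (X^T X + Lambda)^{-1} X^T y

# Example: heavier penalty on the second coefficient.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
w = generalized_ridge(X, y, reg_factors=[0.1, 10.0, 0.1])
</syntaxhighlight>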
