Bennetsendaniel1578

From Iurium Wiki

This article develops a dynamic version of event-triggered model predictive control (MPC) without any terminal constraint. The dynamic event-triggering mechanism combines the advantages of event- and self-triggering approaches by explicitly addressing conservatism in the triggering rate and the measurement frequency. The prediction horizon shrinks as the system states converge, and we prove that the proposed strategy stabilizes the system even without a stability-related terminal constraint. Recursive feasibility of the optimal control problem (OCP) is also guaranteed. Simulation results illustrate the effectiveness of the scheme.

This article studies a distributed model predictive control (DMPC) strategy for a class of discrete-time linear systems subject to globally coupled constraints. To reduce the computational burden, a constraint-tightening technique is adopted to enable early termination of the distributed optimization algorithm. Using the Lagrangian method, we convert the constrained optimization problem of the proposed DMPC into an unconstrained saddle-point seeking problem. Because a global dual variable appears in the Lagrangian function, we propose a primal-dual algorithm based on Laplacian consensus that solves this problem in a distributed manner by introducing local estimates of the dual variable. We show the geometric convergence of the primal-dual gradient optimization algorithm via contraction theory in the context of discrete-time updating dynamics. The exact convergence rate is obtained, which bounds the number of iterations before stopping. Recursive feasibility of the proposed DMPC strategy and stability of the closed-loop system are established under the inexact solution. A numerical simulation demonstrates the performance of the proposed strategy.

Object clustering has recently received considerable research attention.
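The primal-dual, Laplacian-consensus iteration described in the DMPC abstract above can be sketched on a toy two-agent problem. Everything below (the quadratic costs, the coupled budget, the step sizes) is an illustrative assumption, not the paper's algorithm or data:

```python
# Two agents minimize (x_i - a_i)^2 subject to the coupled constraint
# x_1 + x_2 <= c.  Each agent keeps a local estimate lam[i] of the global
# dual variable; a Laplacian consensus term drives the estimates toward
# agreement while a projected primal-dual gradient step seeks the saddle
# point of the Lagrangian.
a = [3.0, 2.0]                   # local cost centres (assumed data)
c = 3.0                          # coupled resource budget (assumed data)
alpha, beta = 0.05, 0.5          # gradient and consensus step sizes
L = [[1.0, -1.0], [-1.0, 1.0]]   # Laplacian of the 2-node communication graph

x = [0.0, 0.0]
lam = [0.0, 0.0]
for _ in range(5000):
    # primal descent on each agent's local Lagrangian
    x = [x[i] - alpha * (2.0 * (x[i] - a[i]) + lam[i]) for i in range(2)]
    # projected dual ascent with Laplacian consensus on the dual estimates
    lam = [max(0.0, lam[i]
               + alpha * (x[i] - c / 2.0)
               - beta * sum(L[i][j] * lam[j] for j in range(2)))
           for i in range(2)]

print([round(v, 2) for v in x])  # settles near the constrained optimum (2, 1)
```

Summing the two dual updates shows why this works: the consensus terms cancel, so at a fixed point the coupled constraint is satisfied even though each agent only sees its own share of it.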
However, 1) most existing object clustering methods use visual information while ignoring the tactile modality, which inevitably degrades model performance, and 2) simply concatenating visual and tactile information in a multiview clustering method leaves complementary information underexplored, since vision and touch differ in many respects. To address these issues, we put forward a graph-based visual-tactile fused object clustering framework with two modules: 1) a modality-specific representation learning module (MR) and 2) a unified affinity graph learning module (MU). Specifically, MR focuses on learning modality-specific representations for visual-tactile data, where deep non-negative matrix factorization (NMF) is adopted to extract the hidden information behind each modality. Meanwhile, we employ an autoencoder-like structure to enhance the robustness of the learned representations, and two graphs to improve their compactness. Furthermore, MU mitigates the differences between vision and touch and maximizes the mutual information, adopting a disagreement-minimizing scheme to guide the modality-specific representations toward a unified affinity graph. To achieve ideal clustering performance, a Laplacian rank constraint is imposed to regularize the learned graph toward ideally connected components, so that noise causing wrong connections is removed and clustering labels can be obtained directly. Finally, we propose an efficient alternating iterative minimization updating strategy, together with a theoretical proof of the framework's convergence. Comprehensive experiments on five public datasets demonstrate the superiority of the proposed framework.

By training different models and averaging their predictions, the performance of a machine-learning algorithm can be improved. Optimizing multiple models together is expected to generalize well to further data.
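The claim that averaging the predictions of several trained models improves performance can be illustrated with a deliberately simple sketch, in which each "model" is the true function plus a fixed systematic bias (all values below are invented for illustration):

```python
# Ground-truth function and five stand-in "models": the true function plus
# a fixed systematic bias each, chosen so the biases roughly cancel on average.
target = lambda z: 2.0 * z + 1.0
biases = [0.4, -0.3, 0.5, -0.45, 0.35]
models = [(lambda z, b=b: target(z) + b) for b in biases]

xs = [0.1 * k for k in range(20)]   # evaluation points

def mse(predict):
    """Mean squared error of a predictor against the ground truth."""
    return sum((predict(z) - target(z)) ** 2 for z in xs) / len(xs)

single = mse(models[0])                                   # bias 0.4 -> MSE 0.16
ensemble = mse(lambda z: sum(m(z) for m in models) / 5)   # mean bias 0.1 -> MSE 0.01
print(single, ensemble)
```

The individual errors do not cancel within any one model, but averaging the predictors averages their biases, so the ensemble's squared error drops from 0.16 to 0.01 here.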
This requires transferring generalization information between models. In this article, a multiple-kernel mutual learning method based on transfer learning of combined mid-level features is proposed for hyperspectral classification. Three layers of homogeneous superpixels are computed on the image formed by PCA, which is used for computing mid-level features. The three mid-level features are 1) the sparse reconstructed feature; 2) the combined mean feature; and 3) the uniqueness. The sparse reconstructed feature is obtained by a joint sparse representation model under the constraint of the three-scale superpixels' boundaries and regions. The combined mean features are computed as average values of the spectra in multilayer superpixels, and the uniqueness is obtained from the superposed manifold-ranking values of multilayer superpixels. Next, three kernels of the samples in the different feature spaces are computed for mutual learning by minimizing their divergence. A combined kernel is then constructed to optimize the sample distance measurement and used with SVM training to build classifiers. Experiments on real hyperspectral datasets demonstrate that the proposed method performs significantly better than several state-of-the-art competing algorithms based on multiple kernel learning (MKL) and deep learning.

People can infer the weather from clouds. Various weather phenomena are inextricably linked to clouds, which can be observed by meteorological satellites. Cloud images obtained by meteorological satellites can therefore be used to identify different weather phenomena, providing meteorological status and future projections. How to classify and recognize cloud images automatically, especially with deep learning, is an interesting topic. Generally speaking, large-scale training data are essential for deep learning; however, no such cloud image database exists to date.
Thus, we propose a large-scale cloud image database for meteorological research (LSCIDMR). To the best of our knowledge, it is the first publicly available satellite cloud image benchmark database for meteorological research in which weather systems are linked directly with cloud images. LSCIDMR contains 104,390 high-resolution images covering 11 classes with two annotation methods: 1) single-label annotation and 2) multiple-label annotation, called LSCIDMR-S and LSCIDMR-M, respectively. The labels are annotated manually, yielding a total of 414,221 multiple labels and 40,625 single labels. Several representative deep learning methods are evaluated on LSCIDMR, and the results can serve as useful baselines for future research. Furthermore, the experimental results demonstrate that effective deep learning models for cloud image classification can be learned from a sufficiently large image database.

Clustering is one of the fundamental tasks in computer vision and pattern recognition. Recently, deep clustering methods (algorithms based on deep learning) have attracted wide attention with their impressive performance. Most of these algorithms combine deep unsupervised representation learning with standard clustering. However, separating representation learning from clustering leads to suboptimal solutions, because the two-stage strategy prevents representation learning from adapting to subsequent tasks (e.g., clustering according to specific cues). To overcome this issue, efforts have been made to adapt the representation and the cluster assignment dynamically, whereas current state-of-the-art methods suffer from heuristically constructed objectives in which the representation and the cluster assignment are optimized alternately. To further standardize the clustering problem, we audaciously formulate the objective of clustering as finding a precise feature that serves as the cue for cluster assignment.
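Treating clustering as producing a confident cluster assignment can be made concrete with a small sketch: the entropy of a soft assignment distribution measures how indecisive it is, and pushing that entropy down sharpens the assignment. The temperature-based sharpening below is one generic device for illustration, not the dedicated variational algorithm referred to in the surrounding text:

```python
import math

# Soft cluster-assignment distribution for one sample (assumed values):
# the sample is ambiguously split across three clusters.
q = [0.4, 0.35, 0.25]

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def sharpen(p, t=0.5):
    """Lower-temperature renormalisation: raise probabilities to 1/t and
    renormalise, pushing mass toward the argmax cluster."""
    w = [pi ** (1.0 / t) for pi in p]
    s = sum(w)
    return [wi / s for wi in w]

print(entropy(q), entropy(sharpen(q)))  # entropy decreases after sharpening
```

Minimizing this entropy over all samples (jointly with the representation, as in the framework described next) rewards assignments that commit each sample to one cluster.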
Based on this, we propose a general-purpose deep clustering framework that, for the first time, radically integrates representation learning and clustering into a single pipeline. The proposed framework exploits the power of recently developed generative models for learning intrinsic features, and imposes entropy minimization on the distribution of cluster assignments via a dedicated variational algorithm. Experimental results show that the performance of the proposed method is superior, or at least comparable, to state-of-the-art methods on handwritten digit, fashion, face, and object recognition benchmark datasets.

In this article, a robust k-winner-take-all (k-WTA) neural network employing saturation-allowed activation functions is designed and investigated to perform the k-WTA operation; it is shown to possess enhanced robustness to disturbance compared with existing k-WTA neural networks. Global convergence and robustness of the proposed k-WTA neural network are demonstrated through analysis and simulations. An application studied in detail is competitive multiagent coordination and dynamic task allocation, in which k active agents (among m, with m > k) are allocated to execute a tracking task while the remaining m − k agents stay static. This is implemented by adopting a distributed k-WTA network with limited communication, aided by a consensus filter. Simulation results demonstrating the system's efficacy and feasibility are presented.

This work proposes a novel event-triggered exponential supertwisting algorithm (ESTA) for path tracking of a mobile robot. The work is divided into three parts. In the first part, an event-triggered exponential supertwisting controller based on a fractional-order sliding surface is proposed. The fractional-order sliding surface improves the transient response, and the exponential supertwisting reaching law shortens the reaching phase and eliminates chattering.
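The supertwisting idea referenced above can be sketched in its classical, non-event-triggered form: a continuous control drives the sliding variable s to zero in finite time without the high-frequency switching of first-order sliding mode. The gains, the step size, and the nominal dynamics s' = u below are illustrative assumptions; the paper's exponential, event-triggered variant adds further structure:

```python
import math

def sign(v):
    return (v > 0) - (v < 0)

# Classical supertwisting update: the control is continuous in s (the
# discontinuous sign() is hidden under an integral), which is what tames
# chattering compared with first-order sliding mode.
k1, k2, dt = 1.5, 1.1, 1e-3
s, v = 2.0, 0.0              # sliding variable and integral term (assumed start)
for _ in range(20000):       # 20 s of simulated time
    u = -k1 * math.sqrt(abs(s)) * sign(s) + v
    v += -k2 * sign(s) * dt  # integral of the switching term
    s += u * dt              # nominal dynamics s' = u (disturbance omitted)

print(abs(s))  # driven close to zero
```

The square-root term gives a reaching speed that grows away from the surface, while the integrated switching term rejects (bounded, Lipschitz) disturbances; in this undisturbed sketch it simply spirals s and v into the origin.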
The event-triggering condition is derived using the Lipschitz method for minimum actuator utilization, and the inter-execution time between two events is derived. In the second part, a fault estimator is designed to estimate the actuator fault using Lyapunov stability theory. Furthermore, it is shown that in the presence of matched and unmatched uncertainty the performance of the event-triggered controller degrades. Hence, in the third part, an integral sliding-mode controller (ISMC) is combined with the event-triggered ESTA to filter out the uncertainties. It is also shown that when the fault-estimator-based ESTA is combined with the ISMC, the robustness of the controller increases and the tracking performance improves. This technique is robust to uncertainty and faults, offers finite-time convergence, reduces chattering, and minimizes resource utilization. Simulation and experimental studies validate the advantages of the proposed controller over existing methods.

We present Arianna⁺, a framework for designing networks of ontologies that represent the knowledge enabling smart homes to perform online human activity recognition. In the network, nodes are ontologies allowing for various contextualisations of data, while edges are general-purpose computational procedures that elaborate data. Arianna⁺ provides a flexible interface between the inputs and outputs of procedures and statements, which are atomic representations of ontological knowledge. Arianna⁺ schedules procedures on the basis of events by employing logic-based reasoning, that is, by checking the classification of certain statements in the ontologies. Each procedure involves input and output statements that are contextualized differently in the ontologies based on specific prior knowledge.
Arianna⁺ allows designing networks that encode data within multiple contexts; as a reference scenario, we present a modular network based on a spatial context shared among all activities and a temporal context specialized for each activity to be recognized.
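The event-driven scheduling described above, running a procedure when certain statements acquire a given classification, can be caricatured in a few lines. The names, the keyword-based "reasoner", and the registration decorator are all invented for illustration and are not Arianna⁺'s interface:

```python
# Toy sketch of event-driven procedure scheduling keyed on statement
# classification.  A real system would classify statements with an
# ontology reasoner; here classify() is a trivial keyword stand-in.
procedures = {}   # classification -> procedures triggered by it

def on(classification):
    """Register a procedure to run when a statement gets this classification."""
    def register(proc):
        procedures.setdefault(classification, []).append(proc)
        return proc
    return register

log = []

@on("InKitchen")
def check_cooking(statement):
    log.append(f"spatial context: evaluate cooking for '{statement}'")

@on("MorningTime")
def check_breakfast(statement):
    log.append(f"temporal context: evaluate breakfast for '{statement}'")

def classify(statement):
    if "kitchen" in statement:
        yield "InKitchen"
    if "07:30" in statement:
        yield "MorningTime"

def publish(statement):
    """An event fires: schedule every procedure whose trigger class now holds."""
    for c in classify(statement):
        for proc in procedures.get(c, []):
            proc(statement)

publish("user in kitchen at 07:30")
print(log)
```

One statement here triggers procedures in two different contexts (spatial and temporal), mirroring how the modular network shares a spatial context across activities while specializing a temporal one per activity.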

Article authors: Bennetsendaniel1578 (Raahauge Strong)