Subspace clustering is a popular method for discovering the underlying low-dimensional structures of high-dimensional multimedia data (e.g., images, videos, and texts). In this article, we consider a large-scale subspace clustering (LS²C) problem, that is, partitioning a million data points with a million dimensions. To address this, we explore a distributed and parallel framework that divides the big data/variable matrices and the regularization by both columns and rows. Specifically, LS²C is decomposed into many independent subproblems by distributing these matrices across different machines by columns, since the regularization of the code matrix equals the sum of the regularizations of its submatrices (e.g., the squared Frobenius norm or the ℓ₁-norm). Consensus optimization is designed to solve these subproblems in parallel while saving communication costs. Moreover, we provide theoretical guarantees that LS²C can recover consensus subspace representations of high-dimensional data points under broad conditions. Compared with state-of-the-art LS²C methods, our approach achieves better clustering results on public datasets, including a million images and videos.

This article investigates the resilient event-triggered (ET) distributed state estimation problem for nonlinear systems under denial-of-service (DoS) attacks. Unlike existing results, which mainly consider linear or specific classes of nonlinear systems, this study considers more general nonlinear systems. Moreover, the considered DoS attacks can compromise different communication links among estimators independently. In this context, by resorting to incremental-homogeneity techniques, a nonlinear ET distributed estimation scheme is designed to estimate the states and regulate the data transmission. Under this scheme, resilient state estimation is achieved by employing a multimode switching estimator, and the loss of efficiency of the ET mechanism caused by DoS attacks is addressed by designing a dynamic trigger threshold with switched update laws. Then, based on the decay rates of the Lyapunov function corresponding to different communication modes, sufficient conditions are given to guarantee the stability of the estimation error system under DoS attacks. Finally, simulation results are provided to verify the effectiveness of the proposed method.

Concept drift refers to changes in the underlying data distribution of a data stream over time. A well-trained model becomes outdated once concept drift occurs. Once concept drift is detected, it is necessary to understand where the drift occurs in order to support the drift adaptation strategy and effectively update the outdated model. This process, called drift understanding, has rarely been studied. To fill this gap, this article develops a drift region-based data sample filtering method to update the obsolete model and accurately track the new data pattern. The proposed method can effectively identify the drift region and use information about the drift region to filter the data samples used for training. A theoretical proof guarantees that the identified drift region converges uniformly to the real drift region as the sample size increases. Experimental evaluations on four synthetic datasets and two real-world datasets demonstrate that our method improves learning accuracy when dealing with data streams involving concept drift.
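As a minimal numerical illustration of the separability that underpins the column-wise decomposition in the subspace-clustering abstract above, the NumPy sketch below (all names illustrative, not the authors' code) checks that both the squared Frobenius norm and the ℓ₁-norm of a code matrix equal the sum of the corresponding norms of its column blocks, which is why the regularizer can be evaluated independently on each machine.

```python
# Minimal sketch: column-wise separability of the regularizers mentioned above.
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((50, 200))          # code matrix
blocks = np.array_split(C, 4, axis=1)       # distribute columns over 4 workers

frob_sq_full = np.linalg.norm(C, "fro") ** 2
frob_sq_split = sum(np.linalg.norm(B, "fro") ** 2 for B in blocks)

l1_full = np.abs(C).sum()
l1_split = sum(np.abs(B).sum() for B in blocks)

# both regularizers decompose exactly over the column blocks
assert np.isclose(frob_sq_full, frob_sq_split)
assert np.isclose(l1_full, l1_split)
```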
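For the event-triggered estimation abstract above, the following is a hedged, simplified sketch of a dynamic triggering rule with a switched threshold update: the estimator transmits only when its local innovation exceeds a time-varying threshold, and the threshold is updated with different laws depending on whether the link is currently under DoS, so that triggering does not degenerate while the channel is blocked. The constants, the update laws, and the function name et_step are assumptions for illustration, not the paper's design.

```python
# Hedged sketch of a dynamic event-triggered transmission test with a
# switched threshold update (all parameters are illustrative assumptions).
import numpy as np

def et_step(x_hat, x_last_sent, eta, dos_active,
            sigma=0.5, rho_free=0.9, rho_dos=1.1):
    """Return (transmit?, possibly updated last-sent estimate, new threshold)."""
    err2 = float(np.sum((x_hat - x_last_sent) ** 2))
    transmit = (err2 > sigma * eta) and not dos_active
    if transmit:
        x_last_sent = x_hat.copy()
    # switched update law: contract the threshold on a healthy link,
    # inflate it while a DoS attack blocks the channel
    eta = rho_free * eta + err2 if not dos_active else rho_dos * eta
    return transmit, x_last_sent, eta
```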
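For the drift-understanding abstract above, the sketch below conveys the general idea of drift-region-based sample filtering on a single feature: bins where the old and new empirical densities differ markedly are flagged as the estimated drift region, old samples falling inside it are discarded, and training proceeds on the remaining old samples plus the new window. The histogram comparison, threshold, and binning are assumptions and not the paper's identification procedure.

```python
# Illustrative sketch of drift-region-based sample filtering on one feature.
import numpy as np

def filter_by_drift_region(old_x, new_x, n_bins=20, tol=0.05):
    lo = min(old_x.min(), new_x.min())
    hi = max(old_x.max(), new_x.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    p_old, _ = np.histogram(old_x, bins=edges, density=True)
    p_new, _ = np.histogram(new_x, bins=edges, density=True)
    drift_bins = np.abs(p_old - p_new) > tol            # estimated drift region
    old_bin = np.clip(np.digitize(old_x, edges) - 1, 0, n_bins - 1)
    keep_old = ~drift_bins[old_bin]                      # drop old samples inside it
    # labels of the old window would be filtered with the same keep_old mask
    return np.concatenate([old_x[keep_old], new_x])
```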
In some practical systems, it is often difficult to measure all state variables directly. This article investigates memory output sliding-mode control (SMC) for the finite-time consensus of singularly perturbed multiagent systems (SPMASs). First, a virtual state-feedback sliding surface (SFSS) is constructed to ensure the consensus of all agent states. Then, the unknown output derivatives in the SFSS are approximated by a moving finite difference method with error estimation and refinement, which gives rise to a new delay-dependent sliding surface. On this basis, a memory output switching control law is designed to stabilize the consensus errors in finite time, even in the presence of estimation biases, singular perturbations, and input noise. Unlike observer-based SMC, the proposed memory output SMC has a simple static form and does not introduce extra dynamic structures for state estimation. The effectiveness and superiority of the design method are verified on an SPMAS with double-integrator dynamics.

Many scientific and engineering problems can be converted into constrained time-varying quadratic programming (TVQP) problems, so TVQP solving plays an important role in practical applications. Many existing neural networks, such as the gradient neural network (GNN) and the zeroing neural network (ZNN), were designed to solve TVQP problems, but their convergence rate is limited. The recent varying-parameter convergent-differential neural network (VP-CDNN) accelerates the convergence rate, but it can handle only equality-constrained problems. To remedy this deficiency, a novel barrier varying-parameter dynamic learning network (BVDLN) is proposed, which can solve problems with equality, inequality, and bound constraints. Specifically, the constrained TVQP problem is first converted into a matrix equation. Second, based on the modified Karush-Kuhn-Tucker (KKT) conditions and the varying-parameter neural dynamic design method, the BVDLN model is constructed. The proposed BVDLN can solve TVQP problems with multiple constraints, and its convergence rate is superexponential. Comparative simulation experiments verify that the proposed BVDLN is more effective and more accurate. Finally, the proposed BVDLN is applied to a robot motion planning problem, which verifies the applicability of the proposed model.

The clique partitioning problem (CPP) on an edge-weighted complete graph is to partition the vertex set V into k disjoint subsets such that the sum of the edge weights within the cliques induced by the subsets is as large as possible. The problem has a number of practical applications in areas such as data mining, engineering, and bioinformatics, but it is computationally challenging. To solve this NP-hard problem, we propose the first evolutionary algorithm that combines a dedicated merge-divide crossover operator to generate offspring solutions with an effective simulated annealing-based local optimization procedure to find high-quality local optima. Extensive experiments on three sets of 94 benchmark instances (two sets of 63 classical instances and one new set of 31 large instances) show the remarkable performance of the proposed approach compared with state-of-the-art methods. We also analyze the key algorithmic ingredients to shed light on their impact on the performance of the algorithm.
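As a small sketch of the "memory output" idea in the SPMAS abstract above, i.e., replacing an unmeasured output derivative by a moving backward finite difference of stored past outputs rather than by an observer, the code below uses the standard second-order three-point backward formula; the sampling period, the test signal, and the function name are illustrative only.

```python
# Sketch: backward finite-difference approximation of an output derivative
# from stored (delayed) output samples, as a delay-dependent surrogate.
import numpy as np

def backward_diff_derivative(y_hist, h):
    """Approximate dy/dt at the newest sample from the last three samples
    y_hist = [y(t-2h), y(t-h), y(t)] via the second-order backward formula."""
    y_2, y_1, y_0 = y_hist
    return (3.0 * y_0 - 4.0 * y_1 + y_2) / (2.0 * h)

# quick check on y(t) = sin(t): the derivative at t = 1 should be cos(1)
h = 1e-3
hist = [np.sin(1 - 2 * h), np.sin(1 - h), np.sin(1.0)]
print(backward_diff_derivative(hist, h), np.cos(1.0))
```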
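To give a rough feel for the kind of construction the BVDLN abstract refers to, the sketch below forms the KKT matrix equation of an equality-constrained time-varying QP and drives its residual toward zero with a neural-dynamics-style update using a time-varying gain; the barrier handling of inequality and bound constraints and the exact varying-parameter law of BVDLN are not reproduced, and all problem data are assumptions.

```python
# Hedged sketch: time-varying equality-constrained QP via its KKT equation
# W(t) y(t) = u(t), integrated with a simple Euler neural-dynamics step.
import numpy as np

def kkt_system(t):
    Q = np.array([[2.0 + np.sin(t), 0.0], [0.0, 2.0 + np.cos(t)]])   # Q(t) > 0
    q = np.array([np.sin(t), np.cos(t)])
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])
    W = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
    u = np.concatenate([-q, b])
    return W, u

y = np.zeros(3)                      # stacked [x; lambda]
dt, gamma0 = 1e-3, 10.0
for k in range(5000):
    t = k * dt
    W, u = kkt_system(t)
    gamma = gamma0 * (1.0 + t)       # illustrative increasing (varying) gain
    # drive the KKT residual W(t) y - u(t) toward zero
    y = y + dt * np.linalg.solve(W, -gamma * (W @ y - u))
print(y[:2])                         # approximate minimizer x(t) at the final time
```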
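For the clique partitioning abstract above, the sketch below shows only a simulated-annealing relocation step (move one vertex to another clique and accept with the Metropolis rule) on a symmetric weight matrix W with zero diagonal; the merge-divide crossover of the proposed evolutionary algorithm is not shown, and all parameters and names are assumptions.

```python
# Illustrative simulated-annealing local step for clique partitioning.
import numpy as np

def sa_clique_partition(W, n_iters=20000, T0=1.0, alpha=0.9995, seed=0):
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    part = rng.integers(0, n, size=n)          # initial clique label per vertex
    T = T0
    for _ in range(n_iters):
        v = rng.integers(n)                    # vertex to relocate
        new_c = rng.integers(n)                # candidate clique label
        old_c = part[v]
        if new_c == old_c:
            continue
        same_old = (part == old_c); same_old[v] = False
        same_new = (part == new_c); same_new[v] = False
        delta = W[v, same_new].sum() - W[v, same_old].sum()   # objective change
        if delta > 0 or rng.random() < np.exp(delta / T):     # Metropolis rule
            part[v] = new_c
        T *= alpha                              # geometric cooling
    return part
```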