Emborgbates5184



Visible infrared person reidentification (VI-REID) plays a critical role in night-time surveillance applications. Most methods attempt to reduce the cross-modality gap by extracting modality-shared features. However, they neglect the distinct image-level discrepancies among heterogeneous pedestrian images. In this article, we propose a reciprocal bidirectional framework (RBDF) to achieve modality unification before discriminative feature learning. The bidirectional image translation subnetworks learn two opposite mappings between the visible and infrared modalities. In particular, we investigate the characteristics of the latent space and design a novel associated loss that pulls the distributions of the intermediate representations of the two mappings close together. The mutual interaction between the two opposite mappings helps the network generate heterogeneous images that are highly similar to the real images. Hence, concatenating the original and generated images can eliminate the modality gap. During feature learning, an attention-based feature embedding network learns more discriminative representations through identity classification and feature metric learning. Experimental results indicate that our method achieves state-of-the-art performance. For instance, we achieve 54.41% mAP and 57.66% rank-1 accuracy on the SYSU-MM01 dataset, outperforming existing works by a large margin.

This article addresses the scaled consensus problem for a class of heterogeneous multiagent systems (MASs) with a cascade-type two-layer structure. It is assumed that the information of the upper layer state components is intermittently exchangeable through a strongly connected communication network among the agents. A distributed hierarchical hybrid control framework is proposed, which consists of a lower layer controller and an upper layer one. The lower layer controller is a decentralized continuous feedback controller, which makes the lower layer state components converge to their target values. The upper layer controller is a distributed impulsive controller, which enforces scaled consensus for the upper layer state components. It is proved that the two controllers can be designed separately. By considering the dwell-time condition of the impulses and the properties of the Laplacian matrix of the strongly connected network, a novel weighted discontinuous function is constructed for the scaled consensus analysis. Using a Lyapunov function, a sufficient condition for scaled consensus of the MAS is derived in terms of linear matrix inequalities. As an application of the proposed distributed hybrid control strategy, a relaxed distributed hybrid secondary control algorithm for dc microgrids is obtained, by which the balance requirement on the communication digraph is removed and an improved current-sharing condition is obtained.
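To make the upper-layer mechanism concrete, the minimal NumPy sketch below simulates scaled consensus under purely impulsive updates: at each impulse instant, every agent nudges its scaled state x_i/s_i toward its neighbors' scaled states, so all x_i/s_i converge to a common value. The three-agent digraph, scaling factors, and impulsive gain are illustrative assumptions, not the controller design or LMI conditions from the article.

```python
# Minimal sketch of scaled consensus via impulsive updates on the
# upper-layer states only (illustrative parameters, not the article's design).
import numpy as np

# Weighted adjacency of a strongly connected 3-agent digraph
# (a_ij > 0 means agent i receives information from agent j at impulses).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
s = np.array([1.0, 2.0, -1.5])   # scaling factors: want x_i / s_i -> common value
x = np.array([4.0, -3.0, 5.0])   # initial upper-layer states
eps = 0.3                        # impulsive gain (small enough for convergence)

for k in range(200):             # impulse instants t_1, t_2, ...
    y = x / s                    # scaled states that should reach consensus
    x = x + eps * s * (A @ y - A.sum(axis=1) * y)   # impulsive jump

print("x   =", x)       # final states
print("x/s =", x / s)   # approximately equal entries => scaled consensus
```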
Deep metric learning has proven attractive for the zero-shot image retrieval and clustering (ZSRC) task, in which a good embedding/metric is required so that unseen classes can be distinguished well. Most existing works take this "good" embedding to be simply a discriminative one and race to devise powerful metric objectives or hard-sample mining strategies for learning discriminative deep metrics. In this article, however, we first emphasize that generalization ability is also a core ingredient of a "good" metric and that, as a matter of fact, it largely affects metric performance in zero-shot settings. We then propose the confusion-based metric learning (CML) framework to explicitly optimize a robust metric. This is mainly achieved by introducing two regularization terms, the energy confusion (EC) and diversity confusion (DC) terms, which deliberately break away from the traditional deep metric learning idea of designing discriminative objectives and instead seek to "confuse" the learned model. The two confusion terms address local and global feature distribution confusion, respectively. We train these confusion terms together with the conventional deep metric objective in an adversarial manner. Although it may seem counterintuitive to "confuse" the model during learning, we show that CML serves as an efficient regularization framework for deep metric learning and is applicable to various conventional metric methods. This article demonstrates empirically the importance of learning an embedding/metric with good generalization, achieving state-of-the-art performance on the popular CUB, CARS, Stanford Online Products, and In-Shop datasets for ZSRC tasks.

Unknown examples that are unseen during training often appear in real-world pattern recognition tasks, and an intelligent self-learning system should be able to distinguish between known and unknown examples. Accordingly, open-set recognition (OSR), which addresses the problem of classifying knowns and identifying unknowns, has recently been highlighted. However, conventional deep neural networks (DNNs) using a softmax layer are vulnerable to overgeneralization, producing high confidence scores for unknowns. In this article, we propose a simple OSR method based on the intuition that OSR performance can be maximized by setting strict and sophisticated decision boundaries that reject unknowns while maintaining satisfactory classification performance for knowns. For this purpose, we propose a novel network structure in which multiple one-vs-rest networks (OVRNs) follow a convolutional neural network (CNN) feature extractor. Here, an OVRN is a simple feedforward neural network designed to assign unknown samples lower confidence scores than a softmax layer would, so that unknown samples can be separated from known classes more effectively. Furthermore, a collective decision score is modeled by combining the multiple decisions reached by the OVRNs to alleviate overgeneralization. Extensive experiments were conducted on various datasets, and the results show that the proposed method performs significantly better than state-of-the-art methods by effectively reducing overgeneralization. The code is available at https://github.com/JaeyeonJang/Openset-collective-decision.
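As a rough illustration of the OVRN structure, the PyTorch sketch below attaches K sigmoid one-vs-rest heads to a toy CNN backbone and combines their outputs into a collective score. The particular combination rule (the target head fires while every other head rejects), the toy backbone, and the rejection threshold are assumptions made for illustration; the article's exact collective decision score may differ, and the released code above is authoritative.

```python
# Hedged sketch of a CNN feature extractor followed by K one-vs-rest heads.
import torch
import torch.nn as nn

class OVRNModel(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(          # toy CNN feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU(),
        )
        # one small feedforward one-vs-rest head per known class
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))
            for _ in range(num_classes)
        )

    def forward(self, x):
        f = self.backbone(x)
        return torch.cat([h(f) for h in self.heads], dim=1)  # (B, K) logits

def collective_score(logits):
    """Score class k by: head k fires AND every other head rejects."""
    p = torch.sigmoid(logits)                              # (B, K)
    log_p, log_not_p = torch.log(p + 1e-8), torch.log(1 - p + 1e-8)
    total_reject = log_not_p.sum(dim=1, keepdim=True)      # sum over all heads
    return log_p + total_reject - log_not_p                # exclude own head

model = OVRNModel(num_classes=5)
scores = collective_score(model(torch.randn(2, 1, 28, 28)))
best, pred = scores.max(dim=1)
is_unknown = best < -10.0   # illustrative threshold; tune on validation data
print(pred, is_unknown)
```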
Knowledge distillation (KD) has become a widely used technique for model compression and knowledge transfer. We find that the standard KD method performs knowledge alignment on individual samples indirectly via class prototypes and neglects the structural knowledge between different samples, namely, knowledge correlation. Although recent contrastive learning-based distillation methods can be decomposed into knowledge alignment and correlation, their correlation objectives undesirably push apart representations of samples from the same class, leading to inferior distillation results. To improve distillation performance, in this work we propose a novel knowledge correlation objective and introduce dual-level knowledge distillation (DLKD), which explicitly combines knowledge alignment and correlation rather than relying on a single contrastive objective. We show that both knowledge alignment and correlation are necessary to improve distillation performance. In particular, knowledge correlation can serve as an effective regularizer for learning generalized representations. The proposed DLKD is task-agnostic and model-agnostic and enables effective knowledge transfer from supervised or self-supervised pretrained teachers to students. Experiments show that DLKD outperforms other state-of-the-art methods across a large number of experimental settings, including 1) pretraining strategies; 2) network architectures; 3) datasets; and 4) tasks.
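The PyTorch sketch below illustrates the dual-level idea: a per-sample alignment term (the classic softened-logit KL divergence) plus a batch-level correlation term that matches pairwise similarity structure between teacher and student features. The correlation term shown is a generic relational surrogate and the 1.0 weighting is arbitrary; the article's DLKD objective is defined differently in detail.

```python
# Hedged sketch of an alignment + correlation distillation objective.
import torch
import torch.nn.functional as F

def alignment_loss(student_logits, teacher_logits, T=4.0):
    # classic KD: match each sample's softened class distribution
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T

def correlation_loss(student_feats, teacher_feats):
    # match batch-level pairwise cosine-similarity matrices
    # (structural knowledge between samples, not per-sample alignment)
    zs = F.normalize(student_feats, dim=1)
    zt = F.normalize(teacher_feats, dim=1)
    return F.mse_loss(zs @ zs.t(), zt @ zt.t())

# toy batch: 8 samples, 10 classes, 128-dim features
s_logits, t_logits = torch.randn(8, 10), torch.randn(8, 10)
s_feats, t_feats = torch.randn(8, 128), torch.randn(8, 128)
loss = alignment_loss(s_logits, t_logits) + 1.0 * correlation_loss(s_feats, t_feats)
print(loss.item())
```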
The simultaneous-source technology for high-density seismic acquisition is a key solution for efficient seismic surveying. It is a cost-effective method in which blended subsurface responses are recorded within a short time interval using multiple seismic sources. A subsequent deblending process, however, is needed to separate the signals contributed by the individual sources. Recent advances in deep learning and its data-driven approach to feature engineering have led to many new applications for a variety of seismic processing problems. It remains a challenge, though, to collect enough labeled data and to avoid model overfitting and poor generalization across datasets that bear little resemblance to one another. In this article, we propose a novel self-supervised learning method that solves the deblending problem without labeled training datasets. Using a blind-trace deep neural network and a carefully crafted blending loss function, we demonstrate that the individual source-response pairs can be accurately separated under three different blended-acquisition designs.

This article aims to unify spatial dependency and temporal dependency in a non-Euclidean space while capturing the inner spatial-temporal dependencies of traffic data. For spatial-temporal attribute entities with topological structure, space-time is continuous and unified, while each node's current state is influenced by its neighbors' past states over periods that vary from neighbor to neighbor. Most spatial-temporal neural networks for traffic forecasting process spatial dependency and temporal correlation separately, which gravely impairs spatial-temporal integrity, and they ignore the fact that a node's neighbors can have delayed and dynamic temporal dependency periods. To model this actual condition, we propose TraverseNet, a novel spatial-temporal graph neural network that views space and time as an inseparable whole and mines spatial-temporal graphs while exploiting the evolving spatial-temporal dependencies of each node via message traverse mechanisms. Experiments with ablation and parameter studies have validated the effectiveness of the proposed TraverseNet, and the detailed implementation can be found at https://github.com/nnzhan/TraverseNet.

This article studies the hierarchical sliding-mode surface (HSMS)-based adaptive optimal control problem for a class of switched continuous-time (CT) nonlinear systems with unknown perturbation under an actor-critic (AC) neural network (NN) architecture. First, a novel perturbation observer with a nested parameter adaptive law is designed to estimate the unknown perturbation. Then, by constructing a special cost function related to the HSMS, the original control problem is converted into that of finding a series of optimal control policies. The solution to the Hamilton-Jacobi-Bellman (HJB) equation is identified by the HSMS-based AC NNs, where the actor and critic updating laws are developed to implement the reinforcement learning (RL) strategy simultaneously. The critic update law is designed via the gradient descent approach and the principle of standardization, so that the persistence of excitation (PE) condition is no longer needed. Based on Lyapunov stability theory, all signals of the closed-loop switched nonlinear system are strictly proved to be bounded in the sense of uniform ultimate boundedness (UUB). Finally, simulation results are presented to verify the validity of the proposed adaptive optimal control scheme.

Subsampling is an important technique for tackling the computational challenges brought by big data. Many subsampling procedures fall within the framework of importance sampling, which assigns high sampling probabilities to the samples that appear to have large impacts. When the noise level is high, those sampling procedures tend to pick many outliers and thus often do not perform satisfactorily in practice. To tackle this issue, we design a new Markov subsampling strategy based on the Huber criterion (HMS) to construct an informative subset from the noisy full data; the constructed subset then serves as refined working data for efficient processing. HMS is built upon a Metropolis-Hastings procedure, where the inclusion probability of each sampling unit is determined using the Huber criterion to prevent over-scoring the outliers. Under mild conditions, we show that the estimator based on the subsamples selected by HMS is statistically consistent with a sub-Gaussian deviation bound. The promising performance of HMS is demonstrated by extensive studies on large-scale simulations and real-data examples.
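Read at face value, the HMS recipe is simple enough to prototype. The NumPy sketch below runs a Metropolis-Hastings walk over sample indices whose acceptance ratio is driven by the bounded Huber criterion of residuals from a cheap pilot fit, so gross outliers do not dominate the subsample. The pilot model, symmetric uniform proposal, and the sizes are illustrative assumptions rather than the article's exact algorithm.

```python
# Hedged sketch of Huber-based Markov subsampling for noisy linear regression.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 5
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)
y = X @ beta + rng.normal(scale=0.5, size=n)
y[rng.choice(n, 200, replace=False)] += rng.normal(scale=20, size=200)  # outliers

# pilot fit on a small uniform subsample to get residuals
pilot = rng.choice(n, 500, replace=False)
b_hat, *_ = np.linalg.lstsq(X[pilot], y[pilot], rcond=None)
r = y - X @ b_hat

def huber(u, delta=1.0):
    # quadratic near zero, linear in the tails: bounds an outlier's influence
    a = np.abs(u)
    return np.where(a <= delta, 0.5 * u**2, delta * (a - 0.5 * delta))

score = np.exp(-huber(r))   # bounded score: outliers are not over-weighted

# Metropolis-Hastings over indices with symmetric uniform proposals
m, current = 1000, rng.integers(n)
subset = []
while len(subset) < m:
    cand = rng.integers(n)
    if rng.uniform() < min(1.0, score[cand] / score[current]):
        current = cand
    subset.append(current)          # chain states form the working subset

b_sub, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
print("subsample estimate error:", np.linalg.norm(b_sub - beta))
```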

Article authors: Emborgbates5184 (Duelund Mckay)