Graph matching, or the determination of the vertex correspondences between a pair of graphs, is a crucial task in many problems across science and engineering. This article proposes a distributed optimization approach for graph matching (GM) between two isomorphic graphs over multiagent networks. For this, we first show that for a class of asymmetric graphs, GM of two isomorphic graphs is equivalent to a convex relaxation in which the set of permutation matrices is replaced by the set of pseudostochastic matrices. We then formulate GM as a distributed convex optimization problem with equality constraints and a set constraint over a network of multiple agents. For arbitrary labelings of the vertices, each agent has information about only one vertex and its neighborhood, and can exchange information with its neighbors. A projected primal-dual gradient method is developed to solve the constrained optimization problem, and global exponential convergence of the agents' states to the optimal permutation is achieved. Finally, we illustrate the effectiveness of the algorithm through simulation examples.

Recently, tensor sparsity modeling has achieved great success in the tensor completion (TC) problem. In real applications, the sparsity of a tensor can be rationally measured by low-rank tensor decomposition. However, existing methods either suffer from limited modeling power in estimating the accurate rank or have difficulty in depicting the hierarchical structure underlying such data ensembles. To address these issues, we propose a parametric tensor sparsity measure model that encodes the sparsity of a general tensor by Laplacian scale mixture (LSM) modeling based on a three-layer transform (TLT) for the factor subspace prior with Tucker decomposition. Specifically, the sparsity of a tensor is first transformed into the factor subspace, and then factor sparsity in the gradient domain is used to express the local similarity within each mode.
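As a small worked check of the condition underlying the graph-matching abstract above: two graphs with adjacency matrices A and B are isomorphic exactly when A = P B Pᵀ for some permutation matrix P. The toy path graphs and the permutation below are illustrative choices, not taken from the article.

```python
# Verify the graph-isomorphism condition A = P B P^T on a toy example.
# A and B are the same 3-vertex path graph under different labelings.

def matmul(X, Y):
    """Multiply two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[0, 1, 0],   # path graph with center vertex 1
     [1, 0, 1],
     [0, 1, 0]]
B = [[0, 0, 1],   # same path graph with center vertex 2
     [0, 0, 1],
     [1, 1, 0]]
P = [[1, 0, 0],   # permutation mapping vertices of B onto vertices of A
     [0, 0, 1],
     [0, 1, 0]]

assert matmul(matmul(P, B), transpose(P)) == A  # A = P B P^T holds
```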
To further refine the sparsity, we adopt LSM with a transform learning scheme to self-adaptively depict deeper-layer structured sparsity, in which the transformed sparse matrices can be modeled, in the sense of a statistical model, as the product of a Laplacian vector and a hidden positive scalar multiplier. We call this method parametric tensor sparsity delivered by LSM-TLT. Using a progressive transformation operator, we formulate the LSM-TLT model and apply it to the TC problem, and an alternating direction method of multipliers (ADMM)-based optimization algorithm is designed to solve the resulting problem. Experimental results on RGB images, hyperspectral images (HSIs), and videos demonstrate that the proposed method outperforms the state of the art.

A key issue in social network group decision making (SNGDM) is to determine the weights (i.e., social influences) of individuals. Notably, in some SNGDM scenarios, the social influences of individuals may evolve over time. Meanwhile, consensus reaching is another important issue in SNGDM. In this article, we are dedicated to disclosing the natural evolution process of social influence and, further, to discussing the consensus reaching issue in SNGDM. First, we establish a social influence evolution model, in which an individual's social influence is obtained by combining his/her intrinsic influence and network influence. Afterward, we design a consensus reaching process based on social influence evolution (CRP-SIE) to help the individuals reach a consensus. Furthermore, we use a hypothetical application to show the applicability of the proposed CRP-SIE.
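As background for the tensor-completion abstract above: Tucker-style factor-subspace modeling builds on mode-n unfolding, which flattens a tensor into a matrix whose rows index one mode. A minimal pure-Python sketch follows; the tiny tensor and the cyclic column ordering are illustrative conventions, not the paper's exact ones.

```python
# Mode-n unfolding of a 3-way tensor stored as nested lists: rows index
# mode n, columns run over the remaining two modes (second one fastest).

def unfold(tensor, mode):
    """Return the mode-n unfolding of a 3-way tensor as a list of rows."""
    I = [len(tensor), len(tensor[0]), len(tensor[0][0])]
    rows = []
    for i in range(I[mode]):
        row = []
        for j in range(I[(mode + 1) % 3]):
            for k in range(I[(mode + 2) % 3]):
                idx = [0, 0, 0]
                idx[mode] = i
                idx[(mode + 1) % 3] = j
                idx[(mode + 2) % 3] = k
                row.append(tensor[idx[0]][idx[1]][idx[2]])
        rows.append(row)
    return rows

X = [[[1, 2], [3, 4]],
     [[5, 6], [7, 8]]]     # a 2 x 2 x 2 toy tensor
print(unfold(X, 0))        # [[1, 2, 3, 4], [5, 6, 7, 8]]
```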
Finally, simulation analysis is adopted to investigate the effects of social influence evolution on consensus reaching in SNGDM, and comparative analysis is conducted to demonstrate the advantages of our proposal.

This work presents a neuroadaptive tracking control scheme embedded with a memory-based trajectory predictor for Euler-Lagrange (EL) systems to closely track an unknown target. The key synthesis steps are 1) using a memory-based method to reconstruct the behavior of the unknown target from its past trajectory information recorded/stored in the memory; 2) blending both speed transformation and a barrier Lyapunov function (BLF) into the design and analysis; and 3) introducing a virtual parameter to reduce the number of online update parameters, rendering the strategy structurally simple and computationally inexpensive. It is shown that the resultant control scheme ensures prescribed tracking performance: close target tracking is achieved without the need for detailed information about the system dynamics or the target trajectory; the tracking error converges to the prescribed precision set within a prespecified finite time at an assignable rate of convergence; and the full-state constraints are never violated. Furthermore, all signals in the closed-loop system are bounded, and the control action is C¹ smooth. The benefits and feasibility of the developed control are verified and confirmed by simulation.

Deep reinforcement learning (DRL) is a promising way to achieve human-like autonomous driving. However, the low sample efficiency of DRL and the difficulty of designing its reward functions hinder its application in practice. In light of this, this article proposes a novel framework to incorporate human prior knowledge into DRL, in order to improve sample efficiency and save the effort of designing sophisticated reward functions. Our framework consists of three ingredients, namely, expert demonstration, policy derivation, and RL.
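The influence-evolution and consensus ideas in the SNGDM abstract above can be sketched with a DeGroot-style toy model: overall influence blends an intrinsic part with a network part, and opinions are averaged over trusted neighbors until they agree. The blending weight, trust matrix, and update rule below are hypothetical, not the article's exact formulation.

```python
# DeGroot-style toy consensus model. All numbers, and the blending rule
# for influence, are illustrative placeholders.

def evolve_opinions(opinions, trust, rounds):
    """Repeatedly replace each opinion by a trust-weighted average."""
    for _ in range(rounds):
        opinions = [sum(w * o for w, o in zip(trust[i], opinions))
                    for i in range(len(opinions))]
    return opinions

# Row-stochastic trust matrix: row i weights whom individual i trusts.
trust = [[0.6, 0.2, 0.2],
         [0.3, 0.4, 0.3],
         [0.1, 0.3, 0.6]]

# Overall influence = blend of intrinsic influence and network (in-trust).
alpha = 0.5                                      # illustrative weight
intrinsic = [0.9, 0.4, 0.7]
network = [sum(row[j] for row in trust) / 3 for j in range(3)]
influence = [alpha * a + (1 - alpha) * b for a, b in zip(intrinsic, network)]

final = evolve_opinions([0.0, 0.5, 1.0], trust, rounds=50)
assert max(final) - min(final) < 1e-6            # consensus reached
```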
In the expert demonstration step, a human expert demonstrates their execution of the task, and their behaviors are stored as state-action pairs. In the policy derivation step, an imitative expert policy is derived using behavioral cloning and uncertainty estimation from the demonstration data. In the RL step, the imitative expert policy is utilized to guide the learning of the DRL agent by regularizing the KL divergence between the DRL agent's policy and the imitative expert policy. The code and supplementary videos are provided at https://mczhi.github.io/Expert-Prior-RL/.

Guided by the free-energy principle, generative adversarial network (GAN)-based no-reference image quality assessment (NR-IQA) methods have improved image quality prediction accuracy. However, the GAN cannot handle the restoration task well for free-energy-principle-guided NR-IQA methods, especially for severely destroyed images, so the quality reconstruction relationship between a distorted image and its restored image cannot be accurately built. To address this problem, a visual compensation restoration network (VCRNet)-based NR-IQA method is proposed, which uses a non-adversarial model to efficiently handle the distorted image restoration task. The proposed VCRNet consists of a visual restoration network and a quality estimation network. To accurately build the quality reconstruction relationship between a distorted image and its restored image, a visual compensation module, an optimized asymmetric residual block, and an error-map-based mixed loss function are proposed to increase the restoration capability of the visual restoration network. To further address the NR-IQA problem for severely destroyed images, multi-level restoration features obtained from the visual restoration network are used for image quality estimation.
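The KL-divergence regularizer in the expert-guided DRL abstract above can be illustrated on discrete action distributions; the probabilities below are made up for illustration, and in the actual method this term would be combined with the RL loss.

```python
# KL divergence between the agent's and the imitative expert's discrete
# action distributions, usable as a policy-guidance regularizer.

import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

agent_policy  = [0.7, 0.2, 0.1]   # agent's action probabilities (assumed)
expert_policy = [0.6, 0.3, 0.1]   # imitative expert's probabilities (assumed)

reg = kl_divergence(agent_policy, expert_policy)
assert reg > 0.0                                   # policies differ
assert kl_divergence(agent_policy, agent_policy) == 0.0  # zero iff equal
```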
To prove the effectiveness of the proposed VCRNet, seven representative IQA databases are used, and experimental results show that the proposed VCRNet achieves state-of-the-art image quality prediction accuracy. The implementation of the proposed VCRNet has been released at https://github.com/NUIST-Videocoding/VCRNet.

In this paper, we propose a relative pose estimation algorithm for micro-lens array (MLA)-based conventional light field (LF) cameras. First, by employing matched LF-point pairs, we establish an LF-point-to-LF-point correspondence model to represent the correlation between LF features of the same 3D scene point in a pair of LFs. Then, we employ the proposed correspondence model to estimate the relative camera pose, which involves a linear solution and a non-linear optimization on the manifold. Unlike prior related algorithms, which estimate relative poses from the recovered depths of scene points, we adopt the estimated disparities to avoid the inaccuracy of depth recovery caused by the ultra-small baseline between sub-aperture images of LF cameras. Experimental results on both simulated and real scene data demonstrate the effectiveness of the proposed algorithm compared with classical as well as state-of-the-art relative pose estimation algorithms.

Unsupervised image-to-image translation aims to learn the mapping from an input image in a source domain to an output image in a target domain without a paired training dataset. Recently, remarkable progress has been made in translation due to the development of generative adversarial networks (GANs). However, existing methods suffer from training instability, as the gradients passing from the discriminator to the generator become less informative when the source and target domains exhibit sufficiently large discrepancies in appearance or shape.
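A back-of-envelope illustration of the depth-versus-disparity point in the light-field abstract above: depth follows z = f·B/d, so with an ultra-small baseline B the disparities d are tiny and small disparity errors translate into large depth errors. The focal length, baseline, and disparity values below are assumed for illustration, not taken from the paper.

```python
# Depth from disparity under a tiny sub-aperture baseline: a 0.01 px
# disparity error visibly shifts the recovered depth. Numbers assumed.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo relation z = f * B / d."""
    return f_px * baseline_m / disparity_px

f_px = 500.0                 # focal length in pixels (assumed)
baseline = 1e-4              # 0.1 mm sub-aperture baseline (assumed)
d_true, d_noisy = 0.05, 0.06 # true vs. measured disparity in pixels

z_true = depth_from_disparity(f_px, baseline, d_true)    # 1.0 m
z_noisy = depth_from_disparity(f_px, baseline, d_noisy)  # ~0.83 m
# The depth error blows up further as the disparity approaches zero,
# which is why working directly with disparities is attractive here.
```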
To handle this challenging problem, in this paper we propose a novel multi-constraint adversarial model (MCGAN) for image translation, in which multiple adversarial constraints are applied to the generator's multi-scale outputs by a single discriminator to pass gradients to all scales simultaneously and assist the generator in capturing large discrepancies in appearance between the two domains. We further notice that regularizing the generator helps stabilize adversarial training, but the results may have unreasonable structure or blurriness due to reduced context information flow from the discriminator to the generator. Therefore, we adopt dense combinations of dilated convolutions in the discriminator to support more information flow to the generator. With extensive experiments on three public datasets, cat-to-dog, horse-to-zebra, and apple-to-orange, our method significantly improves on the state of the art on all datasets.

Classic image-restoration algorithms use a variety of priors, either implicitly or explicitly. Their priors are hand-designed, and their corresponding weights are heuristically assigned. Hence, deep learning methods often produce superior image restoration quality. Deep networks are, however, capable of inducing strong and hardly predictable hallucinations. Networks implicitly learn to be jointly faithful to the observed data while learning an image prior, and the separation of original data from hallucinated data downstream is then not possible. This limits their widespread adoption in image restoration. Furthermore, it is often the hallucinated part that falls victim to degradation-model overfitting. We present an approach with decoupled network-prior-based hallucination and data-fidelity terms. We refer to our framework as the Bayesian Integration of a Generative Prior (BIGPrior). Our method is rooted in a Bayesian framework and tightly connected to classic restoration methods.
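The dilated-convolution design choice in the MCGAN abstract above rests on a standard fact: a stride-1 convolution layer with kernel size k and dilation d adds (k − 1)·d to the receptive field, so stacking increasing dilations grows context quickly without extra parameters. The layer sizes and dilation rates below are illustrative, not MCGAN's exact configuration.

```python
# Receptive-field growth of stacked stride-1 dilated convolutions.

def receptive_field(kernel_sizes, dilations):
    """1D receptive field of a stack of stride-1 dilated convolutions."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Three 3-tap layers: no dilation vs. dilation rates 1, 2, 4.
plain   = receptive_field([3, 3, 3], [1, 1, 1])
dilated = receptive_field([3, 3, 3], [1, 2, 4])
print(plain, dilated)  # 7 15
```

Doubling the dilation at each layer makes the receptive field grow exponentially in depth rather than linearly, which is the extra "information flow" argument made for the discriminator.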
In fact, it can be viewed as a generalization of a large family of classic restoration algorithms.
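The decoupling idea in the BIGPrior abstract can be sketched as an explicit, pixelwise convex combination of a generative-prior prediction and a data-fidelity term, which keeps the hallucinated contribution identifiable downstream. The fusion weights and inputs below are illustrative placeholders, not the paper's learned quantities.

```python
# Pixelwise fusion of a generative-prior prediction with a data-fidelity
# term: phi weights the prior, (1 - phi) the data. Values assumed.

def bigprior_fuse(prior_pred, data_term, phi):
    """Convex combination per pixel: phi * prior + (1 - phi) * data."""
    return [p * x + (1 - p) * y
            for p, x, y in zip(phi, prior_pred, data_term)]

prior_pred = [0.8, 0.6, 0.4]   # generative network output (assumed)
data_term  = [0.7, 0.5, 0.9]   # data-fidelity estimate (assumed)
phi        = [1.0, 0.5, 0.0]   # per-pixel prior weight in [0, 1]

fused = bigprior_fuse(prior_pred, data_term, phi)
assert fused[0] == prior_pred[0]  # phi = 1: pure prior
assert fused[2] == data_term[2]   # phi = 0: pure data
```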