Kellynordentoft1000


In this paper we present an approach to jointly recover camera pose, 3D shape, and object and deformation type grouping from incomplete 2D annotations in a multi-instance collection of RGB images. Our approach handles rigid and non-rigid categories alike. This advances existing work, which either addresses the problem for a single object or assumes the groups to be known a priori when multiple instances are handled. To address this broader version of the problem, we encode object deformation by means of multiple unions of subspaces, which can span from small rigid motion to complex deformations. The model parameters are learned via Augmented Lagrange Multipliers, in a completely unsupervised manner that does not require any training data. Extensive experimental evaluation is provided in a wide variety of synthetic and real scenarios, including rigid and non-rigid categories with small and large deformations. We obtain state-of-the-art results in terms of 3D reconstruction accuracy, while also providing grouping results that split the input images into object instances and their associated type of deformation.

Achieving human-like visual abilities is a holy grail for machine vision, yet precisely how insights from human vision can improve machines has remained unclear. Here, we demonstrate two key conceptual advances. First, we show that most machine vision models are systematically different from human object perception. To do so, we collected a large dataset of perceptual distances between isolated objects in humans and asked whether these perceptual data can be predicted by many common machine vision algorithms. We found that while the best algorithms explain ~70% of the variance in the perceptual data, all the algorithms we tested make systematic errors on several types of objects. In particular, machine algorithms underestimated distances between symmetric objects compared to human perception. Second, we show that fixing these systematic biases can lead to substantial gains in classification performance. In particular, augmenting a state-of-the-art convolutional neural network with planar/reflection symmetry scores along multiple axes produced significant improvements in classification accuracy (1-10%) across categories. These results show that machine vision can be improved by discovering and fixing systematic differences from human vision.
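To make the symmetry augmentation from the perceptual-distance study above concrete, the sketch below computes a simple reflection-symmetry score per image axis and appends the scores to a CNN feature vector. The scoring function, the choice of two axes, and the concatenation step are illustrative assumptions rather than the study's exact pipeline.

```python
import numpy as np

def reflection_symmetry_score(img, axis):
    """Score in [0, 1]: 1 means the image equals its mirror image about the
    given axis (axis=0 mirrors top/bottom, axis=1 mirrors left/right).
    Assumes 8-bit intensities."""
    mirrored = np.flip(img, axis=axis)
    diff = np.abs(img.astype(np.float32) - mirrored.astype(np.float32)).mean()
    return 1.0 - diff / 255.0

def augment_features(cnn_features, img):
    """Append per-axis symmetry scores to a CNN feature vector (illustrative only)."""
    scores = [reflection_symmetry_score(img, axis=a) for a in (0, 1)]
    return np.concatenate([cnn_features, np.array(scores, dtype=np.float32)])
```

A perfectly mirror-symmetric object scores 1.0 along that axis, giving the classifier an explicit signal that distance-based features tend to underestimate.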
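For the joint reconstruction-and-grouping approach described at the top of this page, the union-of-subspaces model is also what yields the grouping: each recovered shape can be assigned to the subspace that reconstructs it with the smallest residual. The sketch below illustrates only that assignment step, with hypothetical orthonormal bases; the actual method learns the subspaces, camera poses, and assignments jointly via Augmented Lagrange Multipliers.

```python
import numpy as np

def group_by_subspace(shapes, bases):
    """Assign each flattened shape (rows of `shapes`, shape (N, D)) to the
    low-dimensional subspace (orthonormal basis U_k of shape (D, d_k)) with the
    smallest projection residual; the assignment is the grouping."""
    residuals = []
    for U in bases:
        proj = shapes @ U @ U.T                       # orthogonal projection onto span(U_k)
        residuals.append(np.linalg.norm(shapes - proj, axis=1))
    return np.argmin(np.stack(residuals, axis=1), axis=1)

# Toy usage: two random 3-dimensional subspaces of R^30, ten noisy shapes from each.
rng = np.random.default_rng(0)
bases = [np.linalg.qr(rng.standard_normal((30, 3)))[0] for _ in range(2)]
shapes = np.vstack([rng.standard_normal((10, 3)) @ U.T for U in bases]) \
         + 0.01 * rng.standard_normal((20, 30))
print(group_by_subspace(shapes, bases))               # typically: first 10 -> 0, last 10 -> 1
```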
Rendering bridges the gap between 2D vision and 3D scenes by simulating the physical process of image formation. By inverting such a renderer, one can think of a learning approach that infers 3D information from 2D images. However, standard graphics renderers involve a fundamental step called rasterization, which prevents rendering from being differentiable. Unlike state-of-the-art differentiable renderers, which only approximate the rendering gradient during backpropagation, we propose a naturally differentiable rendering framework that is able to (1) directly render colorized meshes using differentiable functions and (2) back-propagate efficient supervision to mesh vertices and their attributes from various forms of image representations. The key to our framework is a novel formulation that views rendering as an aggregation function that fuses the probabilistic contributions of all mesh triangles with respect to the rendered pixels. This formulation enables our framework to flow gradients to occluded and distant vertices, which cannot be achieved by previous state-of-the-art renderers. We show that by using the proposed renderer, one can achieve significant improvements in unsupervised single-view 3D reconstruction, both qualitatively and quantitatively. Experiments also demonstrate that our approach can handle challenging image-based shape-fitting tasks, which remain nontrivial for existing differentiable renderers.

Data clustering, which aims to partition the given data into different groups, has attracted much attention, and recently various effective algorithms have been developed to tackle the task. Among these methods, non-negative matrix factorization (NMF) has been demonstrated to be a powerful tool. However, there are still some problems. First, the standard NMF is sensitive to noise and outliers; although L2,1-norm based NMF improves robustness, it is still easily affected by large noise. Second, for most graph-regularized NMF methods, the performance highly depends on the initial similarity graph. Third, many graph-based NMF models perform graph construction and matrix factorization in two separate steps, so the learned graph structure may not be optimal. To overcome these drawbacks, we propose a robust bi-stochastic graph regularized matrix factorization (RBSMF) framework for data clustering. Specifically, we present a general loss function that is more robust than the commonly used L2 and L1 functions. Besides, instead of keeping the graph fixed, we learn an adaptive similarity graph. Furthermore, graph updating and matrix factorization are processed simultaneously, which makes the learned graph more appropriate for clustering. Extensive experiments have shown that the proposed RBSMF outperforms other state-of-the-art methods.

Multi-task learning attempts to exploit the information shared by multiple related tasks to obtain better solutions. However, the performance of existing multi-task approaches degrades considerably when dealing with polluted data, i.e., outliers. In this paper, we propose a novel robust multi-task model that incorporates a flexible manifold constraint (FMC-MTL) and a robust loss. Specifically, the multi-task subspace is embedded in a relaxed and generalized Stiefel manifold to account for point-wise correlation and preserve the data structure simultaneously. In addition, a robust loss function is developed to ensure robustness to outliers by smoothly interpolating between the l2,1-norm and the squared Frobenius norm. Equipped with an efficient algorithm, FMC-MTL serves as a robust solution for severely polluted data. Moreover, extensive experiments are conducted to verify the superiority of our model. Compared to state-of-the-art multi-task models, the proposed FMC-MTL model demonstrates remarkable robustness to contaminated data.
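The robust loss in FMC-MTL is described as smoothly interpolating between the l2,1-norm and the squared Frobenius norm. A standard way to get that behavior is a Huber-style penalty on the l2 norm of each residual row: quadratic (Frobenius-like) for small rows, linear (l2,1-like) for large ones. The sketch below shows that idea; the paper's exact functional form and parameterization may differ.

```python
import numpy as np

def rowwise_huber(E, delta=1.0):
    """Huber-style penalty on the l2 norm of each row of the residual matrix E.
    Behaves like 0.5 * ||E||_F^2 when all row norms are << delta, and like
    delta * ||E||_{2,1} (up to a constant) when row norms are >> delta."""
    r = np.linalg.norm(E, axis=1)                    # per-row l2 norms
    quad = 0.5 * r ** 2                              # squared-Frobenius regime
    lin = delta * (r - 0.5 * delta)                  # l2,1 regime
    return np.where(r <= delta, quad, lin).sum()
```

Rows coming from outlying samples then contribute only linearly to the objective, which is where the robustness comes from.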
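For the differentiable renderer described above, the central idea is to replace hard rasterization with an aggregation over probabilistic per-triangle contributions, so that every triangle receives a gradient. The sketch below is a heavily simplified per-pixel version of such an aggregation; the coverage probabilities, depth weighting, and background term here are assumptions for illustration, not the framework's exact formulation.

```python
import numpy as np

def soft_aggregate_pixel(cover_prob, color, depth_score, bg_color, gamma=0.1):
    """Softly blend the colors of all triangles at one pixel.
    cover_prob : (T,)   probability that each triangle covers the pixel
                 (e.g. a sigmoid of the signed distance to the triangle edge)
    color      : (T, 3) per-triangle color at the pixel
    depth_score: (T,)   normalized inverse depth in [0, 1]; larger = closer
    bg_color   : (3,)   background color
    Every weight is a smooth function of the inputs, so gradients reach all
    triangles, not just the front-most visible one."""
    w = cover_prob * np.exp(depth_score / gamma)     # coverage x soft depth priority
    w_bg = np.exp(0.0 / gamma)                       # baseline background weight (an assumption)
    weights = np.append(w, w_bg)
    weights = weights / weights.sum()                # normalized aggregation weights
    return weights @ np.vstack([color, bg_color])    # (3,) blended pixel color
```

When no triangle covers the pixel the background color dominates, and as a triangle moves closer or covers the pixel more confidently its weight grows smoothly.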
Intelligent agents need to understand the surrounding environment to provide meaningful services to, and interact intelligently with, humans. The agents should perceive geometric features as well as semantic entities inherent in the environment. Contemporary methods in general provide only one type of information about the environment at a time, making it difficult to conduct high-level tasks. Moreover, running two types of methods and associating the two resulting streams of information requires a lot of computation and complicates the software architecture. To overcome these limitations, we propose a neural architecture that performs both geometric and semantic tasks in a single thread: simultaneous visual odometry, object detection, and instance segmentation (SimVODIS). SimVODIS is built on top of Mask-RCNN, which is trained in a supervised manner. Training the pose and depth branches of SimVODIS requires only unlabeled video sequences, since the photometric consistency between input image frames generates self-supervision signals. SimVODIS outperforms or matches the state of the art in pose estimation, depth map prediction, object detection, and instance segmentation, while completing all the tasks in a single thread. We expect SimVODIS to enhance the autonomy of intelligent agents and let them provide effective services to humans.

In this paper, we propose to leverage freely available unlabeled video data to facilitate few-shot video classification. In this semi-supervised few-shot video classification task, millions of unlabeled videos are available for each episode during training. These videos can be extremely imbalanced, while exhibiting rich visual and motion dynamics. To tackle the semi-supervised few-shot video classification problem, we make the following contributions. First, we propose a label-independent memory (LIM) to cache label-related features, which enables a similarity search over a large set of videos. LIM produces a class prototype for few-shot training. This prototype is an aggregated embedding for each class, which is more robust to noisy video features. Second, we integrate a multi-modality compound memory network to capture both RGB and flow information. We propose to store the RGB and flow representations in two separate memory networks, which are jointly optimized via a unified loss. In this way, mutual communication between the two modalities is leveraged to achieve better classification performance. Third, we conduct extensive experiments on the few-shot Kinetics-100 and Something-Something-100 datasets, which validate the effectiveness of leveraging the accessible unlabeled data for few-shot classification.

Exploiting multi-scale representations is critical for improving edge detection for objects at different scales. To extract edges at dramatically different scales, we propose a Bi-Directional Cascade Network (BDCN) architecture, in which an individual layer is supervised by labeled edges at its specific scale, rather than directly applying the same supervision to different layers. Furthermore, to enrich the multi-scale representations learned by each layer of BDCN, we introduce a Scale Enhancement Module (SEM), which utilizes dilated convolution to generate multi-scale features instead of using deeper CNNs. These new approaches encourage the learning of multi-scale representations in different layers and detect edges that are well delineated by their scales. Learning scale-dedicated layers also results in a compact network with a fraction of the parameters. We evaluate our method on three datasets, i.e., BSDS500, NYUDv2, and Multicue, and achieve an ODS F-measure of 0.832 on BSDS500, 2.7% higher than the current state of the art. We also applied our edge detection results to other vision tasks; experimental results show that our method further boosts the performance of image segmentation, optical flow estimation, and object proposal generation.
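The Scale Enhancement Module in BDCN is described as using dilated convolutions to enrich multi-scale features without adding depth. A minimal PyTorch sketch of that idea follows; the number of branches, the dilation rates, and the summation used to fuse them are illustrative choices, not necessarily those of the published module.

```python
import torch
import torch.nn as nn

class ScaleEnhancement(nn.Module):
    """Parallel dilated 3x3 convolutions over the same input, summed together,
    so a single layer sees several receptive-field sizes at once."""
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(sum(branch(x) for branch in self.branches))

# Usage: enrich a backbone feature map before edge prediction.
feats = torch.randn(1, 64, 80, 80)
sem = ScaleEnhancement(64)
print(sem(feats).shape)   # torch.Size([1, 64, 80, 80])
```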
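The label-independent memory (LIM) described above performs a similarity search over a large cache of unlabeled video features and aggregates the results into a class prototype. The sketch below shows one way such a retrieval-and-aggregation step could look; the cosine similarity, the value of k, and the simple averaging with the support embedding are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def class_prototype(support_feat, memory, k=32):
    """Retrieve the k memory entries most similar to a labeled support embedding
    and aggregate them with it into a class prototype.
    support_feat: (D,) embedding of a labeled support video
    memory:       (N, D) cached embeddings of unlabeled videos"""
    sims = F.cosine_similarity(memory, support_feat.unsqueeze(0), dim=1)   # (N,)
    topk = sims.topk(k).indices
    retrieved = memory[topk].mean(dim=0)             # aggregated neighbor embedding
    return F.normalize((support_feat + retrieved) / 2, dim=0)

# Usage with random placeholders.
memory = torch.randn(10000, 256)
support = torch.randn(256)
print(class_prototype(support, memory).shape)        # torch.Size([256])
```

Averaging over many retrieved neighbors is what makes the prototype less sensitive to any single noisy video feature.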
Contextual information is vital in visual understanding problems such as semantic segmentation and object detection. We propose a Criss-Cross Network (CCNet) for obtaining full-image contextual information in a very effective and efficient way. Concretely, for each pixel, a novel criss-cross attention module harvests the contextual information of all the pixels on its criss-cross path. By taking a further recurrent operation, each pixel can finally capture full-image dependencies. Besides, a category-consistent loss is proposed to enforce the criss-cross attention module to produce more discriminative features. Overall, CCNet has the following merits: 1) GPU-memory friendliness: compared with the non-local block, the proposed recurrent criss-cross attention module requires 11x less GPU memory. 2) High computational efficiency: the recurrent criss-cross attention reduces FLOPs by about 85% relative to the non-local block. 3) State-of-the-art performance: we conduct extensive experiments on semantic segmentation benchmarks including Cityscapes and ADE20K, the human parsing benchmark LIP, the instance segmentation benchmark COCO, and the video segmentation benchmark CamVid.
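To illustrate the criss-cross attention idea, the following is a simplified PyTorch sketch in which every pixel attends only to the pixels in its own row and its own column. It omits details of the published implementation (for example, the handling that avoids counting the center position twice), so it should be read as a sketch of the idea rather than the reference code; applying the module twice (the recurrent operation) is what propagates context across the full image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrissCrossAttention(nn.Module):
    """Simplified criss-cross attention: each pixel attends to its row and column
    (H + W positions; the center pixel is counted twice here for brevity)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)

        # Column (vertical) attention: each column is a sequence of length h.
        q_col = q.permute(0, 3, 2, 1).reshape(b * w, h, -1)        # (b*w, h, c')
        k_col = k.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        v_col = v.permute(0, 3, 2, 1).reshape(b * w, h, -1)        # (b*w, h, c)
        e_col = torch.bmm(q_col, k_col.transpose(1, 2))            # (b*w, h, h)

        # Row (horizontal) attention: each row is a sequence of length w.
        q_row = q.permute(0, 2, 3, 1).reshape(b * h, w, -1)        # (b*h, w, c')
        k_row = k.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        v_row = v.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        e_row = torch.bmm(q_row, k_row.transpose(1, 2))            # (b*h, w, w)

        # Joint softmax over the criss-cross path (h column + w row positions).
        e_col = e_col.reshape(b, w, h, h).permute(0, 2, 1, 3)      # (b, h, w, h)
        e_row = e_row.reshape(b, h, w, w)                          # (b, h, w, w)
        attn = F.softmax(torch.cat([e_col, e_row], dim=3), dim=3)
        a_col, a_row = attn[..., :h], attn[..., h:]

        out_col = torch.bmm(a_col.permute(0, 2, 1, 3).reshape(b * w, h, h), v_col)
        out_row = torch.bmm(a_row.reshape(b * h, w, w), v_row)
        out = (out_col.reshape(b, w, h, c).permute(0, 3, 2, 1) +
               out_row.reshape(b, h, w, c).permute(0, 3, 1, 2))
        return self.gamma * out + x                                # residual connection

# Usage on a toy feature map.
x = torch.randn(2, 64, 24, 32)
print(CrissCrossAttention(64)(x).shape)   # torch.Size([2, 64, 24, 32])
```

Because each pixel builds an affinity over only H + W - 1 positions instead of all H x W positions, the attention map is far smaller than a non-local block's, which is the source of the memory and FLOP savings cited above.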

Article authors: Kellynordentoft1000 (Wiese Kristoffersen)