Reservoir computing is a popular approach to designing recurrent neural networks, owing to its training simplicity and approximation performance. The recurrent part of these networks is not trained (e.g., via gradient descent), which makes them appealing for analytical study by a large community of researchers with backgrounds spanning dynamical systems to neuroscience. However, even in the simple linear case, the working principle of these networks is not fully understood, and their design is usually driven by heuristics. A novel analysis of the dynamics of such networks is proposed, which allows the investigator to express the state evolution through the controllability matrix. This matrix encodes salient characteristics of the network dynamics; in particular, its rank is an input-independent measure of the memory capacity of the network. The proposed approach makes it possible to compare different reservoir architectures and explains why a cyclic topology achieves the favorable results observed by practitioners. (A toy numerical sketch of the controllability matrix follows these summaries.)

In this paper, an adaptive admittance control scheme is developed for robots interacting with time-varying environments. Admittance control is adopted to achieve compliant physical robot-environment interaction, and the uncertain environment with time-varying dynamics is modeled as a linear system. A critic-learning method obtains the desired admittance parameters from a cost function composed of interaction-force and trajectory-tracking terms, without knowledge of the environmental dynamics. To deal with dynamic uncertainties in the control system, a neural-network (NN)-based adaptive controller with a dynamic learning framework is developed to guarantee trajectory-tracking performance. Experiments verify the effectiveness of the proposed method. (A minimal admittance-filter sketch is also given below.)

Visualizing objects as they are perceived in the real world is often critical in our daily experience. We previously studied objects' surface glossiness visualized with a 3D display and found that a multi-view 3D display reproduces perceived glossiness more accurately than a 2D display. This improvement can be explained by the fact that a glossy surface shown on a multi-view 3D display appropriately provides luminance differences between the two eyes and luminance changes accompanying the viewer's lateral head motion. In the present study, to determine the requirements a multi-view 3D display must meet to accurately reproduce perceived glossiness, we developed a simulator of a multi-view 3D display that independently and simultaneously manipulates the viewpoint interval and the magnitude of optical inter-view crosstalk (a linear crosstalk sketch is given below). Using the simulator, we conducted a psychophysical experiment and found that glossiness reproduction is most accurate when the viewpoint interval is small and a small, but not negligible, amount of crosstalk is present. We proposed a simple yet perceptually valid model that quantitatively predicts the reproduction accuracy of perceived glossiness.
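To make the controllability-matrix analysis of the reservoir-computing summary concrete, here is a minimal sketch that builds C = [b, Wb, W^2 b, ..., W^(N-1) b] for a linear reservoir x_{t+1} = W x_t + b u_t and compares the numerical rank of a random reservoir against a cyclic (ring) one. The matrix sizes, spectral radius, and rank tolerance are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def controllability_matrix(W, b):
    """Stack [b, Wb, W^2 b, ..., W^(N-1) b] column-wise for x_{t+1} = W x_t + b u_t."""
    n = W.shape[0]
    cols, v = [], b.astype(float)
    for _ in range(n):
        cols.append(v)
        v = W @ v
    return np.column_stack(cols)

rng = np.random.default_rng(0)
n = 50
b = rng.standard_normal(n)

# Random reservoir, rescaled to spectral radius 0.9 (illustrative choice).
W_rand = rng.standard_normal((n, n))
W_rand *= 0.9 / np.abs(np.linalg.eigvals(W_rand)).max()

# Cyclic (ring) reservoir: a single permutation cycle scaled by 0.9.
W_cyc = 0.9 * np.roll(np.eye(n), 1, axis=0)

for name, W in [("random", W_rand), ("cyclic", W_cyc)]:
    rank = np.linalg.matrix_rank(controllability_matrix(W, b))
    print(f"{name}: rank(C) = {rank} / {n}")
```

The Krylov columns of a generic random reservoir tend to align quickly and lose numerical rank, whereas the ring's shift structure keeps them spread out; this is one intuition for the favorable memory of cyclic topologies.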
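For the admittance-control summary, the sketch below integrates the standard admittance law M*xdd + D*xd + K*x = f_ext with explicit Euler steps. The gains and the sinusoidal force profile are placeholder values, and the paper's critic-learning update that adapts the admittance parameters online is not reproduced here.

```python
import numpy as np

def admittance_step(x, xd, f_ext, M, D, K, dt):
    """One explicit-Euler step of M*xdd + D*xd + K*x = f_ext.

    x, xd : current deviation from the desired trajectory and its velocity
    f_ext : measured interaction force
    Returns the updated (x, xd)."""
    xdd = (f_ext - D * xd - K * x) / M
    return x + dt * xd, xd + dt * xdd

# Placeholder admittance gains; the paper adapts these with a critic network.
M, D, K, dt = 1.0, 8.0, 25.0, 0.001
x, xd = 0.0, 0.0
log = []
for k in range(5000):
    f_ext = 2.0 * np.sin(2 * np.pi * 0.5 * k * dt)  # toy interaction-force profile
    x, xd = admittance_step(x, xd, f_ext, M, D, K, dt)
    log.append(x)
print(f"final deviation from desired trajectory: {log[-1]:+.4f}")
```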
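For the glossiness study, the simulator's core operation, leaking luminance between neighboring viewpoints, can be modeled as a weighted sum. This is an assumed linear crosstalk model; the leakage fraction and the three-view neighborhood are illustrative choices, and the study's actual simulator may differ.

```python
import numpy as np

def apply_crosstalk(views, alpha):
    """Linear inter-view crosstalk: each output view keeps (1 - alpha) of its
    own luminance and receives alpha/2 from each adjacent viewpoint.

    views : array of shape (n_views, H, W), linear-luminance images
    alpha : leakage fraction in [0, 1)
    Note: np.roll wraps the end views around for brevity; a real display
    would handle the boundary viewpoints separately."""
    left = np.roll(views, 1, axis=0)
    right = np.roll(views, -1, axis=0)
    return (1.0 - alpha) * views + 0.5 * alpha * (left + right)

# Toy example: 8 viewpoints of a small patch with view-dependent luminance.
rng = np.random.default_rng(1)
views = rng.random((8, 4, 4))
mixed = apply_crosstalk(views, alpha=0.1)
print(mixed.shape, float(mixed.mean()))
```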
Face illumination perception and processing is a particularly difficult problem, owing to asymmetric shading, local highlights, and local shadows. This study focuses on face illumination transfer: transferring the illumination style of a reference face image to a target face image while preserving the target's other attributes. Such an instance-level transfer task is more challenging than the domain-level one, which considers only predefined lighting categories. To tackle this problem, we develop an instance-level conditional Generative Adversarial Network (GAN). Specifically, a face identifier is integrated into GAN learning, which enables individual-specific low-level visual generation. Moreover, an illumination-inspired attention mechanism allows the GAN to handle local lighting effects well (a generic attention-blending sketch is given below). Our method requires neither lighting categorization, 3D information, nor strict face alignment, all of which are often required by traditional methods. Experiments demonstrate that our method achieves significantly better results than previous methods.

Matrix and tensor completion aim to recover incomplete two- and higher-dimensional observations by exploiting the low-rank property. Conventional techniques usually minimize a convex surrogate of rank (such as the nuclear norm), which, however, leads to suboptimal solutions for low-rank recovery. In this paper, we propose a new definition of the matrix/tensor logarithmic norm to induce a sparsity-driven surrogate for rank. More importantly, factor matrix/tensor norm surrogate theorems are derived, which equivalently factor the norm of a large-scale matrix/tensor into the norms of small-scale matrices/tensors. Based on these surrogate theorems, we propose two new algorithms, Logarithmic norm Regularized Matrix Factorization (LRMF) and Logarithmic norm Regularized Tensor Factorization (LRTF). The two algorithms combine logarithmic-norm regularization with matrix/tensor factorization and hence achieve more accurate low-rank approximation at high computational efficiency. The resulting optimization problems are solved in an alternating-minimization framework with a proof of convergence. Simulation results on both synthetic and real-world data demonstrate the superior performance of LRMF and LRTF over state-of-the-art algorithms in terms of accuracy and efficiency. (A simplified alternating-minimization sketch appears below.)

Depth estimation and defocus estimation are two fundamental tasks in computer vision. Recently, many methods have explored the two tasks separately, relying on the feature-learning ability of deep networks, and have achieved impressive progress. However, because densely labeling depth and defocus on real images is difficult, these methods are mostly trained on synthetic datasets, and the performance of the learned networks degrades significantly on real images. In this paper, we tackle a new task: jointly estimating depth and defocus from a single image. We design a dual network with two subnets that estimate depth and defocus, respectively. The network is jointly trained on a synthetic dataset with a physical constraint that enforces consistency between depth and defocus (a thin-lens consistency sketch is given below). Moreover, we design a simple method to label depth and defocus order on a real image dataset, and two novel metrics to measure the accuracy of depth and defocus estimation on real images. Comprehensive experiments demonstrate that joint training with the physical-consistency constraint lets the two subnets guide each other and effectively improves depth and defocus estimation on a real defocused-image dataset.
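For the illumination-transfer method, the role of an attention map can be illustrated with the generic soft-blending formulation used by attention-based image translation: the generator outputs a relit image and an attention map, and attended regions take the relit pixels while the rest pass through from the target. This is a common formulation assumed for illustration, not the paper's exact generator design.

```python
import numpy as np

def attention_blend(target, relit, attn):
    """Soft composition: attn in [0, 1] selects where relighting applies,
    so identity-preserving regions pass through unchanged from the target."""
    return attn * relit + (1.0 - attn) * target

rng = np.random.default_rng(4)
target = rng.random((64, 64, 3))  # target face image (placeholder data)
relit = rng.random((64, 64, 3))   # generator's raw relit output (placeholder)
attn = rng.random((64, 64, 1))    # illumination attention map (placeholder)
out = attention_blend(target, relit, attn)
print(out.shape)
```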
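For the LRMF/LRTF summary, the sketch below shows the alternating-minimization structure over small factors on a masked matrix-completion problem. Note the swap: a plain Frobenius penalty on the factors (which corresponds to nuclear-norm regularization of their product) stands in for the paper's logarithmic-norm surrogate, so this is the algorithmic skeleton rather than the proposed regularizer.

```python
import numpy as np

def als_complete(X, mask, rank, lam=0.1, iters=50, seed=0):
    """Masked matrix completion by alternating least squares on X ~ U @ V.T.

    A Frobenius penalty on U and V stands in for the logarithmic-norm
    regularizer; the alternation over small factors is the same idea."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    reg = lam * np.eye(rank)
    for _ in range(iters):
        for i in range(m):              # update each row of U in closed form
            obs = mask[i]
            Vi = V[obs]
            U[i] = np.linalg.solve(Vi.T @ Vi + reg, Vi.T @ X[i, obs])
        for j in range(n):              # update each row of V in closed form
            obs = mask[:, j]
            Uj = U[obs]
            V[j] = np.linalg.solve(Uj.T @ Uj + reg, Uj.T @ X[obs, j])
    return U @ V.T

# Toy experiment: rank-3 ground truth with ~60% of entries observed.
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
mask = rng.random(A.shape) < 0.6
Xhat = als_complete(A * mask, mask, rank=3)
err = np.linalg.norm((Xhat - A)[~mask]) / np.linalg.norm(A[~mask])
print(f"relative error on unobserved entries: {err:.3f}")
```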
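For the joint depth-defocus work, a physical consistency term can be made concrete with the thin-lens model, under which blur diameter is a deterministic function of depth once the camera parameters are fixed. The camera constants and the L1 form of the penalty below are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def coc_from_depth(depth, focus_dist, focal_len, f_number):
    """Thin-lens circle of confusion (blur diameter, in the units of focal_len):
    c = A * f * |d - s| / (d * (s - f)), with aperture A = f / f_number,
    depth d, focus distance s, focal length f."""
    A = focal_len / f_number
    return A * focal_len * np.abs(depth - focus_dist) / (depth * (focus_dist - focal_len))

def consistency_loss(pred_depth, pred_defocus, focus_dist, focal_len, f_number):
    """Mean absolute gap between the defocus map predicted by the defocus
    subnet and the one implied by the depth subnet through the lens model."""
    implied = coc_from_depth(pred_depth, focus_dist, focal_len, f_number)
    return float(np.mean(np.abs(pred_defocus - implied)))

# Toy check with assumed camera constants (metres): 50 mm lens at f/2,
# focused at 2 m, over a small random depth map.
rng = np.random.default_rng(3)
depth = rng.uniform(1.0, 5.0, size=(32, 32))
defocus = coc_from_depth(depth, 2.0, 0.05, 2.0) + 1e-5 * rng.standard_normal((32, 32))
print(f"consistency loss: {consistency_loss(depth, defocus, 2.0, 0.05, 2.0):.2e}")
```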
