Vibrotactile stimuli can be used to generate the haptic sensation of a static object or the motion of a dynamic object. In this article, we investigate the effects of vibratory frequency and temporal interval on tactile apparent motion. In the experiment, we examined the effect of vibratory frequency, at different temporal intervals, on the tactile apparent motion produced by two successive tactile stimuli on the index fingerpad. Results indicated that tactile apparent motion was perceived not only when both stimuli were "flutter" or both were "vibration" stimuli, but also when one of each type was used. Specifically, when the first stimulus was presented at 40 Hz, "continuous motion" was perceived at all combinations of stimulus frequency, and was reported more clearly at the high-frequency combination than at the low-frequency combination. Tactile apparent motion was predominantly perceived in the stimulus onset asynchrony (SOA) range of 105 ms to 125 ms. We anticipate that our findings and further research will serve as essential resources for the design of tactile devices that represent the motion of dynamic objects.

Realistic 3D facial modeling and animation are increasingly used in graphics, animation, and virtual reality applications. However, generating realistic fine-scale wrinkles on 3D faces, in particular on animated 3D faces, remains a challenging problem that is far from solved. In this paper we propose an end-to-end system to automatically augment coarse-scale 3D faces with synthesized fine-scale geometric wrinkles. By formulating wrinkle generation as a supervised generation task, we implicitly model the continuous space of face wrinkles via a compact generative model, such that plausible face wrinkles can be generated through effective sampling and interpolation in this space. We also introduce a complete pipeline to transfer the synthesized wrinkles between faces with different shapes and topologies.
Through extensive experiments, we demonstrate that our method can robustly synthesize plausible fine-scale wrinkles on a variety of coarse-scale 3D faces with different shapes and expressions.

Visual analytics enables the coupling of machine learning models and humans in a tightly integrated workflow, addressing various analysis tasks. Each task poses distinct demands on analysts and decision-makers. In this survey, we focus on one canonical technique for rule-based classification, namely decision tree classifiers. We provide an overview of available visualizations for decision trees, with a focus on how visualizations differ with respect to 16 tasks. Further, we investigate the types of visual designs employed and the quality measures presented. We find that (i) interactive visual analytics systems for classifier development offer a variety of visual designs, (ii) utilization tasks are sparsely covered, (iii) beyond classifier development, node-link diagrams are omnipresent, and (iv) even systems designed for machine learning experts rarely feature visual representations of quality measures other than accuracy. In conclusion, we see potential for integrating algorithmic techniques, mathematical quality measures, and tailored interactive visualizations to enable human experts to use their knowledge more effectively.

To the best of our knowledge, existing deep-learning-based Video Super-Resolution (VSR) methods exclusively use videos produced by the Image Signal Processor (ISP) of the camera system as inputs. Such methods are 1) inherently suboptimal due to the information loss incurred by non-invertible operations in the ISP, and 2) inconsistent with the real imaging pipeline, where VSR in fact serves as a pre-processing unit of the ISP. To address this issue, we propose a new VSR method that can directly exploit camera sensor data, accompanied by a carefully built Raw Video Dataset (RawVD) for training, validation, and testing.
This method consists of a Successive Deep Inference (SDI) module and a reconstruction module, among others. The SDI module is designed according to the architectural principle suggested by a canonical decomposition result for Hidden Markov Model (HMM) inference; it estimates the target high-resolution frame by repeatedly performing pairwise feature fusion using deformable convolutions. The reconstruction module, built with elaborately designed Attention-based Residual Dense Blocks (ARDBs), serves the purposes of 1) refining the fused feature and 2) learning the color information needed to generate a spatially specific transformation for accurate color correction. Extensive experiments demonstrate that, owing to the informativeness of the camera raw data, the effectiveness of the network architecture, and the separation of the super-resolution and color-correction processes, the proposed method achieves superior VSR results compared to the state of the art and can be adapted to any specific camera ISP. Code and dataset are available at https://github.com/proteus1991/RawVSR.

Siamese trackers comprise two core stages: first learning the features of both the target and search inputs, and then computing response maps via a cross-correlation operation; the response maps can in turn be used for regression and classification to build a typical one-shot detection tracking framework. Although Siamese trackers have drawn continuous interest from the visual tracking community owing to their good trade-off between accuracy and speed, both stages are sensitive to distractors in the search branch, which induce unreliable response positions. To address this issue, we advance Siamese trackers with two novel non-local blocks, forming Nocal-Siam, which leverages the long-range dependency modeling of non-local attention in a supervised fashion from two aspects.
First, a target-aware non-local block (T-Nocal) is proposed to learn target-guided feature weights, which refine the visual features of both the target and search branches and thus effectively suppress noisy distractors. This block reinforces the interplay between the target and search branches in the first stage. Second, we develop a location-aware non-local block (L-Nocal) to associate multiple response maps, which prevents them from inducing divergent candidate target positions in the coming frame. Experiments on five popular benchmarks show that Nocal-Siam performs favorably against strong counterparts both quantitatively and qualitatively.
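To make the long-range dependency property concrete, the following is a minimal, dependency-free sketch of the generic non-local (self-attention) operation that blocks such as T-Nocal and L-Nocal build on. It is not the paper's exact design: the learned embeddings and supervision are omitted, the pairwise function is a plain dot product, and features are Python lists of floats rather than convolutional feature maps.

```python
import math

def non_local(features):
    """Generic non-local operation: for each position i, aggregate ALL
    positions j, weighted by a softmax over dot-product similarities.
    Every output thus depends on every input (long-range dependency),
    unlike a convolution, which only sees a local neighborhood."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    out = []
    for xi in features:
        sims = [dot(xi, xj) for xj in features]
        m = max(sims)                        # subtract max for numerical stability
        weights = [math.exp(s - m) for s in sims]
        z = sum(weights)
        weights = [w / z for w in weights]   # softmax over all positions j
        # weighted sum over all positions -> globally aggregated feature
        out.append([sum(w * xj[k] for w, xj in zip(weights, features))
                    for k in range(len(xi))])
    return out
```

Each output row is a convex combination of all input rows, so a position that strongly resembles the target pattern can reweight features everywhere, which is the intuition behind using such blocks to suppress distractors in the search branch.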