By applying the same categorization to interpretability in medical research, it is hoped that 1) clinicians and practitioners can approach these methods with appropriate caution; 2) insights into interpretability will emerge with greater consideration for medical practice; and 3) initiatives to push forward data-based, mathematically grounded, and technically grounded medical education are encouraged.

Because unknown parameters can jump intermittently during operation, effectively establishing transient and steady-state tracking performance in control systems with unknown intermittent actuator faults is important. In this article, two prescribed performance adaptive neural control schemes based on command-filtered backstepping are developed for a class of uncertain strict-feedback nonlinear systems. When the system states are available for feedback, the state feedback control scheme is investigated. When the system states are not directly measured, a cascade high-gain observer is designed to reconstruct them, and an output feedback control scheme is presented in turn. Because the projection operator and a modified Lyapunov function are used in the adaptive law design and the stability analysis, respectively, it is proven that both schemes not only ensure the boundedness of all closed-loop signals but also confine the tracking errors within prescribed, arbitrarily small residual sets for all time, even in the presence of intermittent jumps of the unknown parameters. The prescribed transient and steady-state performances of the system, in the sense of the tracking errors, are thus established. Furthermore, we prove that the tracking performance under output feedback recovers the tracking performance under state feedback as the observer gain decreases. Simulation studies verify the effectiveness of the theoretical results.
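As a rough illustration of the prescribed performance constraint discussed above, the sketch below uses a commonly adopted exponentially decaying performance function; the parameter values, the error trajectory, and the function names are illustrative assumptions, not the article's actual design.

```python
# Minimal sketch of a prescribed performance envelope (assumed standard form):
# rho(t) = (rho0 - rho_inf) * exp(-kappa * t) + rho_inf, with the requirement
# |e(t)| < rho(t) for all t. Parameter values here are illustrative only.
import numpy as np

def performance_envelope(t, rho0=1.0, rho_inf=0.05, kappa=2.0):
    """Exponentially decaying bound that prescribes transient and steady-state behavior."""
    return (rho0 - rho_inf) * np.exp(-kappa * t) + rho_inf

def within_prescribed_bounds(error, t):
    """True if the tracking error stays strictly inside the prescribed residual set."""
    return bool(np.all(np.abs(error) < performance_envelope(t)))

# Example: a decaying, oscillating tracking error satisfies the envelope at all sampled times.
t = np.linspace(0.0, 5.0, 500)
e = 0.8 * np.exp(-3.0 * t) * np.cos(5.0 * t)
print(within_prescribed_bounds(e, t))  # True
```

Shrinking rho_inf tightens the steady-state residual set, while kappa governs how quickly the transient bound contracts, which is the sense in which such envelopes prescribe both transient and steady-state behavior.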
The proliferation of location-aware social networks (LSNs) has facilitated research on user mobility modeling and check-in prediction, thereby benefiting various downstream applications such as precision marketing and urban management. Most existing studies focus only on predicting the spatial aspect of check-ins, whereas jointly inferring the spatial and temporal aspects better fits real application scenarios. Moreover, although social relations have been extensively studied in recommender systems, only a few efforts have been made on next check-in location prediction, leaving room for further improvement. In this article, we study the next check-in inference problem, which demands the joint inference of the next check-in location (Where) and time (When) for a target user (Who). We devise a model named ARNPP-GAT, which combines an attention-based recurrent neural point process with graph attention networks. The core technical insight of ARNPP-GAT is to integrate user long-term representation learning, short-term behavior modeling, and a temporal point process into a unified architecture. Specifically, ARNPP-GAT first leverages graph attention networks to learn long-term user representations by encoding their social relations. More importantly, the attention-based recurrent neural point process endows the model with the capability to characterize the effects of past check-in events and to perform multitask learning, jointly predicting the next check-in time and location. Empirical results on two real-world data sets demonstrate that ARNPP-GAT outperforms several competitors, validating the contributions of multitask learning and social relation modeling.

Segmenting arbitrary 3D objects into structurally meaningful constituent parts is a fundamental problem encountered in a wide range of computer graphics applications. Existing methods for 3D shape segmentation suffer from complex geometry processing and heavy computation caused by the use of low-level features, as well as from fragmented segmentation results due to the lack of global consideration. We present an efficient method, called SEG-MAT, based on the medial axis transform (MAT) of the input shape. Specifically, with the rich geometrical and structural information encoded in the MAT, we develop a simple and principled approach to effectively identify the various types of junctions between different parts of a 3D shape. Extensive evaluations and comparisons show that our method outperforms the state-of-the-art methods in terms of segmentation quality and is also one order of magnitude faster.

A common operation performed in Virtual Reality (VR) environments is locomotion. Although real walking can be a natural and intuitive way to manage displacements in such environments, its use is generally limited by the size of the area tracked by the VR system (typically, the size of a room) or requires expensive technologies to cover particularly extended settings. A number of approaches have been proposed to enable effective exploration in VR, each characterized by different hardware requirements and costs and capable of providing different levels of usability and performance. However, the lack of a well-defined methodology for assessing and comparing the available approaches makes it difficult to identify, among the various alternatives, the best solutions for selected application domains. To deal with this issue, this article introduces a novel evaluation testbed which, by building on the outcomes of many separate works reported in the literature, aims to support a comprehensive analysis of the considered design space. An experimental protocol for collecting objective and subjective measures is proposed, together with a scoring system able to rank locomotion approaches based on a weighted set of requirements. Testbed usage is illustrated in a use case requiring the selection of the technique to adopt in a given application scenario.

We propose a novel approach to reconstructing 3D motion data from a flexible magnetic flux sensor array using deep learning and a structure-aware temporal bilateral filter. Computing the 3D configuration of markers (inductor-capacitor (LC) coils) from flux sensor data is difficult because existing numerical approaches suffer from system noise, dead angles, the need for initialization, and limitations in the sensor array's layout. We solve these issues with deep neural networks that learn the regression from simulated flux values to the LC coils' 3D configuration, which can then be applied to actual LC coils at any location and orientation within the capture volume. To cope with system noise and the dead-angle limitation caused by the characteristics of the hardware and sensing principle, we propose a structure-aware temporal bilateral filter for reconstructing motion sequences. Our method can track various movements, including fingers manipulating objects, beetles moving inside a vivarium with leaves and soil, and the flow of opaque fluid. Since no power supply is needed for the lightweight wireless markers, our method can robustly track movements for a very long time, making it suitable for observations that are difficult to track with existing motion-tracking systems. Furthermore, the flexibility of the flux sensor layout allows users to reconfigure it for their own needs, making our approach suitable for a variety of virtual reality applications.
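As a loose illustration of the temporal bilateral filtering idea used in the motion reconstruction work above, the sketch below smooths a marker trajectory with weights that combine temporal closeness and positional similarity; the structure-aware weighting of the actual method is not reproduced, and the window size and sigma parameters are illustrative assumptions.

```python
# Minimal sketch of a temporal bilateral filter over an (N, 3) marker trajectory.
# Weights combine temporal distance (sigma_t) with positional difference (sigma_x),
# so steady noise is smoothed while large, genuine motions are mostly preserved.
import numpy as np

def temporal_bilateral_filter(positions, window=5, sigma_t=2.0, sigma_x=0.05):
    n = len(positions)
    filtered = np.empty_like(positions)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neighbors = positions[lo:hi]
        dt = np.arange(lo, hi) - i
        w_time = np.exp(-dt**2 / (2 * sigma_t**2))
        w_range = np.exp(-np.sum((neighbors - positions[i])**2, axis=1) / (2 * sigma_x**2))
        w = w_time * w_range
        filtered[i] = (w[:, None] * neighbors).sum(axis=0) / w.sum()
    return filtered

# A noisy trajectory with a genuine step at frame 50: the noise is smoothed,
# while the step is largely preserved because cross-step neighbors get tiny range weights.
rng = np.random.default_rng(0)
traj = np.zeros((100, 3)); traj[50:, 0] = 1.0
smoothed = temporal_bilateral_filter(traj + 0.01 * rng.standard_normal((100, 3)))
```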
Over the last decade, growing amounts of government data have been made available in an attempt to increase transparency and civic participation, but it is unclear whether this data serves non-expert communities, given gaps in access and in the technical knowledge needed to interpret this "open" data. We conducted a two-year design study focused on the creation of a community-based data display using United States Environmental Protection Agency data on water permit violations by oil storage facilities on the Chelsea Creek in Massachusetts, exploring whether situated data physicalization and Participatory Action Research could support meaningful engagement with open data. We selected this data because it is of interest to local groups and available online, yet remains largely invisible and inaccessible to the Chelsea community. The resulting installation, Chemicals in the Creek, responds to the call for community-engaged visualization processes and provides an application of situated methods of data representation. It proposes event-centered and power-aware modes of engagement using contextual and embodied data representations. The design of Chemicals in the Creek is grounded in interactive workshops, and we analyze it through event observation, interviews, and community outcomes. We reflect on the role of community-engaged research in the Information Visualization community relative to recent conversations on new approaches to design studies and evaluation.

The collection and visual analysis of large-scale data from complex systems, such as electronic health records or clickstream data, has become increasingly common across a wide range of industries. This type of retrospective visual analysis, however, is prone to a variety of selection bias effects, especially for high-dimensional data where only a subset of dimensions is visualized at any given time. The risk of selection bias is even higher when analysts dynamically apply filters or perform grouping operations during ad hoc analyses. These bias effects threaten the validity and generalizability of insights discovered during visual analysis as a basis for decision making. Past work has focused on bias transparency, helping users understand when selection bias may have occurred. However, countering the effects of selection bias via bias mitigation is typically left to the user as a separate process. Dynamic reweighting (DR) is a novel computational approach to selection bias mitigation that helps users craft bias-corrected visualizations. This paper describes the DR workflow, introduces key DR visualization designs, and presents statistical methods that support the DR process. Use cases from the medical domain, as well as findings from domain expert user interviews, are also reported.
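To make the reweighting idea concrete, here is a deliberately simplified sketch in which a filtered subset is reweighted so that its distribution over a single stratifying attribute matches the full data set. This generic post-stratification scheme, with made-up column names and data, is only an assumption about the flavor of the approach, not the DR paper's actual statistical machinery.

```python
# Hypothetical example: reweight an ad hoc filtered subset so its age-group mix
# matches the full cohort (generic post-stratification, not the DR paper's method).
import pandas as pd

def poststratification_weights(full, subset, by):
    """Weight each subset row by P_full(stratum) / P_subset(stratum)."""
    p_full = full[by].value_counts(normalize=True)
    p_sub = subset[by].value_counts(normalize=True)
    return (p_full / p_sub).reindex(subset[by]).to_numpy()

full = pd.DataFrame({"age_group": ["<40"] * 60 + [">=40"] * 40})
subset = full.iloc[:80].copy()            # an ad hoc filter that over-samples "<40"
subset["weight"] = poststratification_weights(full, subset, by="age_group")

# Weighted counts per age group now follow the 60/40 proportions of `full`:
print(subset.groupby("age_group")["weight"].sum())  # "<40": 48.0, ">=40": 32.0
```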
An infographic is a data visualization technique that combines graphic and textual descriptions in an aesthetic and effective manner. Creating infographics is a difficult and time-consuming process that often requires many attempts and adjustments, even for experienced designers, not to mention novice users with limited design expertise. Recently, a few approaches have been proposed to automate the creation process by applying predefined blueprints to user information. However, predefined blueprints are often hard to create and hence limited in volume and diversity. In contrast, well-designed infographics created by professionals have been accumulating rapidly on the Internet. These online examples often represent a wide variety of design styles and serve as exemplars or inspiration for people who wish to create their own infographics. Based on these observations, we propose to generate infographics by automatically imitating examples. We present a two-stage approach, namely retrieve-then-adapt. In the retrieval stage, we index online examples by their visual elements. Given user information, we transform it into a concrete query by sampling from a learned distribution over visual elements, and then find appropriate examples in our example library based on the similarity between the example indexes and the query. For a retrieved example, we generate an initial draft by replacing its content with the user information. In many cases, however, the user information cannot be perfectly fitted to the retrieved example. Therefore, we further introduce an adaptation stage. Specifically, we propose an MCMC-like approach and leverage recursive neural networks to iteratively adjust the initial draft and improve its visual appearance until a satisfactory result is obtained. We implement our approach on widely used proportion-related infographics and demonstrate its effectiveness with sample results and expert reviews.
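As a simplified illustration of the retrieval stage described above, the sketch below indexes examples by counts of visual elements and ranks them against a query vector by cosine similarity; the element vocabulary, the library contents, and the use of raw counts in place of the learned query distribution are all illustrative assumptions.

```python
# Minimal sketch of example retrieval by visual-element similarity.
# The vocabulary and library are assumptions made for illustration only.
import numpy as np

ELEMENTS = ["icon", "number", "text_block", "arrow", "pictograph"]  # assumed vocabulary

def index_vector(element_counts):
    """Turn a {element: count} index into a fixed-length vector over the vocabulary."""
    return np.array([element_counts.get(e, 0) for e in ELEMENTS], dtype=float)

def rank_examples(query_counts, library):
    """Rank library examples by cosine similarity between their indexes and the query."""
    q = index_vector(query_counts)
    scores = []
    for name, counts in library.items():
        v = index_vector(counts)
        sim = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
        scores.append((sim, name))
    return sorted(scores, reverse=True)

library = {
    "example_a": {"icon": 3, "number": 3, "text_block": 3},
    "example_b": {"pictograph": 10, "number": 1},
}
# A query with three proportion items (icon + number + text per item) prefers example_a.
print(rank_examples({"icon": 2, "number": 2, "text_block": 2}, library))
```

The adaptation stage would then take the top-ranked example, substitute the user content, and iteratively refine the layout, which the sketch does not attempt to model.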