Byerslaugesen3980

Meanwhile, we introduce a new evaluation metric (mINP) for person Re-ID, indicating the cost of finding all the correct matches, which provides an additional criterion for evaluating a Re-ID system. Finally, some important yet under-investigated open issues are discussed.

With the advent of deep learning, many dense prediction tasks, i.e., tasks that produce pixel-level predictions, have seen significant performance improvements. The typical approach is to learn these tasks in isolation, that is, a separate neural network is trained for each individual task. Yet, recent multi-task learning (MTL) techniques have shown promising results with respect to performance, computation and/or memory footprint by jointly tackling multiple tasks through a learned shared representation. In this survey, we provide a well-rounded view of state-of-the-art deep learning approaches for MTL in computer vision, with an explicit emphasis on dense prediction tasks. Our contributions are as follows. First, we consider MTL from a network architecture point of view: we include an extensive overview and discuss the advantages and disadvantages of recent popular MTL models. Second, we examine various optimization methods for the joint learning of multiple tasks; we summarize the qualitative elements of these works and explore their commonalities and differences. Finally, we provide an extensive experimental evaluation across a variety of dense prediction benchmarks to examine the pros and cons of the different methods, including both architectural and optimization-based strategies.

The Iterative Closest Point (ICP) algorithm and its variants are a fundamental technique for rigid registration between two point sets, with wide applications in areas ranging from robotics to 3D reconstruction. The main drawbacks of ICP are its slow convergence and its sensitivity to outliers, missing data, and partial overlaps. Recent work such as Sparse ICP achieves robustness via sparsity optimization at the cost of computational speed. In this paper, we propose a new method for robust registration with fast convergence. First, we show that classical point-to-point ICP can be treated as a majorization-minimization (MM) algorithm, and propose an Anderson acceleration approach to speed up its convergence. In addition, we introduce a robust error metric based on the Welsch function, which is minimized efficiently using the MM algorithm with Anderson acceleration. On challenging datasets with noise and partial overlaps, we achieve similar or better accuracy than Sparse ICP while being at least an order of magnitude faster. Finally, we extend the robust formulation to point-to-plane ICP and solve the resulting problem using a similar Anderson-accelerated MM strategy. Our robust ICP methods improve registration accuracy on benchmark datasets while being competitive in computational time.

The convolutional neural network (CNN) has become a basic model for solving many computer vision problems. In recent years, a new class of CNNs, the recurrent convolutional neural network (RCNN), inspired by the abundant recurrent connections in the visual systems of animals, was proposed. The critical element of the RCNN is the recurrent convolutional layer (RCL), which incorporates recurrent connections between neurons in the standard convolutional layer. With an increasing number of recurrent computations, the receptive fields (RFs) of neurons in the RCL expand unboundedly, which is inconsistent with biological facts. We propose to modulate the RFs of neurons by introducing gates to the recurrent connections. The gates control the amount of context information input to the neurons, and the neurons' RFs therefore become adaptive. The resulting layer is called the gated recurrent convolution layer (GRCL). Multiple GRCLs constitute a deep model called the gated RCNN (GRCNN). The GRCNN was evaluated on several computer vision tasks, including object recognition, scene text recognition and object detection, and obtained much better results than the RCNN. In addition, when combined with other adaptive RF techniques, the GRCNN demonstrated competitive performance against state-of-the-art models on benchmark datasets for these tasks.
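To make the kind of computation behind a metric such as mINP (first paragraph above) concrete, the following is a minimal sketch of an inverse-negative-penalty style score, assuming that for each query the gallery has already been ranked by similarity and correct matches are flagged; the function name and data layout are illustrative and should be checked against the original survey's definition.

    # Minimal sketch of a mean inverse negative penalty (mINP) style metric.
    # Assumption: ranked_matches is a list of per-query boolean lists, where
    # ranked_matches[q][i] is True if the gallery item at rank i+1 is a correct
    # match for query q. The formulation mirrors the common description of mINP:
    # INP = (#correct matches) / (rank of the hardest, i.e. last, correct match).
    def mean_inverse_negative_penalty(ranked_matches):
        inp_scores = []
        for matches in ranked_matches:
            ranks = [i + 1 for i, is_match in enumerate(matches) if is_match]
            if not ranks:                      # query has no correct match in the gallery
                continue
            hardest_rank = ranks[-1]           # position of the last correct match
            inp_scores.append(len(ranks) / hardest_rank)
        return sum(inp_scores) / len(inp_scores) if inp_scores else 0.0

    # Example: 3 correct matches, the hardest one found at rank 5 -> INP = 3/5.
    print(mean_inverse_negative_penalty([[True, False, True, False, True]]))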
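For the robust registration formulation summarized above, the sketch below illustrates one Welsch-weighted, point-to-point ICP update viewed as a majorization-minimization step; it uses NumPy and SciPy, omits the Anderson acceleration component, and the scale parameter nu and nearest-neighbour search are assumptions rather than the paper's implementation.

    # One Welsch-weighted point-to-point ICP iteration in the MM spirit:
    # correspondences from a closest-point search, Welsch weights that
    # down-weight outliers, and a weighted closed-form rigid update.
    import numpy as np
    from scipy.spatial import cKDTree

    def welsch_icp_step(src, dst, R, t, nu=0.1):
        """One MM-style update of (R, t) aligning src (Nx3) to dst (Mx3)."""
        moved = src @ R.T + t
        dists, idx = cKDTree(dst).query(moved)           # closest-point correspondences
        corr = dst[idx]
        w = np.exp(-dists**2 / (2.0 * nu**2))            # Welsch weights
        # Weighted Kabsch: rigid motion minimizing sum_i w_i |R p_i + t - q_i|^2.
        mu_p = (w[:, None] * src).sum(0) / w.sum()
        mu_q = (w[:, None] * corr).sum(0) / w.sum()
        H = (w[:, None] * (src - mu_p)).T @ (corr - mu_q)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_new = Vt.T @ D @ U.T
        t_new = mu_q - R_new @ mu_p
        return R_new, t_new

    # Example: a slightly perturbed copy of a random cloud is re-aligned.
    rng = np.random.default_rng(0)
    dst_pts = rng.normal(size=(200, 3))
    src_pts = dst_pts + rng.normal(scale=0.01, size=(200, 3))
    R, t = np.eye(3), np.zeros(3)
    for _ in range(10):
        R, t = welsch_icp_step(src_pts, dst_pts, R, t)
    print(np.round(t, 3))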
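The gating idea in the GRCL can be illustrated with a short, simplified PyTorch-style layer, sketched below under assumed layer sizes and iteration counts; it is not the authors' architecture, only an illustration of gates modulating the recurrent convolution so that the added context is controlled at each step.

    # Simplified sketch of a gated recurrent convolutional layer (GRCL): a gate
    # computed from the feed-forward and recurrent signals modulates how much
    # recurrent context is added per iteration. Sizes are illustrative.
    import torch
    import torch.nn as nn

    class GatedRecurrentConvLayer(nn.Module):
        def __init__(self, channels, iterations=3):
            super().__init__()
            self.iterations = iterations
            self.ff = nn.Conv2d(channels, channels, 3, padding=1)      # feed-forward path
            self.rec = nn.Conv2d(channels, channels, 3, padding=1)     # recurrent path
            self.gate_ff = nn.Conv2d(channels, channels, 1)            # gate from input
            self.gate_rec = nn.Conv2d(channels, channels, 1)           # gate from state
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            state = self.relu(self.ff(x))
            for _ in range(self.iterations):
                gate = torch.sigmoid(self.gate_ff(x) + self.gate_rec(state))
                state = self.relu(self.ff(x) + gate * self.rec(state))  # gated recurrent update
            return state

    # Example: a 1x16x32x32 feature map keeps its shape through the layer.
    print(GatedRecurrentConvLayer(16)(torch.randn(1, 16, 32, 32)).shape)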
We consider the problem of referring segmentation in images and videos with natural language. Given an input image (or video) and a referring expression, the goal is to segment the entity referred to by the expression in the image or video. In this paper, we propose a cross-modal self-attention (CMSA) module that exploits fine details of individual words and of the input image or video, effectively capturing the long-range dependencies between linguistic and visual features. Our model can adaptively focus on informative words in the referring expression and important regions in the visual input. We further propose a gated multi-level fusion (GMLF) module to selectively integrate self-attentive cross-modal features corresponding to different levels of visual features. This module controls the information flow of features at different levels using high-level and low-level semantic information related to different attentive words. In addition, we introduce a cross-frame self-attention (CFSA) module to effectively integrate temporal information across consecutive frames, which extends our method to referring segmentation in videos. Experiments on four benchmark referring image segmentation datasets and two actor-and-action video segmentation datasets consistently demonstrate that our proposed approach outperforms existing state-of-the-art methods.
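A minimal sketch of the cross-modal self-attention idea is given below, assuming flattened visual features and word embeddings projected to a common dimension and fused with standard multi-head self-attention; the dimensions and the use of nn.MultiheadAttention are simplifications, not the paper's exact module.

    # Visual tokens and word tokens are projected to one dimension, concatenated,
    # and jointly attended so every position can attend to every word and region.
    import torch
    import torch.nn as nn

    class CrossModalSelfAttention(nn.Module):
        def __init__(self, vis_dim, lang_dim, dim=256, heads=8):
            super().__init__()
            self.vis_proj = nn.Linear(vis_dim, dim)
            self.lang_proj = nn.Linear(lang_dim, dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, vis_feats, word_feats):
            # vis_feats: (B, H*W, vis_dim) flattened spatial features
            # word_feats: (B, T, lang_dim) word embeddings of the referring expression
            tokens = torch.cat([self.vis_proj(vis_feats), self.lang_proj(word_feats)], dim=1)
            fused, _ = self.attn(tokens, tokens, tokens)   # joint self-attention
            n_vis = vis_feats.shape[1]
            return fused[:, :n_vis]                        # language-aware visual features

    # Example: a 14x14 visual grid fused with a 10-word expression.
    out = CrossModalSelfAttention(512, 300)(torch.randn(2, 196, 512), torch.randn(2, 10, 300))
    print(out.shape)   # torch.Size([2, 196, 256])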

Surgical activity recognition is a fundamental step in computer-assisted interventions. This paper reviews the state of the art in methods for automatic recognition of fine-grained gestures in robotic surgery, focusing on recent data-driven approaches, and outlines open questions and future research directions.

An article search was performed on five bibliographic databases with the following search terms: robotic, robot-assisted, JIGSAWS, surgery, surgical, gesture, fine-grained, surgeme, action, trajectory, segmentation, recognition, parsing. Selected articles were classified based on the level of supervision required for training and divided into groups representing the major frameworks for time-series analysis and data modelling.

A total of 52 articles were reviewed. The research field is showing rapid expansion, with the majority of articles published in the last 4 years. Deep-learning-based temporal models with discriminative feature extraction and multi-modal data integration have demonstrated promising results on small surgical datasets. Currently, unsupervised methods perform significantly worse than supervised approaches.

The development of large and diverse open-source datasets of annotated demonstrations is essential for the development and validation of robust solutions for surgical gesture recognition. While new strategies for discriminative feature extraction and knowledge transfer, or unsupervised and semi-supervised approaches, can mitigate the need for data and labels, they have not yet been demonstrated to achieve comparable performance. Important future research directions include the detection and forecasting of gesture-specific errors and anomalies.

This paper is a comprehensive and structured analysis of surgical gesture recognition methods aiming to summarize the status of this rapidly evolving field.

Ankle plantarflexion plays an important role in forward propulsion and anterior-posterior balance during locomotion. This component of gait is often critically impacted by neurotraumas and neurological diseases. We hypothesized that augmenting plantar cutaneous feedback, via closed-loop distal-tibial nerve stimulation, could increase ankle plantarflexion during walking. To test the hypothesis, one intact rat walked on a motorized treadmill with an implanted electronic device and electrodes for closed-loop neural recording and stimulation. A constant-current biphasic electrical pulse train was applied to the distal-tibial nerve, timed with the stance phase based on the electromyogram recorded from the medial gastrocnemius muscle. The stimulation current threshold to evoke plantar cutaneous feedback was set at 30 μA (1T), based on the compound action potential evoked by stimulation. The maximum ankle joint angle at plantarflexion was increased from 149.4° (baseline) to 165.4° and 161.6° during the application of stimulation currents of 3.3T and 6.6T, respectively. The minimum ankle joint angle at dorsiflexion was decreased from 59.4° (baseline) to 53.1° by 3.3T stimulation, but was not changed by 6.6T. Plantar cutaneous augmentation also changed other gait kinematic parameters. The stance duty factor was increased from 51.9% (baseline) to 65.7% and 64.0% by 3.3T and 6.6T, respectively, primarily due to a decrease in swing duration. Cycle duration was consistently decreased by the stimulation. In the control trial after two stimulation trials, a strong after-effect was detected in overall gait kinematics as well as in ankle plantarflexion, suggesting that this stimulation has the potential to produce long-term changes in gait kinematics.
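The closed-loop logic described above can be sketched conceptually as follows; the EMG envelope threshold, units and function names are hypothetical placeholders rather than the study's implanted-device implementation.

    # Conceptual sketch: stimulation of the distal-tibial nerve is gated by the
    # medial gastrocnemius EMG envelope so that it coincides with the stance
    # phase, and its amplitude is a multiple of the 30 uA threshold (1T).
    THRESHOLD_CURRENT_UA = 30.0    # 1T: current evoking plantar cutaneous feedback

    def stimulation_amplitude(emg_envelope, emg_stance_threshold, multiplier=3.3):
        """Return the stimulation current (uA) to deliver for one control cycle.

        emg_envelope: rectified, low-pass-filtered EMG sample (assumed units: mV)
        emg_stance_threshold: EMG level above which the leg is taken to be in stance
        multiplier: stimulation level in units of T (e.g. 3.3T or 6.6T in the study)
        """
        if emg_envelope >= emg_stance_threshold:        # stance phase detected
            return multiplier * THRESHOLD_CURRENT_UA    # deliver biphasic pulse train
        return 0.0                                      # swing phase: no stimulation

    # Example: a 0.8 mV EMG envelope against a 0.5 mV stance threshold triggers 3.3T.
    print(stimulation_amplitude(0.8, 0.5))              # 99.0 uA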

In this paper, we introduce an autonomous robotic ultrasound (US) imaging system based on reinforcement learning (RL). The proposed system and framework are designed to control the US probe to perform fully autonomous imaging of a soft, moving and marker-less target based only on single RGB images of the scene.

We propose several approaches and methods to achieve the following objectives: real-time US probe control, constant-force tracking on soft surfaces, and automatic imaging. First, to express the state of the robotic US imaging task, a state representation model is proposed to reduce the dimensionality of the imaging state and to encode the force and US information into the scene image space. Then, an RL agent is trained with a policy-gradient-based RL model using the single RGB image as the only observation. To achieve adaptable constant-force tracking between the US probe and the soft moving target, we propose a force-to-displacement control method based on an admittance controller.
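As a rough illustration of the force-to-displacement idea, the sketch below shows a simple admittance-style update in which the error between measured and desired contact force is converted into a small probe displacement; the gains, time step and first-order form are assumptions, not the paper's controller.

    # Admittance-style force-to-displacement update along the probe axis.
    # Positive displacement is taken as motion away from the tissue.
    def admittance_step(f_measured, f_desired, velocity, dt=0.01, mass=1.0, damping=20.0):
        """Return (displacement, new_velocity) for one control cycle."""
        force_error = f_measured - f_desired        # positive: pressing too hard
        accel = (force_error - damping * velocity) / mass
        velocity = velocity + accel * dt            # virtual mass-damper dynamics
        displacement = velocity * dt                # retract when force is too high
        return displacement, velocity

    # Example: probe presses with 6 N against a 5 N target -> small retraction command.
    dx, v = admittance_step(6.0, 5.0, velocity=0.0)
    print(dx, v)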

In the simulation experiment, we verified the feasibility of the integrated method. Furthermore, we evaluated the proposed force-to-displacement method to demonstrate the safety and effectiveness of adaptable constant force tracking. Finally, we conducted phantom and volunteer experiments to verify the feasibility of the method on a real system.

The experiments indicated that our approaches were stable and feasible for autonomous and accurate control of the US probe.

The proposed system has potential application value in image-guided and robotic surgery.

To investigate the effect of motivation on improvements in the Functional Independence Measure (FIM) scores in subacute stroke patients with cognitive impairment.

This retrospective cohort study included 358 consecutive subacute stroke patients with first-ever stroke and Mini-Mental State Examination score ≤23 at admission. We determined motivation and rehabilitation outcome using the vitality index and FIM-motor gain, respectively. Stepwise multiple regression analysis was performed to identify the factors at admission related to FIM-motor gain.
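As a hypothetical illustration of this kind of analysis, the sketch below fits an ordinary least squares model predicting FIM-motor gain from admission variables using statsmodels; the column names and data are placeholders, and a true stepwise procedure would add or drop predictors iteratively based on a criterion such as the p-value.

    # Hypothetical regression of FIM-motor gain on admission measures.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per patient; values are invented placeholders for illustration.
    df = pd.DataFrame({
        "fim_motor_gain":          [23, 30, 12, 40, 18, 27, 35, 15],
        "age":                     [74, 81, 69, 66, 79, 72, 64, 85],
        "onset_to_admission_days": [20, 35, 14, 25, 40, 18, 22, 38],
        "vitality_index":          [7, 9, 5, 10, 6, 8, 9, 4],
        "sias_motor":              [50, 60, 40, 70, 45, 55, 65, 35],
    })
    model = smf.ols(
        "fim_motor_gain ~ age + onset_to_admission_days + vitality_index + sias_motor",
        data=df,
    ).fit()
    print(model.params)    # unstandardized coefficients (B)
    print(model.pvalues)   # per-predictor p-values used in stepwise selection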

A total of 80 participants were enrolled in this study (mean age, 74.2 ± 11.3 years). The median (interquartile range) vitality index at admission and FIM-motor gain were 7 (4) and 23 (22) points, respectively. Stepwise multiple regression analysis revealed that age (B, -0.43; 95% confidence interval [CI], -0.65 to -0.21; β, -0.31; P < .001), duration from stroke onset to admission (B, -0.18; 95% CI, -0.33 to -0.04; β, -0.20; P = .014) and Stroke Impairment Assessment Set motor function (B, 1.

Article authors: Byerslaugesen3980 (Weinstein McKenna)