Warmingskaarup8523


This inherent strategy of the brain controller allowed for precise positioning of the COP within the reduced size of the target. In conclusion, the dynamics of the ROT movement are always precisely adjusted to the stability of the upright posture; thus, the dynamic characteristics of the COP step response are also sensitive measures of postural stability, and the ROT can be recommended as a useful test for this assessment in the general population.

Optimizing rendering performance is critical for a wide variety of virtual reality (VR) applications. Foveated rendering is emerging as an indispensable technique for reconciling interactive frame rates with ever-higher head-mounted display resolutions. Here, we present a simple yet effective technique for further reducing the cost of foveated rendering by leveraging ocular dominance, the tendency of the human visual system to prefer scene perception from one eye over the other. Our new approach, eye-dominance-guided foveated rendering (EFR), renders the scene at a lower foveation level (with higher detail) for the dominant eye than for the non-dominant eye. Compared with traditional foveated rendering, EFR can be expected to provide superior rendering performance while preserving the same level of perceived visual quality.

In this work, we tackle the problem of person search, a challenging task consisting of pedestrian detection and person re-identification (re-ID). Instead of sharing representations in a single joint model, we find that separating the detector and re-ID feature extraction yields better performance. In order to extract more representative features for each identity, we segment out the foreground person from the original image patch. We propose a simple yet effective re-ID method, which models the foreground person and the original image patch individually and obtains enriched representations from two separate CNN streams. We also propose a Confidence Weighted Stream Attention method, which further re-adjusts the relative importance of the two streams by incorporating the detection confidence. Furthermore, we simplify the whole pipeline by incorporating semantic segmentation into the re-ID network, which is trained with bounding boxes as weakly annotated masks and identification labels simultaneously. In experiments on two standard person search benchmarks, i.e., CUHK-SYSU and PRW, we achieve mAP of 83.3% and 32.8%, respectively, surpassing the state of the art by a large margin. An extensive ablation study and model inspection further justify our motivation.

Region-based methods have become the state-of-the-art solution for monocular 6-DOF object pose tracking in recent years. However, two main challenges remain: robustness to heterogeneous configurations (of both foreground and background) and robustness to partial occlusions. In this paper, we propose a novel region-based monocular 3D object pose tracking method to tackle these problems. Firstly, we design a new strategy to define local regions, which is simple yet efficient in constructing discriminative local color histograms. Contrary to previous methods, which define multiple circular regions around the object contour, we propose to define multiple overlapped, fan-shaped regions according to polar coordinates. This local region partitioning strategy produces far fewer local regions that need to be maintained and updated, while still being temporally consistent. Secondly, we propose to detect occluded pixels using edge distance and color cues.
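A minimal sketch of the fan-shaped partitioning described above, assuming region count and overlap values that are purely illustrative (the paper's exact parameterization is not given here):

```python
import numpy as np

def assign_fan_regions(pixels_xy, center_xy, n_regions=40, overlap_deg=4.0):
    """Assign pixels to overlapping fan-shaped regions defined in polar
    coordinates around the projected object center (illustrative sketch).

    pixels_xy : (N, 2) pixel coordinates near the object contour.
    center_xy : (2,) projected object center.
    Returns an (N, n_regions) boolean membership matrix; a pixel can belong
    to more than one region because neighbouring fans overlap.
    """
    d = pixels_xy - np.asarray(center_xy, dtype=float)
    angles = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
    width = 360.0 / n_regions
    centers = (np.arange(n_regions) + 0.5) * width
    # Angular distance from each pixel to each fan center (wrap-around aware).
    diff = np.abs(angles[:, None] - centers[None, :])
    diff = np.minimum(diff, 360.0 - diff)
    # A pixel belongs to a fan if it lies within the fan half-width plus overlap.
    return diff <= (width / 2.0 + overlap_deg)

# Each fan keeps its own foreground/background color histograms, so only
# n_regions histograms need to be maintained and updated per frame.
```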
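As an illustration of how such per-pixel occlusion cues might be combined, the sketch below scores each contour pixel by its distance to the nearest image edge and by how well its color agrees with the foreground model; the function, thresholds, and weighting are assumptions, not the paper's exact formulation:

```python
import numpy as np

def occlusion_scores(edge_dist, fg_prob, dist_scale=5.0, color_thresh=0.5):
    """Hypothetical per-pixel occlusion score combining two cues.

    edge_dist : (H, W) distance of each pixel to the nearest detected image edge.
    fg_prob   : (H, W) posterior that the pixel's color belongs to the
                foreground color histogram (in [0, 1]).
    Returns a score in [0, 1]; values near 1 suggest the pixel is occluded.
    """
    # Cue 1: pixels far from any detected edge are suspicious, because the
    # expected silhouette edge is missing there.
    edge_cue = 1.0 - np.exp(-edge_dist / dist_scale)
    # Cue 2: pixels whose color disagrees with the foreground model are
    # likely covered by an occluder.
    color_cue = (fg_prob < color_thresh).astype(float)
    # Simple fixed combination; a real system would tune or learn this weighting.
    return 0.5 * edge_cue + 0.5 * color_cue

# The score can then be turned into a pixel-wise weight that down-weights
# suspected occluded pixels in the pose-optimization energy.
weights = 1.0 - occlusion_scores(np.random.rand(4, 4) * 10, np.random.rand(4, 4))
```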
The proposed occlusion detection strategy is seamlessly integrated into the region-based pose optimization pipeline via a pixel-wise weight function, which significantly alleviates the interference caused by partial occlusions. We demonstrate the effectiveness of the two proposed strategies with a careful ablation study. Furthermore, we compare the performance of our method with the most recent state-of-the-art region-based methods on a recently released large dataset, on which the proposed method achieves competitive results with a higher average tracking success rate. Evaluations on two real-world datasets also show that our method is capable of handling realistic tracking scenarios.

Class imbalance has emerged as one of the major challenges for medical image segmentation. The model cascade (MC) strategy, a popular scheme, significantly alleviates the class imbalance issue by running a set of individual deep models for coarse-to-fine segmentation. Despite its outstanding performance, however, this method leads to undesired system complexity and ignores the correlation among the models. To handle these flaws of the MC approach, we propose in this paper a lightweight deep model, the One-pass Multi-task Network (OM-Net), which solves class imbalance better than MC does while requiring only one-pass computation for brain tumor segmentation. First, OM-Net integrates the separate segmentation tasks into one deep model, which consists of shared parameters to learn joint features as well as task-specific parameters to learn discriminative features. Second, to optimize OM-Net more effectively, we take advantage of the correlation among tasks to design both an online training data transfer strategy and a curriculum learning-based training strategy. Third, we further propose sharing prediction results between tasks, which enables us to design a cross-task guided attention (CGA) module. By following the guidance of the prediction results provided by the previous task, CGA can adaptively recalibrate channel-wise feature responses based on category-specific statistics. Finally, a simple yet effective post-processing method is introduced to refine the segmentation results of the proposed attention network. Extensive experiments are conducted to demonstrate the effectiveness of the proposed techniques. Most impressively, we achieve state-of-the-art performance on the BraTS 2015 testing set and the BraTS 2017 online validation set. Using these approaches, we also won joint third place in the BraTS 2018 challenge among 64 participating teams. The code will be made publicly available at https://github.com/chenhong-zhou/OM-Net.

Image blur caused by camera movement is common in long-exposure photography. A recent approach to addressing image blur is to record camera motion via inertial sensors in imaging equipment such as smartphones and single-lens reflex (SLR) cameras. However, because of device performance limitations, directly estimating a blur kernel from sensor data is infeasible. Previous works that attempted to correct blurry image content via sensor data have also suffered from theoretical defects. Here, we propose a novel method for deblurring images that uses inertial sensors and a short-long-short (SLS) exposure strategy. Auxiliary short-exposure images captured before and after the formal long-exposure image are employed to correct the sensor data. A half-blind deconvolution algorithm is proposed to refine the estimated kernel.
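Referring back to the OM-Net description above, the cross-task guidance idea can be sketched roughly as squeeze-and-excitation-style channel gating driven by the previous task's prediction; the shapes and the tiny gating network below are illustrative assumptions, not the published architecture:

```python
import numpy as np

def cross_task_channel_gate(features, prev_prob, w1, w2):
    """Rough sketch of prediction-guided channel recalibration.

    features  : (C, H, W) feature maps of the current task.
    prev_prob : (H, W) probability map predicted by the previous task.
    w1, w2    : weights of a small two-layer gating network (C x C each),
                purely illustrative.
    """
    # Category-specific statistics: average each channel over the region
    # the previous task considers foreground.
    mask = prev_prob / (prev_prob.sum() + 1e-8)
    stats = (features * mask[None]).sum(axis=(1, 2))          # (C,)
    # Small gating network followed by a sigmoid, as in SE-style attention.
    hidden = np.maximum(stats @ w1, 0.0)                       # ReLU
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))                # (C,)
    # Recalibrate channel-wise feature responses.
    return features * gate[:, None, None]

# Example with random tensors (C=8, 16x16 maps).
C = 8
out = cross_task_channel_gate(np.random.rand(C, 16, 16), np.random.rand(16, 16),
                              np.random.randn(C, C), np.random.randn(C, C))
```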
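Once a kernel estimate is available, the non-blind part of such a deblurring pipeline can be illustrated with a simplified frequency-domain deconvolution. This is a generic Tikhonov-regularized inverse filter, not the authors' half-blind or IRLS solver:

```python
import numpy as np

def deconvolve_frequency(blurred, kernel, reg=1e-2):
    """Generic regularized deconvolution in the frequency domain.

    blurred : (H, W) grayscale blurry image.
    kernel  : (kh, kw) estimated blur kernel (normalized to sum to 1).
    reg     : regularization weight; larger values suppress noise
              amplification at the cost of sharpness.
    """
    H, W = blurred.shape
    # Zero-pad the kernel to image size and take its transfer function.
    K = np.fft.fft2(kernel, s=(H, W))
    B = np.fft.fft2(blurred)
    # Regularized inverse filter: X = conj(K) * B / (|K|^2 + reg).
    X = np.conj(K) * B / (np.abs(K) ** 2 + reg)
    return np.real(np.fft.ifft2(X))

# Example: blur and restore a random image with a small box kernel.
img = np.random.rand(64, 64)
kernel = np.ones((5, 5)) / 25.0
blurry = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
restored = deconvolve_frequency(blurry, kernel)
```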
An extra smoothing filter is integrated into the framework to address the coarse initial kernel. Hence, we propose a fast optimization solution that uses the iteratively reweighted least squares (IRLS) method in the frequency domain. We evaluate these methods on several blind deconvolution tasks. Quantitative indicators and the visual quality of the deblurring results show that our method outperforms previous methods in terms of both image quality restoration and computational cost. This method will increase the feasibility of applying deblurring to imaging devices.

Recently, the Fully Convolutional Network (FCN) has become the go-to architecture for image segmentation, including semantic scene parsing. However, it is difficult for a generic FCN to predict semantic labels around object boundaries, so FCN-based methods usually produce parsing results with inaccurate boundaries. Meanwhile, many works have demonstrated that level-set-based active contours are superior for boundary estimation with sub-pixel accuracy. However, they are quite sensitive to initial settings. To address these limitations, in this paper we propose a novel Deep Multiphase Level Set (DMLS) method for semantic scene parsing, which efficiently incorporates multiphase level sets into deep neural networks. The proposed method consists of three modules: recurrent FCNs, an adaptive multiphase level set, and deeply supervised learning. More specifically, the recurrent FCNs learn multi-level representations of input images with different contexts. The adaptive multiphase level set drives a discriminative contour for each semantic class, making use of the advantages of both global and local information. In each time step of the recurrent FCNs, deeply supervised learning is incorporated for model training. Extensive experiments on three public benchmarks show that our proposed method achieves new state-of-the-art performance. The source code will be released at https://github.com/Pchank/DMLS-for-SSP.

Piezoelectricity in bone is thought to be a mechanism by which ultrasound promotes the healing of bone fractures. However, few studies have been conducted in the more clinically relevant MHz range. To understand piezoelectricity in bone, we fabricated ultrasound transducers using bone samples as the piezoelectric material and verified longitudinal ultrasound radiation and reception in the MHz range. The maximum transmitting sensitivity of the bone transducer was 140 mPa/V, nearly 1/1000 that of a polyvinylidene difluoride (PVDF) transducer, which has better electrical properties and piezoelectricity. The resonance frequencies of the transducer depend on the plate thickness and on the angle between the bone axis (the alignment direction of the hydroxyapatite crystallites) and the ultrasound propagation direction, reflecting the anisotropic character of bone. The reception and transmission sensitivities of the bone transducers also depend on the plate thickness and angle, showing maximum values at off-axis angles. These results indicate the existence of both piezoelectricity and inverse piezoelectricity in bone, which may be key factors in understanding bone healing by low-intensity biophysical (electrical or mechanical) stimulation.

Doppler ultrasound is the most common technique for non-invasive quantification of blood flow, which in turn is of major clinical importance for the assessment of the cardiovascular condition.
In this paper, a method is proposed in which the vessel is imaged in the short axis, which has the advantage of capturing the whole flow profile while measuring the vessel area simultaneously. This view is easier to obtain than the longitudinal image currently used in flow velocity estimation, reducing operator dependency. However, the Doppler angle in cross-sectional images is unknown, since the vessel wall can no longer be used to estimate the flow direction. The proposed method estimates the Doppler angle in these images from the elliptical intersection between a cylindrical vessel and the ultrasound plane. The parameters of this ellipse (major axis, minor axis, and rotation) are used to estimate the Doppler angle by solving a least-squares problem. Theoretical feasibility was shown in a geometrical model, after which the Doppler angle was estimated in simulated ultrasound images generated in Field II, yielding a mean error within 4.
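A geometric sketch of the underlying idea, not the authors' least-squares formulation: the intersection of a plane with a circular cylinder is an ellipse whose minor axis equals the vessel diameter, so the tilt of the vessel axis away from the image-plane normal follows from the axis ratio, and the ellipse rotation gives the in-plane direction of the axis. The parameter conventions below are assumptions for illustration only:

```python
import numpy as np

def doppler_angle_from_ellipse(major, minor, rotation_deg, beam_dir_deg=90.0):
    """Estimate a beam-to-flow angle from a vessel's elliptical cross-section.

    major, minor  : semi-axes of the fitted ellipse (same units).
    rotation_deg  : in-plane orientation of the major axis [degrees].
    beam_dir_deg  : in-plane direction of the ultrasound beam [degrees].
    """
    # Tilt of the vessel axis away from the image-plane normal: a circular
    # cylinder cut at tilt phi gives minor = r and major = r / cos(phi).
    phi = np.arccos(np.clip(minor / major, -1.0, 1.0))
    # The in-plane component of the vessel axis lies along the ellipse major axis.
    psi = np.deg2rad(rotation_deg - beam_dir_deg)
    # Doppler angle between the in-plane beam and the 3-D vessel axis.
    cos_theta = np.sin(phi) * np.cos(psi)
    return np.rad2deg(np.arccos(np.abs(cos_theta)))

# Example: a mildly tilted vessel whose major axis is 20 degrees off the beam.
print(doppler_angle_from_ellipse(major=4.4, minor=4.0, rotation_deg=110.0))
```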

Article authors: Warmingskaarup8523 (Monroe McGrath)