Pachecoaarup4612


Applying the technique to kidney cancer, we found 3105 triplet interactions. We believe miRCoop can aid our understanding of the complicated regulatory interactions in different health and disease states of the cell and can assist in designing miRNA-based therapies. Code is available at https://github.com/guldenolgun.

Recovery of the upper extremity (UE) and hand function is considered the highest priority for people with tetraplegia, because these functions closely integrate with their activities of daily living. Spinal cord transcutaneous stimulation (scTS) has great potential to facilitate functional restoration of paralyzed limbs by neuromodulating the excitability of the spinal network. Recently, this approach has been demonstrated effective in improving UE function in people with motor complete and incomplete cervical spinal cord injury (SCI). However, the research thus far is limited by the lack of a comprehensive assessment of functional improvement and neurological recovery throughout the intervention. The goal of this study was to investigate whether scTS can also facilitate UE functional restoration in an individual with motor and sensory complete tetraplegia. A 38-year-old male with a C5-level, ASIA Impairment Scale A SCI (15 years post-injury, left-hand dominant pre- and post-injury) received 18 sessions (60 minutes/session) of scTS combined with task-specific hand training over the course of 8 weeks. The total score of the Graded Redefined Assessment of Strength, Sensibility, and Prehension improved significantly from 72/232 to 96/232 at post-intervention, and remained between 82/232 and 86/232 during the three-month follow-up without any further treatment. Bilateral handgrip force improved by 283.4% (left) and 30.7% (right) at post-intervention. These strength gains were sustained at 233.5%-250% (left) and 11.5%-73.1% (right) during the follow-up evaluation visits. The Neuromuscular Recovery Scale demonstrated dramatic and long-lasting improvements following the completion of the intervention. Changes in spinal motor evoked potentials from pre- to post-intervention indicated an increased level of spinal network excitability. The present data offer preliminary evidence that the novel scTS intervention combined with hand training can enhance UE functional use in people with motor and sensory complete SCI.

Existing studies have demonstrated that eye tracking can be a complementary approach to electroencephalogram (EEG) based brain-computer interaction (BCI), especially for improving BCI performance in visual perception and cognition. In this paper, we propose a method to fuse EEG and eye movement data extracted from motor imagery (MI) tasks. Our tests showed that, on the feature layer, the average MI classification accuracy from the fusion of EEG and eye movement data was higher than that of either pure EEG data or pure eye movement data. We also found that the average classification accuracy from fusion on the decision layer was higher than that from the feature layer. Additionally, when EEG data were unavailable because some electrodes had shifted, we combined the EEG data collected from the remaining electrodes (only 50% of the original) with the eye movement data, and the average MI classification accuracy was only 1.07% lower than that from all available electrodes. This result indicates that eye movement data can compensate for the loss of EEG data in the MI scenario. Overall, our approach proved valuable for augmenting MI-based BCI applications.

As an instance-level recognition problem, re-identification (re-ID) requires models to capture diverse features. However, with continued training, re-ID models pay more and more attention to the salient areas.
As a result, the model may focus on only a few small regions with salient representations and ignore other important information. This phenomenon leads to inferior performance, especially when models are evaluated on data with small inter-identity variation. In this paper, we propose a novel network, Erasing-Salient Net (ES-Net), to learn comprehensive features by erasing the salient areas in an image. ES-Net introduces a novel method to locate salient areas by the confidence of objects and erases them efficiently within a training batch. Meanwhile, to mitigate the over-erasing problem, we use a trainable pooling layer, P-pooling, that generalizes global max and global average pooling. Experiments are conducted on two specific re-identification tasks (i.e., person re-ID and vehicle re-ID). Our ES-Net outperforms state-of-the-art methods on three person re-ID benchmarks and two vehicle re-ID benchmarks: mAP / Rank-1 rates of 88.6% / 95.7% on Market1501, 78.8% / 89.2% on DukeMTMC-reID, 57.3% / 80.9% on MSMT17, and 81.9% / 97.0% on VeRi-776; Rank-1 / Rank-5 rates of 83.6% / 96.9% on VehicleID (Small), 79.9% / 93.5% on VehicleID (Medium), and 76.9% / 90.7% on VehicleID (Large). Moreover, the visualized salient areas provide human-interpretable visual explanations for the ranking results.

In this article, we present a new algorithm for fast, online 3D reconstruction of dynamic scenes using times of arrival of photons recorded by single-photon detector arrays. One of the main challenges of 3D imaging using single-photon lidar in practical applications is the presence of strong ambient illumination, which corrupts the data and can jeopardize the detection of peaks/surfaces in the signals. This background noise complicates not only the observation model classically used for 3D reconstruction but also the estimation procedure, which requires iterative methods. In this work, we consider a new similarity measure for robust depth estimation, which allows us to use a simple observation model and a non-iterative estimation procedure while remaining robust to mis-specification of the background illumination model. This choice leads to a computationally attractive depth estimation procedure without significant degradation of the reconstruction performance. The new depth estimation procedure is coupled with a spatio-temporal model to capture the natural correlation between neighboring pixels and successive frames for dynamic scene analysis. The resulting online inference process is scalable and well suited for parallel implementation. The benefits of the proposed method are demonstrated through a series of experiments conducted with simulated and real single-photon lidar videos, allowing the analysis of dynamic scenes at 325 m observed under extreme ambient illumination conditions.

Although deep neural networks have achieved great success on numerous large-scale tasks, poor interpretability remains a notorious obstacle to practical applications. In this paper, we propose a novel and general attention mechanism, loss-based attention, upon which we modify deep neural networks to mine the significant image patches that explain which parts determine the image-level decision. This is inspired by the fact that some patches contain significant objects, or parts of them, that drive the image-level decision. Unlike previous attention mechanisms that adopt separate layers and parameters to learn weights and image prediction, the proposed loss-based attention mechanism mines significant patches by using the same parameters to learn patch weights, logits (class vectors), and the image prediction simultaneously, so as to connect the attention mechanism with the loss function and boost patch precision and recall.
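To make the shared-parameter idea concrete, here is a minimal numeric sketch (not the authors' implementation; the toy feature matrix, the weight matrix, and the use of each patch's peak logit as its attention score are illustrative assumptions): a single linear classifier produces per-patch class logits, the attention weights come from those same logits, and the image prediction is the attention-weighted sum of the patch logits.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def loss_based_attention(patch_feats, W):
    """Toy sketch: one shared linear layer yields per-patch class logits;
    patch attention weights are a softmax over each patch's peak logit,
    and image-level logits are the attention-weighted sum of patch logits,
    so the same parameters drive both the weights and the prediction."""
    # per-patch logits: logits[i][c] = sum_d patch_feats[i][d] * W[d][c]
    logits = [[sum(f[d] * W[d][c] for d in range(len(f)))
               for c in range(len(W[0]))] for f in patch_feats]
    # attention from the same logits: confident patches get more weight
    weights = softmax([max(l) for l in logits])
    # image prediction = weighted sum over patches
    image_logits = [sum(w * l[c] for w, l in zip(weights, logits))
                    for c in range(len(W[0]))]
    return weights, image_logits
```

Because the classifier is shared, a gradient on the image-level loss reaches every patch through both its logits and its weight, which is the connection between attention and loss that the mechanism exploits.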
Additionally, unlike previous popular networks that use max-pooling or stride operations in convolutional layers without considering the spatial relationship of features, the modified deep architectures first remove these operations to preserve the spatial relationship of image patches and greatly reduce their dependencies, and then add two convolutional or capsule layers to extract their features. With the learned patch weights, the image-level decision of the modified deep architectures is the weighted sum over patches. Extensive experiments on large-scale benchmark databases demonstrate that the proposed architectures obtain better or competitive performance relative to state-of-the-art baseline networks, with better interpretability. The source code is available at https://github.com/xsshi2015/Loss-based-Attention-for-Interpreting-Image-level-Prediction-of-Convolutional-Neural-Networks.

To improve the coding performance of depth maps, 3D-HEVC includes several new depth intra coding tools, at the expense of increased complexity due to a flexible quadtree Coding Unit/Prediction Unit (CU/PU) partitioning structure and a huge number of intra mode candidates. Compared to natural images, depth maps contain large plain regions surrounded by sharp edges at object boundaries. We observe that the features proposed in the literature speed up either the CU/PU size decision or the intra mode decision, and that they struggle to make proper predictions for CUs/PUs with multi-directional edges in depth maps. In this work, we reveal that CUs with multi-directional edges are highly correlated with the distribution of corner points (CPs) in the depth map. We propose the CP as a good feature to guide the splitting of CUs with multi-directional edges into smaller units until only a single directional edge remains. Such a smaller unit can then be well predicted by a conventional intra mode.
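The CP-guided splitting rule can be sketched as a simple recursion (a hypothetical toy, assuming a corner detector has already produced CP coordinates; real 3D-HEVC partitioning also involves rate-distortion decisions not modeled here): a CU holding more than one corner point is taken as containing multi-directional edges and is split into four sub-CUs until a minimum size is reached.

```python
def quadtree_split(x, y, size, corner_pts, min_size=8, max_cps=1):
    """Toy CP-guided CU partitioning: a CU containing more than
    `max_cps` corner points (a proxy for multi-directional edges)
    is split into four sub-CUs; recursion stops at `min_size`.
    Returns the leaf CUs as (x, y, size) tuples."""
    inside = [(cx, cy) for (cx, cy) in corner_pts
              if x <= cx < x + size and y <= cy < y + size]
    if size <= min_size or len(inside) <= max_cps:
        return [(x, y, size)]          # leaf CU: single edge (or tiny)
    half = size // 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            leaves += quadtree_split(x + dx, y + dy, half, inside,
                                     min_size, max_cps)
    return leaves
```

For example, a 32x32 CU whose corner points cluster in one quadrant is split finely only there, while CP-free quadrants stay as large leaves.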
Besides, a fast intra mode decision is also proposed for non-CP PUs, which prunes the conventional HEVC intra modes, skips the depth modeling mode decision, and determines segment-wise depth coding early. Furthermore, a two-step adaptive corner point selection technique is designed to make the proposed algorithm adaptive to frame content and quantization parameters, with the capability of providing a flexible tradeoff between synthesized view quality and complexity. Simulation results show that the proposed algorithm provides about 66% time reduction for the 3D-HEVC intra encoder without incurring noticeable performance degradation for synthesized views, and it also outperforms previous state-of-the-art algorithms in terms of time reduction and ΔBDBR.

With the assistance of sophisticated training methods applied to single labeled datasets, the performance of fully-supervised person re-identification (Person Re-ID) has improved significantly in recent years. However, models trained on a single dataset usually suffer considerable performance degradation when applied to videos from a different camera network. To make Person Re-ID systems more practical and scalable, several cross-dataset domain adaptation methods have been proposed, which achieve high performance without labeled data from the target domain. However, these approaches still require unlabeled data from the target domain during the training process, making them impractical in many settings. A practical Person Re-ID system pre-trained on other datasets should start running immediately after deployment on a new site, without having to wait until sufficient images or videos are collected and the pre-trained model is tuned. To serve this purpose, in this paper, we reformulate person re-identification as a multi-dataset domain generalization problem.
We propose a multi-dataset feature generalization network (MMFA-AAE), which is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to 'unseen' camera systems. The network is based on an adversarial auto-encoder that learns a generalized domain-invariant latent feature representation, with the Maximum Mean Discrepancy (MMD) measure used to align the distributions across multiple domains. Extensive experiments demonstrate the effectiveness of the proposed method. Our MMFA-AAE approach not only outperforms most domain generalization Person Re-ID methods, but also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.

Extreme instance imbalance among categories and combinatorial explosion make the recognition of Human-Object Interaction (HOI) a challenging task. Few studies have addressed both challenges directly. Motivated by the success of few-shot learning in building robust models from a few instances, we formulate HOI recognition as a few-shot task in a meta-learning framework to alleviate the above challenges. Because the intrinsic characteristics of HOI are diverse and interactive, we propose a Semantic-guided Attentive Prototypes Network (SAPNet) framework to learn a semantic-guided metric space in which HOI recognition can be performed by computing distances to attentive prototypes of each class. Specifically, the model generates attentive prototypes guided by the category names of actions and objects, which highlight the commonalities of images from the same HOI class. In addition, we design two alternative prototype calculation methods, i.e., a Prototypes Shift (PS) approach and a Hallucinatory Graph Prototypes (HGP) approach, which explore suitable category prototype representations for HOI.
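For illustration, the underlying prototype-based classification (without SAPNet's semantic guidance or attention, which are the paper's contributions) can be sketched as follows; the class names and embeddings are hypothetical:

```python
def prototypes(support):
    """Compute one prototype per class as the mean embedding of its
    few labelled support examples: {label: [vec, ...]} -> {label: vec}."""
    protos = {}
    for label, vecs in support.items():
        dim = len(vecs[0])
        protos[label] = [sum(v[d] for v in vecs) / len(vecs)
                         for d in range(dim)]
    return protos

def classify(query, protos):
    """Assign the query embedding to the class whose prototype is
    nearest in squared Euclidean distance (metric-based few-shot)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda c: d2(query, protos[c]))
```

SAPNet's PS and HGP variants replace the plain mean above with shifted or graph-hallucinated prototypes, but the distance-to-prototype decision rule is the same.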
Finally, to realize the few-shot HOI task, we reorganize two HOI benchmark datasets with two split strategies, yielding HICO-NN, TUHOI-NN, HICO-NF, and TUHOI-NF. Extensive experimental results on these datasets demonstrate the effectiveness of our proposed SAPNet approach.

A dynamic model for analyzing the thickness-shear vibration of a circular quartz crystal plate with multiple concentric ring electrodes on its upper and bottom surfaces is established with the aid of a coordinate transformation. The theoretical solution is obtained and can be written as a superposition of Mathieu functions and modified Mathieu functions. The convergence of the solution is demonstrated, and its correctness is numerically validated against results from the finite element method (FEM). Subsequently, a systematic investigation is carried out to quantify the effect of electrode size on the energy trapping phenomenon, i.e., on the resonant frequency and mode shape, which reveals that the ring electrode has a great influence on the performance of resonators. With increasing electrode inertia, i.e., radius and mass ratio, new trapped modes emerge, with the vibration mainly concentrated in the partially electroded regions of the plate. Besides, owing to the anisotropy, degenerate trapped modes have different resonant frequencies, and the frequency discrepancy between them becomes smaller for higher modes. Finally, the influence of multiple ring electrodes is investigated; the qualitative analysis and quantitative results demonstrate that multiple ring electrodes lead to a more uniform mass sensitivity than a single ring electrode.
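For reference, the Mathieu and modified Mathieu functions appearing in such solutions are the solutions of Mathieu's equation and its modified (radial) form, written here in standard notation with characteristic value a and parameter q; the paper's specific coordinate transformation and parameter values are not reproduced:

```latex
\frac{d^{2}y}{d\eta^{2}} + \left(a - 2q\cos 2\eta\right) y = 0,
\qquad
\frac{d^{2}y}{d\xi^{2}} - \left(a - 2q\cosh 2\xi\right) y = 0.
```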
The results are widely applicable and can provide theoretical guidance for the structural design and manufacturing of quartz resonators, as well as a thorough interpretation of the underlying physical mechanism.

Transcranial focused ultrasound is a novel noninvasive therapeutic modality for glioblastoma and other disorders of the brain. However, because the phase aberrations caused by the skull must be corrected with computed tomography (CT) images, the transcranial transducer is tightly fixed on the patient's head to avoid any variation in their relative position, and focus shifting relies mainly on the capacity for electronic beam steering. Due to the presence of grating lobes and the rapid degradation of focus quality with increasing focus-shifting distance, transcranial focus-shifting sonication may unintentionally damage healthy brain tissue. To reduce the risks associated with transcranial focused ultrasound therapy, linear frequency-modulated (FM) excitation is proposed. The k-space corrected pseudospectral time domain (PSTD) method and an acoustic holography approach based on the Rayleigh integral are combined to calculate the distribution of the deposited acoustic power. The corresponding simulation was performed with axial/lateral focus shifting at different distances. The distributions of the deposited acoustic power show that linear FM excitation can effectively suppress undesired prefocal grating lobes without compromising focus quality.

Interactive segmentation has recently been explored as a way to effectively and efficiently harvest high-quality segmentation masks by iteratively incorporating user hints. While iterative in nature, most existing interactive segmentation methods tend to ignore the dynamics of successive interactions and treat each interaction independently.
We propose to model iterative interactive image segmentation as a Markov decision process (MDP) and solve it with reinforcement learning (RL), where each voxel is treated as an agent. Considering the large exploration space of voxel-wise prediction and the dependence among neighboring voxels in segmentation tasks, multi-agent reinforcement learning is adopted, with the voxel-level policy shared among agents. Considering that boundary voxels are more important for segmentation, we further introduce a boundary-aware reward, which consists of a global reward, in the form of relative cross-entropy gain, to update the policy in a constrained direction, and a boundary reward, in the form of relative weight, to emphasize the correctness of boundary predictions. To combine the advantages of different types of interactions, i.e., the simplicity and efficiency of point clicks and the stability and robustness of scribbles, we propose a supervoxel-clicking based interaction design. Experimental results on four benchmark datasets show that the proposed method significantly outperforms the state of the art, with fewer interactions, higher accuracy, and enhanced robustness.

Capturing the 'mutual gaze' of people is essential for understanding and interpreting the social interactions between them. To this end, this paper addresses the problem of detecting people Looking At Each Other (LAEO) in video sequences. For this purpose, we propose LAEO-Net++, a new deep CNN for determining LAEO in videos. In contrast to previous works, LAEO-Net++ takes spatio-temporal tracks as input and reasons about the whole track. It consists of three branches, one for each character's tracked head and one for their relative position. Moreover, we introduce two new LAEO datasets, UCO-LAEO and AVA-LAEO. A thorough experimental evaluation demonstrates the ability of LAEO-Net++ to successfully determine whether two people are LAEO and the temporal window in which it happens.
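As a purely geometric illustration of the LAEO condition itself (a toy baseline, not LAEO-Net++'s learned spatio-temporal model; here head positions and gaze directions are assumed to be given): two people are LAEO when each one's gaze ray points at the other's head within an angular tolerance.

```python
import math

def looking_at(head_a, gaze_a, head_b, max_angle_deg=15.0):
    """True if the gaze ray from head A points at head B's position
    within `max_angle_deg` degrees (2D toy geometry)."""
    to_b = (head_b[0] - head_a[0], head_b[1] - head_a[1])
    dot = gaze_a[0] * to_b[0] + gaze_a[1] * to_b[1]
    na, nb = math.hypot(*gaze_a), math.hypot(*to_b)
    if na == 0 or nb == 0:
        return False
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
    return ang <= max_angle_deg

def laeo(head_a, gaze_a, head_b, gaze_b, max_angle_deg=15.0):
    """Mutual gaze: both people must be looking at each other."""
    return (looking_at(head_a, gaze_a, head_b, max_angle_deg)
            and looking_at(head_b, gaze_b, head_a, max_angle_deg))
```

The learned model replaces the hand-set gaze vectors and threshold with features inferred from head-crop tracks and relative position, but the mutual (symmetric) test is the same idea.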
Our model achieves state-of-the-art results on the existing TVHID-LAEO video dataset, significantly outperforming previous approaches. Finally, we apply LAEO-Net++ to a social network, where we automatically infer the social relationship between pairs of people based on the frequency and duration with which they look at each other, and show that LAEO detection can be a useful tool for the guided search of human interactions in videos.

We present the lifted proximal operator machine (LPOM) to train fully-connected feed-forward neural networks. LPOM represents the activation function as an equivalent proximal operator and adds the proximal operators to the objective function of a network as penalties. LPOM is block multi-convex in all layer-wise weights and activations. This allows us to develop a new block coordinate descent (BCD) method with a convergence guarantee to solve it. Due to this novel formulation and solving method, LPOM uses only the activation function itself and does not require any gradient steps. It thus avoids the gradient vanishing and exploding issues that often plague gradient-based methods. It can also handle various non-decreasing Lipschitz continuous activation functions. Additionally, LPOM is almost as memory-efficient as stochastic gradient descent, and its parameter tuning is relatively easy. We further implement and analyze a parallel solution of LPOM. We first propose a general asynchronous-parallel BCD method with a convergence guarantee, then use it to solve LPOM, resulting in asynchronous-parallel LPOM. For faster speed, we develop synchronous-parallel LPOM. We validate the advantages of LPOM on various network architectures and datasets. We also apply synchronous-parallel LPOM to autoencoder training and demonstrate its fast convergence and superior performance.
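The proximal-operator view of activations that LPOM builds on can be checked numerically for the ReLU case: ReLU coincides with the proximal operator of the indicator function of the nonnegative orthant, i.e., ReLU(x) = argmin over u >= 0 of 0.5*(u - x)^2. The brute-force sketch below verifies this equivalence; it is not the authors' solver.

```python
def prox_nonneg(x, grid_step=1e-3, hi=5.0):
    """Brute-force proximal operator of the indicator of u >= 0:
    argmin_{u >= 0} 0.5 * (u - x)**2, evaluated on a grid [0, hi]."""
    best_u, best_v = 0.0, float("inf")
    u = 0.0                      # feasible set starts at u = 0
    while u <= hi:
        v = 0.5 * (u - x) ** 2
        if v < best_v:
            best_u, best_v = u, v
        u += grid_step
    return best_u

def relu(x):
    """Standard ReLU activation."""
    return max(0.0, x)
```

For positive inputs the minimizer sits at u = x, and for negative inputs the constraint pins it at u = 0, exactly reproducing ReLU; LPOM exploits such equivalences to replace gradient steps on activations with convex subproblems.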

Article authors: Pachecoaarup4612 (Strauss Ahmad)