Calderonrich7775

From Iurium Wiki

Version as of 18 Oct 2024, 03:19, created by Calderonrich7775 (talk | contribs) (New page created with the text "In addition, the gradient detached fusion (GDF) module is incorporated to produce an ensemble result with multiscale features via effective feature fusion.…")

In addition, the gradient detached fusion (GDF) module is incorporated to produce an ensemble result with multiscale features via effective feature fusion. Extensive experiments on four challenging fine-grained datasets show that, with a negligible increase in parameters, the proposed HSD framework and the GDF module both bring significant performance gains over different backbones and achieve state-of-the-art classification performance.

Communication and computation resources are normally limited in remote/networked control systems, so saving either could substantially contribute to cost reduction, lifespan extension, and reliability enhancement for such systems. This article investigates an event-triggered control method that saves both communication and computation resources for a class of uncertain nonlinear systems in the presence of actuator failures and full-state constraints. By introducing triggering mechanisms for actuation updating and parameter adaptation, and with the aid of unified constraining functions, a neuroadaptive and fault-tolerant event-triggered control scheme is developed with several salient features: 1) online computation and communication resources are substantially reduced thanks to unsynchronized (uncorrelated) event-triggering paces for control updating and parameter adaptation; 2) systems with and without constraints can be addressed uniformly without imposing feasibility conditions on virtual controllers; and 3) the output tracking error converges to a prescribed precision region in the presence of actuation faults and state constraints.
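As a rough illustration of a relative-threshold event-triggering rule of the kind described above (the thresholds and signal here are hypothetical; the paper's actual triggering conditions are not reproduced), a new control value is transmitted only when it deviates sufficiently from the last transmitted one:

```python
import math

def should_update(u_current, u_last_sent, threshold=0.1, bias=0.01):
    """Relative event-triggering rule: transmit a new control value only
    when it deviates enough from the last transmitted one."""
    return abs(u_current - u_last_sent) >= threshold * abs(u_last_sent) + bias

# Simulate a stand-in control signal and count actual transmissions.
sent, updates = 0.0, 0
for k in range(100):
    u = math.sin(0.1 * k)      # hypothetical computed control input
    if should_update(u, sent):
        sent = u               # the actuator holds this value between events
        updates += 1
```

Between events the actuator simply holds the last transmitted value, which is how such schemes save communication and computation relative to transmitting at every sampling instant.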
Both theoretical analysis and numerical simulation verify the benefits and efficiency of the proposed method.

This letter summarizes and proves the concept of bounded-input bounded-state (BIBS) stability for weight convergence of a broad family of in-parameter-linear nonlinear neural architectures (IPLNAs), as it generally applies to a broad family of incremental gradient learning algorithms. A practical BIBS convergence condition follows from the derived proofs for individual learning points or batches in real-time applications.

Unsupervised domain adaptation (UDA) has attracted increasing attention in recent years; it adapts classifiers to an unlabeled target domain by exploiting a labeled source domain. To reduce the discrepancy between source and target domains, adversarial learning methods are typically used to seek domain-invariant representations by confusing the domain discriminator. However, classifiers may not be well adapted to such a domain-invariant representation space, as the sample- and class-level data structures can be distorted during adversarial learning. In this article, we propose a novel transferable feature learning approach on graphs (TFLG) for unsupervised adversarial domain adaptation (DA), which jointly incorporates sample- and class-level structure information across two domains. TFLG first constructs graphs for minibatch samples and identifies the classwise correspondence across domains. A novel cross-domain graph convolutional operation is designed to jointly align the sample- and class-level structures in the two domains. Moreover, a memory bank is designed to further exploit the class-level information.
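Class-level memory banks of the sort mentioned above are commonly implemented as one momentum-averaged feature vector per class; the sketch below is a generic version under that assumption, not TFLG's exact design:

```python
import numpy as np

class ClassMemoryBank:
    """One momentum-averaged feature vector per class."""
    def __init__(self, num_classes, dim, momentum=0.9):
        self.memory = np.zeros((num_classes, dim))
        self.momentum = momentum

    def update(self, features, labels):
        # Blend each new feature into its class slot.
        for f, y in zip(features, labels):
            self.memory[y] = self.momentum * self.memory[y] + (1 - self.momentum) * f

# Example: two 4-D minibatch features for classes 0 and 1.
bank = ClassMemoryBank(num_classes=3, dim=4)
bank.update(np.array([[1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]]), [0, 1])
```

The momentum update keeps a slowly moving class prototype that is more stable than any single minibatch, which is why such banks are used to expose class-level structure during training.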
Extensive experiments on benchmark datasets demonstrate the effectiveness of our approach compared with state-of-the-art UDA methods.

Vision-and-language navigation (VLN) is a challenging task that requires an agent to navigate real-world environments by understanding natural language instructions and visual information received in real time. Prior works have implemented VLN tasks in continuous environments or on physical robots, all of which use a fixed camera configuration due to the limitations of datasets, such as a 1.5-m height, a 90° horizontal field of view (HFOV), and so on. However, real-life robots built for different purposes have multiple camera configurations, and the large gap in visual information makes it difficult to directly transfer learned navigation skills between robots. In this brief, we propose a visual perception generalization strategy based on meta-learning, which enables the agent to adapt quickly to a new camera configuration. In the training phase, we first localize the generalization problem to the visual perception module and then compare two meta-learning algorithms for better generalization in seen and unseen environments. One uses the model-agnostic meta-learning (MAML) algorithm, which requires few-shot adaptation, and the other is a metric-based meta-learning method with a feature-wise affine transformation (AT) layer. Experimental results on the VLN-CE dataset demonstrate that our strategy successfully adapts learned navigation skills to new camera configurations, and the two algorithms show their respective advantages in seen and unseen environments.

G protein-coupled receptors (GPCRs) account for about 40% to 50% of drug targets, and many human diseases are related to them. Accurate prediction of GPCR interactions is essential not only to understand their structural roles but also to design more effective drugs.
At present, GPCR interaction prediction mainly relies on machine learning methods, which generally require a large number of independent and identically distributed samples to achieve good results. However, labeled GPCR samples are scarce, and transfer learning has a strong advantage in such small-sample settings. This paper therefore proposes a transfer learning method based on sample similarity: XGBoost serves as a weak classifier, and the TrAdaBoost algorithm with JS-divergence-based data weight initialization transfers samples to construct the dataset. A deep neural network based on the attention mechanism is then trained on this dataset and applied to existing GPCRs for prediction. In short-distance contact prediction, the accuracy of our method is 0.26 higher than that of similar methods.

Sequence alignment is an essential step in computational genomics. More accurate and efficient pre-alignment methods, which run before the expensive computation of final verification, are still urgently needed. In this article, we propose a more accurate and efficient pre-alignment algorithm for sequence alignment, called DiagAF. First, DiagAF uses a new lower bound on edit distance based on shifted Hamming masks; the new bound uses fewer shifted Hamming masks than state-of-the-art algorithms such as SHD and MAGNET, and it takes into account the information of edit-distance path exchanges on the masks. Second, DiagAF can align sequence pairs of unequal length, whereas state-of-the-art methods handle only equal-length pairs. Third, DiagAF can align sequences with early termination for true alignments. In our experiments, we compared DiagAF with state-of-the-art methods: it achieves a much smaller error rate while using less time.
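The idea behind Hamming-mask pre-alignment can be shown in toy form: a position that matches the other sequence under no shift within the edit budget must be covered by its own edit, which yields a cheap lower bound for filtering pairs before expensive verification. The sketch below is an SHD-style simplification, not DiagAF's actual bound:

```python
def shd_lower_bound(a, b, k):
    """Toy lower bound on edit distance via shifted comparisons:
    a position of `a` matching `b` under no shift in [-k, k] needs
    its own substitution or deletion."""
    hard = 0
    for i in range(len(a)):
        if all(i + s < 0 or i + s >= len(b) or a[i] != b[i + s]
               for s in range(-k, k + 1)):
            hard += 1
    return hard

def prefilter(a, b, k):
    """Pass a pair on to expensive verification only if it could
    plausibly be within edit distance k."""
    return abs(len(a) - len(b)) <= k and shd_lower_bound(a, b, k) <= k
```

For example, `prefilter("ACGT", "AGT", 1)` accepts the pair (one deletion suffices), while `prefilter("AAAA", "CCCC", 1)` rejects it without running a full edit-distance computation.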
We believe that the DiagAF algorithm can further improve the performance of state-of-the-art sequence alignment software. The source code of DiagAF can be downloaded from https://github.com/BioLab-cz/DiagAF.

Data visualizations have been increasingly used in oral presentations to communicate data patterns to the general public. Clear verbal introductions of visualizations, explaining how to interpret the visually encoded information, are essential to convey the takeaways and avoid misunderstandings. We contribute a series of studies investigating how to effectively introduce visualizations to audiences with varying degrees of visualization literacy. We begin by understanding how people introduce visualizations: we crowdsource 110 introductions of visualizations and categorize them based on their content and structure. From these crowdsourced introductions, we identify different introduction strategies and generate a set of introductions for evaluation. We then conduct experiments to systematically compare the effectiveness of different introduction strategies across four visualizations with 1,080 participants. We find that introductions explaining visual encodings with concrete examples are the most effective. Our study provides both qualitative and quantitative insights into constructing effective verbal introductions of visualizations in presentations, inspiring further research in data storytelling.

We present a novel approach for volume exploration that is versatile yet effective in isolating semantic structures in both noisy and clean data. Specifically, we describe a hierarchical active contours approach based on Bhattacharyya gradient flow that is easier to control, robust to noise, and can incorporate various types of statistical information to drive an edge-agnostic exploration process.
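Bhattacharyya-based flows compare regional intensity distributions rather than edges; for discrete histograms the underlying coefficient (1 for identical distributions, 0 for disjoint ones) is computed as below. This is only the statistic's definition, not the paper's full gradient flow:

```python
import math

def bhattacharyya_coefficient(p, q):
    """Overlap between two normalized discrete histograms."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

inside  = [0.7, 0.2, 0.1]   # hypothetical intensity histogram inside a contour
outside = [0.1, 0.2, 0.7]   # histogram outside it
overlap = bhattacharyya_coefficient(inside, outside)
```

Driving the contour to minimize this overlap separates the interior and exterior statistics, which is what makes the approach edge-agnostic.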
To facilitate a time-bound, user-driven volume exploration process applicable to a wide variety of data sources, we present an efficient multi-GPU implementation that (1) is approximately 400 times faster than a single-threaded CPU implementation, (2) allows hierarchical exploration of 2D and 3D images, (3) supports customization through multidimensional attribute spaces, and (4) is applicable to a variety of data sources and semantic structures. The exploration system follows a two-step process: it first applies active contours to isolate semantically meaningful subsets of the volume, and then applies transfer functions locally to the isolated regions to produce clear and clutter-free visualizations. We show the effectiveness of our approach in isolating and visualizing structures of interest, without any specialized segmentation methods, on a variety of data sources, including 3D optical microscopy, multi-channel optical volumes, abdominal and chest CT, micro-CT, MRI, simulation, and synthetic data. We also gathered feedback from a medical trainee on the usefulness of our approach and discussed potential applications in clinical workflows.

Fine-grained visual recognition classifies objects with visually similar appearances into subcategories and has made great progress with the development of deep CNNs. However, handling subtle differences between subcategories remains a challenge. In this paper, we propose to solve this issue in one unified framework from two aspects: constructing feature-level interrelationships and capturing part-level discriminative features. This framework, namely PArt-guided Relational Transformers (PART), learns discriminative part features with an automatic part discovery module and explores intrinsic correlations with a feature transformation module that adapts Transformer models from the field of natural language processing.
The part discovery module efficiently discovers discriminative regions that are highly correlated with the gradient descent procedure. The feature transformation module then builds correlations within the global embedding and multiple part embeddings, enhancing spatial interactions among semantic pixels. Moreover, our approach does not rely on additional part branches at inference time and reaches state-of-the-art performance on three widely used fine-grained object recognition benchmarks. Experimental results and explainable visualizations demonstrate the effectiveness of our proposed approach.
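The relational module builds on standard Transformer attention. As a generic sketch (not PART's exact architecture), scaled dot-product attention relating a global embedding to a set of part embeddings looks like this:

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention: each query row becomes a convex
    combination of the value rows."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
global_emb = rng.normal(size=(1, 8))   # one global token
part_embs = rng.normal(size=(4, 8))    # four hypothetical part tokens
out = attention(global_emb, part_embs, part_embs)
```

Here the global token attends over the part tokens, so its updated representation mixes in whichever parts score highest; stacking such layers is how Transformer-style modules model interrelationships between embeddings.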

Article authors: Calderonrich7775 (Mathiasen Gustafsson)