Newellwest8593


Moreover, no drop in classification accuracy is observed. The proposed method does not rely on outlier/background data, hyperparameter tuning, temperature calibration, feature extraction, metric learning, adversarial training, ensemble procedures, or generative models. Our experiments showed that the IsoMax loss works as a seamless drop-in replacement for the SoftMax loss that significantly improves neural networks' OOD detection performance. Hence, it may be used as a baseline OOD detection approach and combined with current or future OOD detection techniques to achieve even better results (a sketch of such a drop-in head appears below).

This article presents an adaptive tracking control scheme for nonlinear multiagent systems under a directed graph and state constraints. Integral barrier Lyapunov functionals (iBLFs) are introduced to overcome the conservative limitations of barrier Lyapunov functions built on error variables, relax the feasibility conditions, and simultaneously handle the state constraints and the coupling terms in the communication errors between agents (a representative iBLF functional is shown below). An adaptive distributed controller is designed based on the iBLF and the backstepping method, and the iBLF is differentiated by means of the integral mean value theorem. At the same time, the approximation properties of neural networks are used to estimate the unknown terms, and the stability of the system is proven by Lyapunov stability theory. This scheme not only ensures that the outputs of all followers track the output trajectory of the leader but also keeps the state variables within the constraint bounds, while all closed-loop signals remain bounded. Finally, the efficiency of the proposed controller is demonstrated.

The Cox proportional hazards model has been widely applied to cancer prognosis prediction. Nowadays, multi-modal data, such as histopathological images and gene data, have advanced this field by providing histologic phenotype and genotype information. However, how to efficiently fuse and select the complementary information of high-dimensional multi-modal data remains challenging for the Cox model, as it is generally not equipped with a feature fusion/selection mechanism. Many previous studies typically perform feature fusion/selection in the original feature space before Cox modeling. Alternatively, learning a latent shared feature space that is tailored for the Cox model and simultaneously remains sparse is desirable. In addition, existing Cox-based models commonly pay little attention to the actual length of the observed time, which may help to boost the model's performance. In this article, we propose a novel Cox-driven multi-constraint latent representation learning framework for prognosis analysis with multi-modal data. Specifically, for efficient feature fusion, a multi-modal latent space is learned via a bi-mapping approach under ranking and regression constraints. The ranking constraint utilizes the log-partial likelihood of the Cox model (sketched below) to induce learning discriminative representations in a task-oriented manner. Meanwhile, the representations also benefit from the regression constraint, which imposes the supervision of specific survival times on representation learning. To improve generalization and alleviate overfitting, we further introduce similarity and sparsity constraints to encourage extra consistency and sparseness. Extensive experiments on three datasets acquired from The Cancer Genome Atlas (TCGA) demonstrate that the proposed method is superior to state-of-the-art Cox-based models.
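For the IsoMax paragraph above, here is a minimal sketch of what a distance-based drop-in replacement for the usual linear-plus-SoftMax head can look like. The prototype parameterization, the zero initialization, and the fixed entropic scale are illustrative assumptions, not necessarily the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistanceHead(nn.Module):
    """Classification head whose logits are negative Euclidean distances
    to one learnable prototype per class (an IsoMax-style construction)."""

    def __init__(self, num_features: int, num_classes: int,
                 entropic_scale: float = 10.0):
        super().__init__()
        # Zero initialization and the fixed scale are illustrative choices.
        self.prototypes = nn.Parameter(torch.zeros(num_classes, num_features))
        self.entropic_scale = entropic_scale

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        distances = torch.cdist(features, self.prototypes)  # (batch, classes)
        return -self.entropic_scale * distances             # logits

# Drop-in usage: swap the final nn.Linear of a classifier for this head and
# keep plain cross-entropy; the maximum softmax probability (or the negative
# minimum distance) then serves as the OOD score at test time.
head = DistanceHead(num_features=512, num_classes=10)
features = torch.randn(8, 512)
logits = head(features)
loss = F.cross_entropy(logits, torch.randint(0, 10, (8,)))
ood_score = logits.max(dim=1).values  # higher = more in-distribution
```

Because nothing beyond the head changes, this is consistent with the claim that no outlier data, extra tuning, or calibration is required.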
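For the control paragraph above: as a concrete reference point, a representative iBLF from the barrier-function literature (not necessarily the exact functional used in this article) for a tracking error \(z_1 = x_1 - y_d\) under the state constraint \(|x_1| < k_{c1}\) is

```latex
V_1(z_1, y_d) = \int_{0}^{z_1}
    \frac{\sigma\, k_{c1}^{2}}{k_{c1}^{2} - (\sigma + y_d)^{2}}\,
    \mathrm{d}\sigma
```

Unlike an error-based barrier Lyapunov function, this functional blows up as the state \(x_1 = z_1 + y_d\) approaches \(\pm k_{c1}\) rather than as the error approaches a fixed bound, which is what relaxes the feasibility conditions on the reference trajectory; its time derivative is typically evaluated via the integral mean value theorem, as the paragraph notes.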
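The ranking constraint in the Cox paragraph above is built on the log-partial likelihood of the Cox model. A minimal sketch of that term follows (Breslow-style handling of tied event times, no regularization; the function and variable names are mine):

```python
import torch

def cox_partial_likelihood_loss(risk_scores, times, events):
    """Negative log-partial likelihood of the Cox model (Breslow ties).

    risk_scores: (n,) predicted log-risk per patient
    times:       (n,) observed event or censoring times
    events:      (n,) 1.0 if the event occurred, 0.0 if censored
    """
    order = torch.argsort(times, descending=True)  # longest survivors first
    scores = risk_scores[order]
    observed = events[order]
    # After descending sort, the risk set of patient i is patients 0..i,
    # so the log-sum-exp over each risk set is a cumulative operation.
    log_risk_set = torch.logcumsumexp(scores, dim=0)
    # Only uncensored patients contribute terms to the partial likelihood.
    return -((scores - log_risk_set) * observed).sum() / observed.sum().clamp(min=1)

# Toy usage with random data:
n = 16
loss = cox_partial_likelihood_loss(torch.randn(n),
                                   torch.rand(n),
                                   (torch.rand(n) > 0.3).float())
```

Minimizing this loss pushes patients who experience the event earlier toward higher risk scores, which is exactly a ranking supervision; the framework pairs it with a regression constraint on the actual survival time.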
Bioinspired spiking neural networks (SNNs), operating with asynchronous binary signals (spikes) distributed over time, can potentially lead to greater computational efficiency on event-driven hardware. State-of-the-art SNNs suffer from high inference latency, resulting from inefficient input encoding and suboptimal settings of the neuron parameters (firing threshold and membrane leak). We propose DIET-SNN, a low-latency deep spiking network trained with gradient descent to optimize the membrane leak and the firing threshold along with the other network parameters (weights). The membrane leak and threshold of each layer are optimized with end-to-end backpropagation to achieve competitive accuracy at reduced latency (see the neuron sketch below). The input layer directly processes the analog pixel values of an image without converting them to spike trains. The first convolutional layer converts the analog inputs into spikes: leaky integrate-and-fire (LIF) neurons integrate the weighted inputs and generate an output spike when the membrane potential crosses the trained firing threshold. The trained membrane leak selectively attenuates the membrane potential, which increases activation sparsity in the network. The reduced latency combined with high activation sparsity provides massive improvements in computational efficiency. We evaluate DIET-SNN on image classification tasks from the CIFAR and ImageNet datasets on VGG and ResNet architectures. We achieve 69% top-1 accuracy with five timesteps (inference latency) on the ImageNet dataset with 12x less compute energy than an equivalent standard artificial neural network (ANN). In addition, DIET-SNN performs 20-500x faster inference compared to other state-of-the-art SNN models.

Bayesian non-negative matrix factorization (BNMF) has been widely used in different applications. In this article, we propose a novel BNMF technique dedicated to semibounded data, where each entry of the observed matrix is assumed to follow an Inverted Beta distribution (the generative model is sketched below). The model has two parameter matrices of the same size as the observation matrix, each of which we factorize into a product of excitation and basis matrices; the entries of the basis and excitation matrices follow Gamma priors. To estimate the parameters of the model, variational Bayesian inference is used, with a lower-bound approximation of the objective function providing an analytically tractable solution. An online extension of the algorithm is also proposed for better scalability and to adapt to streaming data. The model is evaluated on five different applications: part-based decomposition, collaborative filtering, market basket analysis (transaction prediction and item classification), topic mining, and graph embedding on biomedical networks.
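For the DIET-SNN paragraph above, here is a minimal sketch of an LIF neuron whose leak and threshold are ordinary trainable parameters, so end-to-end backpropagation can optimize them alongside the weights. The boxcar surrogate gradient, the soft reset, and the scalar per-layer parameterization are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Spike(torch.autograd.Function):
    """Heaviside spike in the forward pass, boxcar surrogate gradient in the
    backward pass so that backpropagation can flow through spike events."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()

class TrainableLIF(nn.Module):
    """LIF neuron whose membrane leak and firing threshold are ordinary
    nn.Parameters, learned end-to-end together with the network weights."""
    def __init__(self, leak: float = 0.9, threshold: float = 1.0):
        super().__init__()
        self.leak = nn.Parameter(torch.tensor(leak))            # per layer
        self.threshold = nn.Parameter(torch.tensor(threshold))  # per layer

    def forward(self, weighted_input, membrane):
        membrane = self.leak * membrane + weighted_input  # leaky integration
        spikes = Spike.apply(membrane - self.threshold)   # fire on crossing
        membrane = membrane - spikes * self.threshold     # soft reset
        return spikes, membrane

# Over T timesteps, the first convolutional layer receives the analog pixel
# values directly and its LIF neurons emit the spike code:
lif, membrane = TrainableLIF(), torch.zeros(8, 64)
for _ in range(5):  # five timesteps, as in the reported ImageNet setting
    spikes, membrane = lif(torch.randn(8, 64), membrane)
```

A learned leak below 1 attenuates stale membrane potential, which is how training can trade accuracy against the activation sparsity that drives the reported energy savings.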
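One plausible reading of the generative model in the BNMF paragraph above (the superscripts distinguishing the two factorizations are my notation): each observed entry is Inverted-Beta distributed, the two entrywise parameter matrices are low-rank products of basis and excitation factors, and all factor entries carry Gamma priors.

```latex
x_{mn} \sim \mathcal{IB}(\alpha_{mn}, \beta_{mn}), \qquad
\mathcal{IB}(x \mid \alpha, \beta)
  = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\,
    \frac{x^{\alpha-1}}{(1+x)^{\alpha+\beta}}, \quad x > 0,
\\[4pt]
\alpha = W^{(\alpha)} H^{(\alpha)}, \qquad
\beta  = W^{(\beta)}  H^{(\beta)}, \qquad
w^{(\cdot)}_{mk},\, h^{(\cdot)}_{kn} \sim \mathrm{Gamma}(a, b).
```

Since the Inverted-Beta likelihood is not conjugate to these priors, the exact posterior is intractable, which is why the article resorts to variational Bayesian inference with a lower-bound approximation of the objective; processing entries incrementally gives the online variant for streaming data.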
Anomaly detection on attributed graphs has received increasing research attention lately due to its broad applications in various high-impact domains, such as cybersecurity, finance, and healthcare. Heretofore, most existing efforts have been performed in an unsupervised manner due to the expensive cost of acquiring anomaly labels, especially in newly formed domains. How to leverage the invaluable auxiliary information from a labeled attributed graph to facilitate anomaly detection in an unlabeled attributed graph is seldom investigated. In this study, we aim to tackle the problem of cross-domain graph anomaly detection with domain adaptation. This task remains nontrivial mainly due to 1) the data heterogeneity, which includes both the topological structure and the nodal attributes of an attributed graph, and 2) the complexity of capturing both invariant and domain-specific anomalies in the target graph. To tackle these challenges, we propose Commander, a novel framework for cross-domain anomaly detection on attributed graphs. Specifically, Commander first compresses the two attributed graphs from different domains into a low-dimensional space via a graph attentive encoder. We then utilize a domain discriminator and an anomaly classifier to detect anomalies that appear across networks from both domains. To further detect the anomalies that appear only in the target network, we develop an attribute decoder that provides additional signals for assessing node abnormality (one possible training step is sketched below). Extensive experiments on various real-world cross-domain graph datasets demonstrate the efficacy of our approach.

This article considers distributed optimization by a group of agents over an undirected network. The objective is to minimize the sum of a twice differentiable convex function and two possibly nonsmooth convex functions, one of which is composed with a bounded linear operator (see the formulation below). A novel distributed primal-dual fixed-point algorithm is proposed based on an adapted metric method, which exploits the second-order information of the differentiable convex function. Furthermore, by incorporating a randomized coordinate-activation mechanism, we propose a randomized asynchronous iterative distributed algorithm that allows each agent to randomly and independently decide whether to perform an update or remain unchanged at each iteration, thus reducing the communication cost. Moreover, the proposed algorithms adopt nonidentical stepsizes to give each agent more independence. Numerical simulation results substantiate the feasibility of the proposed algorithms and the correctness of the theoretical results.

Professional roles for data visualization designers are growing in popularity, and interest in the relationship between the academic research and professional practice communities is gaining traction. However, despite the potential for knowledge sharing between these communities, we have little understanding of the ways in which practitioners design in real-world, professional settings. Inquiry in numerous design disciplines indicates that practitioners approach complex situations in ways that are fundamentally different from those of researchers. In this work, I take a practice-led approach to understanding visualization design practice on its own terms. Twenty data visualization practitioners were interviewed and asked about their design processes, including the steps they take, how they make decisions, and the methods they use. The findings suggest that practitioners do not follow highly systematic processes; instead, they rely on situated forms of knowing and acting, drawing on precedent and using methods and principles that are judged appropriate in the moment. These findings have implications for how visualization researchers understand and engage with practitioners, and for how educators approach the training of future data visualization designers.
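For the Commander paragraph above, a minimal sketch of how its four pieces (shared encoder, anomaly classifier, domain discriminator, attribute decoder) could interact in one training step. The gradient-reversal trick, the module interfaces, the equal loss weighting, and the `graph.x` attribute layout are my assumptions, not the authors' specification.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity forward, negated gradient backward: the classic adversarial
    trick that pushes an encoder toward domain-invariant representations."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

def train_step(encoder, anomaly_clf, domain_disc, attr_decoder,
               src_graph, tgt_graph, src_labels, optimizer):
    # 1) Compress both attributed graphs into a shared low-dimensional space
    #    (the graph attentive encoder's internals are elided here).
    z_src = encoder(src_graph)
    z_tgt = encoder(tgt_graph)

    # 2) Anomaly classifier, supervised only on the labeled source domain.
    loss_cls = F.binary_cross_entropy_with_logits(
        anomaly_clf(z_src).squeeze(-1), src_labels.float())

    # 3) Domain discriminator behind gradient reversal: it tries to tell the
    #    domains apart while the encoder learns to fool it.
    z_all = torch.cat([GradReverse.apply(z_src), GradReverse.apply(z_tgt)])
    dom_labels = torch.cat([torch.zeros(z_src.size(0)),
                            torch.ones(z_tgt.size(0))])
    loss_dom = F.binary_cross_entropy_with_logits(
        domain_disc(z_all).squeeze(-1), dom_labels)

    # 4) Attribute decoder: reconstruction error on the target graph supplies
    #    an extra abnormality signal for target-only anomalies.
    loss_rec = F.mse_loss(attr_decoder(z_tgt), tgt_graph.x)

    loss = loss_cls + loss_dom + loss_rec  # weighting hyperparameters omitted
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time, nodes whose classifier score or attribute-reconstruction error is high would be flagged, covering both the cross-domain and the target-only anomalies the paragraph distinguishes.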
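In symbols, the problem in the distributed-optimization paragraph above can be read as follows (the per-agent splitting notation is mine; the abstract fixes only the three-term structure):

```latex
\min_{x \in \mathbb{R}^{d}} \;
  \sum_{i=1}^{n} \Big( f_i(x) + g_i(x) + h_i(A_i x) \Big),
```

where each \(f_i\) is twice differentiable and convex (its second-order information defines the adapted metric), \(g_i\) and \(h_i\) are possibly nonsmooth convex functions, and \(A_i\) is a bounded linear operator. In the randomized asynchronous variant, each agent flips an independent coin at every iteration to decide whether to compute and communicate an update, which is where the communication savings come from.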
The efficiency of warehouses is vital to e-commerce. Fast order processing at the warehouses ensures timely deliveries and improves customer satisfaction. However, monitoring, analyzing, and manipulating order processing in warehouses in real time are challenging for traditional methods due to the sheer volume of incoming orders, the fuzzy definition of delayed-order patterns, and the complex decision-making involved in order-handling priorities. In this paper, we adopt a data-driven approach and propose OrderMonitor, a visual analytics system that assists warehouse managers in analyzing and improving order-processing efficiency in real time based on streaming warehouse event data. Specifically, the order-processing pipeline is visualized with a novel pipeline design based on a sedimentation metaphor to facilitate real-time order monitoring and to flag potentially abnormal orders. We also design a novel visualization that depicts order timelines based on Gantt charts and Marey's graphs; it helps the managers gain insight into the performance of order processing and find the major blockers for delayed orders. Furthermore, an evaluation view is provided to assist users in inspecting order details and assigning priorities to improve processing performance. The effectiveness of OrderMonitor is evaluated with two case studies on a real-world warehouse dataset.

Article authors: Newellwest8593 (Oneill Jespersen)