Farmerhoover9977


In this article, we present a new approach: instead of using only the number of times a feature has been selected, it considers how many times features have been selected together by a feature selection algorithm. The proposal is based on constructing an undirected graph whose vertices are the features and whose edges count the number of times every pair of features has been selected together. This graph is used to select the best subset of features, avoiding the redundancy introduced by the voting scheme. The proposal improves the results of the standard voting scheme both in ensembles of feature selectors and in data division methods for scaling up feature selection.
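
As a concrete illustration of the co-selection idea (not the authors' implementation), the minimal Python sketch below builds the weighted co-selection graph from several runs of a feature selector and then grows a subset along the heaviest edges; the greedy selection heuristic, function names, and toy data are assumptions made for this example.

```python
from collections import defaultdict
from itertools import combinations

def build_coselection_graph(selected_subsets):
    """Count how often every pair of features is selected together.

    selected_subsets: list of feature subsets, one per run of the
    feature selection algorithm (e.g., one per ensemble member or
    one per data partition).
    """
    weights = defaultdict(int)  # edge weights of the undirected graph
    for subset in selected_subsets:
        for f1, f2 in combinations(sorted(subset), 2):
            weights[(f1, f2)] += 1
    return weights

def pick_subset(weights, k):
    """Greedy illustration: grow a subset of k features along the
    heaviest co-selection edges (a stand-in for the paper's criterion)."""
    if not weights:
        return set()
    # start from the most frequently co-selected pair
    (f1, f2), _ = max(weights.items(), key=lambda kv: kv[1])
    chosen = {f1, f2}
    candidates = {f for pair in weights for f in pair} - chosen
    while len(chosen) < k and candidates:
        def tie_strength(f):
            return sum(weights.get(tuple(sorted((f, c))), 0) for c in chosen)
        best = max(candidates, key=tie_strength)
        chosen.add(best)
        candidates.remove(best)
    return chosen

# toy usage: three runs of a feature selector over features 0..5
runs = [{0, 1, 2}, {0, 1, 3}, {1, 2, 3}]
graph = build_coselection_graph(runs)
print(pick_subset(graph, k=3))
```
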
The multiplayer stochastic noncooperative tracking game (NTG) with conflicting target strategies and the cooperative tracking game (CTG) with a common target strategy of the mean-field stochastic jump-diffusion (MFSJD) system with external disturbance are investigated in this study. Due to the mean (collective) behavior in the system dynamics and cost function, the design of NTG and CTG strategies for target tracking of the MFSJD system is more difficult than for the conventional stochastic system. By the proposed indirect method, the NTG and CTG strategy design problems are transformed into a linear matrix inequalities (LMIs)-constrained multiobjective optimization problem (MOP) and an LMIs-constrained single-objective optimization problem (SOP), respectively. The LMIs-constrained MOP can be solved effectively for all Nash equilibrium solutions of the NTG at the Pareto front by the proposed LMIs-constrained multiobjective evolutionary algorithm (MOEA). Two simulation examples, including share market allocation and network security strategies in cyber-social systems, are given to illustrate the design procedure and validate the effectiveness of the proposed LMI-constrained MOEA for all Nash equilibrium solutions of NTG strategies of the MFSJD system.

The Dempster-Shafer (DS) belief theory constitutes a powerful framework for modeling and reasoning with a wide variety of uncertainties due to its greater expressiveness and flexibility. As in Bayesian probability theory, the DS theoretic (DST) conditional plays a pivotal role in DST strategies for evidence updating and fusion. However, a major limitation in employing the DST framework in practical implementations is the absence of an efficient and feasible computational framework to overcome the prohibitive computational burden that DST operations entail. The work in this article addresses the pressing need for efficient DST conditional computation via the novel computational model DS-Conditional-All. It requires significantly less time and space complexity for computing Dempster's conditional and the Fagin-Halpern conditional, the two most widely utilized DST conditional strategies. It also provides deeper insight into the DST conditional itself and thus acts as a valuable tool for visualizing and analyzing the conditional computation. We provide a thorough analysis and experimental validation of the utility, efficiency, and implementation of the proposed data structure and algorithms. A new computational library, which we refer to as DS-Conditional-One and DS-Conditional-All (DS-COCA), is developed and harnessed in the simulations.
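
For readers unfamiliar with the two conditionals named above, the following brute-force sketch (independent of the DS-COCA library, whose data structures are not reproduced here) computes Dempster's conditional of a basic belief assignment and the Fagin-Halpern conditional belief on a small frame of discernment; the frozenset-keyed dictionary representation and the toy mass assignment are illustrative assumptions.

```python
def belief(m, A):
    """Bel(A): total mass of the nonempty focal elements contained in A."""
    return sum(v for C, v in m.items() if C and C <= A)

def plausibility(m, A):
    """Pl(A): total mass of the focal elements intersecting A."""
    return sum(v for C, v in m.items() if C & A)

def dempster_conditional(m, B):
    """Dempster's conditioning of the mass function m by the event B:
    m(A | B) is proportional to the total mass of the focal elements C
    with C ∩ B = A, renormalized by the non-conflicting mass."""
    cond, conflict = {}, 0.0
    for C, v in m.items():
        A = C & B
        if A:
            cond[A] = cond.get(A, 0.0) + v
        else:
            conflict += v
    return {A: v / (1.0 - conflict) for A, v in cond.items()}

def fagin_halpern_belief(m, A, B):
    """Fagin-Halpern conditional belief:
    Bel(A | B) = Bel(A ∩ B) / (Bel(A ∩ B) + Pl(B \\ A))."""
    num = belief(m, A & B)
    den = num + plausibility(m, B - A)
    return num / den if den else 0.0

# toy usage on the frame of discernment {a, b, c}
frame = frozenset({"a", "b", "c"})
m = {frozenset({"a"}): 0.4,        # basic belief assignment (masses sum to 1)
     frozenset({"b", "c"}): 0.3,
     frame: 0.3}
B = frozenset({"a", "b"})
print(dempster_conditional(m, B))                      # mass after conditioning on B
print(fagin_halpern_belief(m, frozenset({"a"}), B))    # Bel({a} | B)
```
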
Spectral Doppler measurements are an important part of the standard echocardiographic examination. These measurements give insight into myocardial motion and blood flow, providing clinicians with parameters for diagnostic decision making. Many of these measurements are performed automatically with high accuracy, increasing the efficiency of the diagnostic pipeline. However, full automation is not yet available because the user must manually select which measurement should be performed on each image. In this work, we develop a pipeline based on convolutional neural networks (CNNs) to automatically classify the measurement type from cardiac Doppler scans. We show how the multi-modal information in each spectral Doppler recording can be combined using a meta-parameter post-processing mapping scheme and heatmaps to encode coordinate locations. Additionally, we experiment with several architectures to examine the tradeoff between accuracy, speed, and memory usage for resource-constrained environments. Finally, we propose a confidence metric using the values in the last fully connected layer of the network and show that our confidence metric can prevent many misclassifications. Our algorithm enables a fully automatic pipeline from acquisition to Doppler spectrum measurements. We achieve 96% accuracy on a test set drawn from separate clinical sites, indicating that the proposed method is suitable for clinical adoption.

This article investigates the stability of switched neural networks (SNNs) with a time-varying delay. To effectively guarantee the stability of the considered system with unstable subsystems and reduce the conservatism of the stability criteria, admissible edge-dependent average dwell time (AED-ADT) is first utilized to restrict the switching signals of the continuous-time SNNs, and multiple Lyapunov-Krasovskii functionals (LKFs) combined with relaxed integral inequalities are employed to develop two novel, less conservative stability conditions. Finally, numerical examples clearly indicate that the proposed criteria can reduce conservatism and ensure the stability of continuous-time SNNs.

Multiview learning has shown its superiority in visual classification compared with single-view-based methods. In particular, due to their powerful representation capacity, Gaussian process latent variable model (GPLVM)-based multiview approaches have achieved outstanding performance. However, most of them only follow the assumption that the shared latent variables can be generated from or projected to the multiple observations, but fail to exploit harmonization in the back constraint and to adaptively learn a classifier according to the learned variables, which results in performance degradation. To tackle these two issues, in this article we propose a novel harmonization shared autoencoder GPLVM with a relaxed Hamming distance (HSAGP-RHD). Specifically, an autoencoder structure with a Gaussian process (GP) prior is first constructed to learn the shared latent variable for multiple views. To enforce agreement among the views in the encoder, a harmonization constraint is embedded into the model by enforcing consistency of the view-specific similarities. Furthermore, we also propose a novel discriminative prior, which is directly imposed on the latent variable to simultaneously learn the fused features and an adaptive classifier in a unified model. In detail, the centroid matrix corresponding to the centroids of different categories is first obtained. A relaxed Hamming distance (RHD)-based measurement is subsequently presented to measure the similarity and dissimilarity between the latent variable and the centroids, which not only allows us to obtain closed-form solutions but also encourages points belonging to the same class to be close while those belonging to different classes are far apart. Owing to this novel prior, the category of an out-of-sample point can also be simply assigned in the testing phase. Experimental results conducted on three real-world data sets demonstrate the effectiveness of the proposed method compared with state-of-the-art approaches.

Multiview subspace clustering has attracted an increasing amount of attention in recent years. However, most existing multiview subspace clustering methods assume linear relations between multiview data points when learning the affinity representation by means of self-expression, or fail to preserve the locality property of the original feature space in the learned affinity representation. To address these issues, in this article we propose a new multiview subspace clustering method termed smoothness regularized multiview subspace clustering with kernel learning (SMSCK). To capture the nonlinear relations between multiview data points, the proposed model maps the concatenated multiview observations into a high-dimensional kernel space, in which linear relations reflect the nonlinear relations between multiview data points in the original space. In addition, to explicitly preserve the locality property of the original feature space in the learned affinity representation, a smoothness regularization is deployed in the subspace learning in the kernel space. Theoretical analysis is provided to ensure that the optimal solution of the proposed model satisfies the grouping effect. The unique optimal solution of the proposed model can be obtained by an optimization strategy, and a theoretical convergence analysis is also conducted. Extensive experiments are conducted on both image and document data sets, and comparison results with state-of-the-art methods demonstrate the effectiveness of our method.

With the rapid development of sensor technologies, multisensor signals are now readily available for health condition monitoring and remaining useful life (RUL) prediction. To fully utilize these signals for better health condition assessment and RUL prediction, health indices are often constructed through various data fusion techniques. Nevertheless, most existing methods fuse signals linearly, which may not be sufficient to characterize the health status for RUL prediction. To address this issue and improve predictability, this article proposes a novel nonlinear data fusion approach, namely a shape-constrained neural data fusion network for health index construction. Specifically, a neural network-based structure is employed, and a novel loss function is formulated by simultaneously considering the monotonicity and curvature of the constructed health index and its variability at the failure time. A tailored adaptive moment estimation (Adam) algorithm is proposed for model parameter estimation. The effectiveness of the proposed method is demonstrated and compared through a case study using the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) data set.
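
The exact loss of the shape-constrained network is not reproduced here, but a minimal sketch of the general idea might penalize the discretized first and second differences of the constructed health index and its spread at failure time, as below; the penalty weights, toy network, and data shapes are assumptions made for illustration.

```python
import torch

def shape_constrained_loss(h, y, lam_mono=1.0, lam_curv=0.1, lam_fail=1.0):
    """Illustrative loss for a fused health index.

    h: (units, time) tensor of health-index values produced by the fusion
       network for several run-to-failure units, aligned at failure time.
    y: (units, time) tensor of supervision targets (e.g., normalized degradation).
    """
    fit = torch.mean((h - y) ** 2)          # fitting term
    d1 = h[:, 1:] - h[:, :-1]               # discretized first difference
    mono = torch.mean(torch.relu(-d1))      # penalize decreases (monotonicity)
    d2 = d1[:, 1:] - d1[:, :-1]             # discretized second difference
    curv = torch.mean(torch.relu(-d2))      # penalize concavity (curvature)
    fail = torch.var(h[:, -1])              # spread across units at failure time
    return fit + lam_mono * mono + lam_curv * curv + lam_fail * fail

# toy usage with a tiny fusion network mapping sensor readings to an index
net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
                          torch.nn.Linear(8, 1))
x = torch.randn(3, 50, 4)                       # 3 units, 50 time steps, 4 sensors
y = torch.linspace(0, 1, 50).expand(3, 50)      # target index rising to 1 at failure
h = net(x).squeeze(-1)                          # (3, 50) constructed health index
loss = shape_constrained_loss(h, y)
loss.backward()                                 # gradients for Adam-style updates
```
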
In this article, a manifold learning algorithm based on straight-like geodesics and local coordinates is proposed, called SGLC-ML for short. The contribution and innovation of SGLC-ML are twofold. First, SGLC-ML divides the manifold data into a number of straight-like geodesics, instead of a number of local areas as many manifold learning algorithms do. Figuratively speaking, SGLC-ML covers the manifold data set with a sparse net woven from threads (straight-like geodesics), while other manifold learning algorithms cover it with a tight roof made of tiles (local areas). Second, SGLC-ML maps all straight-like geodesics into straight lines of a low-dimensional Euclidean space. All these straight lines start from the same point and extend along the same coordinate axis. These straight lines are exactly the local coordinates of the straight-like geodesics as described in the mathematical definition of a manifold. With the help of local coordinates, dimensionality reduction can be divided into two relatively simple processes: calculation and alignment of local coordinates. However, many manifold learning algorithms seem to ignore the advantages of local coordinates. Experimental results comparing SGLC-ML with other state-of-the-art algorithms are presented to verify the good performance of SGLC-ML.
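
As a rough sketch of only the first ingredient (and not of the full SGLC-ML algorithm), the code below approximates a geodesic in a point cloud by a shortest path on a k-nearest-neighbor graph and records each point's arc length along it, which could serve as a local coordinate; the neighborhood size, choice of endpoints, and toy data are illustrative assumptions.

```python
import heapq
import numpy as np

def knn_graph(X, k):
    """Adjacency list of a symmetric k-nearest-neighbor graph with
    Euclidean edge lengths."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    adj = {i: [] for i in range(len(X))}
    for i in range(len(X)):
        for j in np.argsort(d[i])[1:k + 1]:     # skip the point itself
            adj[i].append((int(j), d[i, j]))
            adj[int(j)].append((i, d[i, j]))
    return adj

def geodesic(adj, src, dst):
    """Shortest path src -> dst (Dijkstra); returns the node sequence and
    the cumulative arc length of each node along it."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        du, u = heapq.heappop(heap)
        if u == dst:
            break
        if du > dist.get(u, np.inf):            # stale heap entry
            continue
        for v, w in adj[u]:
            if du + w < dist.get(v, np.inf):
                dist[v], prev[v] = du + w, u
                heapq.heappush(heap, (dist[v], v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    path.reverse()
    return path, [dist[p] for p in path]

# toy usage: points sampled along a noisy 1-D curve embedded in 2-D
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3, 60))
X = np.c_[t, np.sin(t)] + 0.01 * rng.normal(size=(60, 2))
path, coords = geodesic(knn_graph(X, k=5), src=0, dst=59)
print(len(path), coords[-1])   # arc length plays the role of a local coordinate
```
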

Article authors: Farmerhoover9977 (Hansen Hartvigsen)