
Recently, many convolutional neural network (CNN) methods have been designed for hyperspectral image (HSI) classification, since CNNs are able to produce good representations of data, which greatly benefits from a huge number of parameters. However, solving such a high-dimensional optimization problem often requires a large number of training samples to avoid overfitting. In addition, it is a typical nonconvex problem affected by many local minima and flat regions. To address these problems, in this article we introduce naive Gabor networks, or Gabor-Nets, which, for the first time in the literature, design and learn CNN kernels strictly in the form of Gabor filters, aiming to reduce the number of involved parameters, constrain the solution space, and hence improve the performance of CNNs. Specifically, we develop a phase-induced Gabor kernel, designed to perform Gabor feature learning via a linear combination of local low-frequency and high-frequency components of the data controlled by the kernel phase. With this kernel, the proposed Gabor-Nets can automatically adapt to the local harmonic characteristics of the HSI data and thus yield more representative harmonic features. The kernel also fulfills traditional complex-valued Gabor filtering in a real-valued manner, so Gabor-Nets can run in a standard CNN pipeline. We evaluated the newly developed Gabor-Nets on three well-known HSIs; the results suggest that Gabor-Nets can significantly improve the performance of CNNs, particularly with a small training set.
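As a rough illustration of the general idea of learning convolution kernels constrained to Gabor form (a simplified sketch, not the authors' exact phase-induced kernel), the following PyTorch snippet generates real-valued 2-D kernels from learnable frequency, orientation, phase, and width parameters; all names and default values are assumptions for illustration.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaborConv2d(nn.Module):
    """2-D convolution whose kernels are generated from learnable Gabor
    parameters (frequency, orientation, phase, Gaussian width) instead of
    free-form weights, so far fewer parameters are learned."""

    def __init__(self, in_channels, out_channels, kernel_size=5):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        n = in_channels * out_channels
        # One (frequency, orientation, phase, width) tuple per kernel slice.
        self.freq = nn.Parameter(torch.rand(n) * 0.5)                    # cycles/pixel
        self.theta = nn.Parameter(torch.rand(n) * math.pi)               # orientation
        self.phase = nn.Parameter(torch.rand(n) * 2 * math.pi)           # phase offset
        self.sigma = nn.Parameter(torch.full((n,), kernel_size / 4.0))   # envelope width

    def _make_kernels(self):
        k = self.kernel_size
        coords = torch.arange(k, dtype=torch.float32, device=self.freq.device) - (k - 1) / 2.0
        y, x = torch.meshgrid(coords, coords, indexing="ij")             # (k, k) grids
        theta = self.theta.view(-1, 1, 1)
        sigma = self.sigma.view(-1, 1, 1)
        freq = self.freq.view(-1, 1, 1)
        phase = self.phase.view(-1, 1, 1)
        # Rotate coordinates into each filter's orientation.
        x_r = x * torch.cos(theta) + y * torch.sin(theta)
        y_r = -x * torch.sin(theta) + y * torch.cos(theta)
        # Real-valued Gabor: Gaussian envelope times a cosine carrier with a phase term.
        kernels = torch.exp(-(x_r ** 2 + y_r ** 2) / (2 * sigma ** 2)) \
                  * torch.cos(2 * math.pi * freq * x_r + phase)
        return kernels.view(self.out_channels, self.in_channels, k, k)

    def forward(self, inputs):
        return F.conv2d(inputs, self._make_kernels(), padding=self.kernel_size // 2)

# Example: an HSI patch with 30 spectral bands treated as input channels.
x = torch.randn(2, 30, 9, 9)
out = GaborConv2d(30, 16)(x)   # -> (2, 16, 9, 9)
```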
In this article, we propose an alternating-directional 3-D quasi-recurrent neural network for hyperspectral image (HSI) denoising, which can effectively embed the domain knowledge of structural spatiospectral correlation and global correlation along the spectrum (GCS). Specifically, 3-D convolution is utilized to extract structural spatiospectral correlation in an HSI, while a quasi-recurrent pooling function is employed to capture the GCS. Moreover, the alternating directional structure is introduced to eliminate the causal dependence with no additional computational cost. The proposed model is capable of modeling spatiospectral dependence while preserving flexibility toward HSIs with an arbitrary number of bands. Extensive experiments on HSI denoising demonstrate significant improvement over the state of the art under various noise settings, in terms of both restoration accuracy and computation time. Our code is available at https://github.com/Vandermode/QRNN3D.

Deep neural networks (DNNs) have thrived in recent years, and batch normalization (BN) plays an indispensable role in them. However, BN is costly because of heavy reduction and elementwise operations that are hard to execute in parallel, which substantially slows down training. To address this issue, in this article we propose a methodology that alleviates BN's cost by using only a few sampled or generated data points for mean and variance estimation at each iteration. The key challenge is achieving a satisfactory balance between normalization effectiveness and execution efficiency: effectiveness calls for less data correlation in sampling, while efficiency calls for more regular execution patterns. To this end, we design two categories of approaches that sample or create a few uncorrelated data points for statistics estimation under certain strategy constraints. The former includes "batch sampling" (BS), which randomly selects a few samples from each batch, and "feature sampling" (FS), which randomly selects a small patch from each feature map of all samples; the latter is "virtual data set normalization" (VDN), which generates a few synthetic random samples to directly create uncorrelated data for statistics estimation. Accordingly, multiway strategies are designed to reduce the data correlation for accurate estimation and, at the same time, optimize the execution pattern for faster running. The proposed methods are comprehensively evaluated on various DNN models, where the loss in model accuracy and convergence rate is negligible. Without the support of any specialized libraries, 1.98x BN-layer acceleration and a 23.2% overall training speedup can be achieved in practice on modern GPUs. Furthermore, our methods perform well on the well-known "micro-BN" problem with tiny batch sizes. This article provides a promising solution for the efficient training of high-performance DNNs.
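A minimal sketch of the batch-sampling (BS) idea described above, assuming a plain functional form rather than the authors' implementation: BN statistics are estimated from a random fraction of the batch and then applied to every sample. The function name, sampling ratio, and tensor layout are illustrative assumptions.

```python
import torch

def sampled_batch_norm(x, gamma, beta, sample_ratio=0.25, eps=1e-5):
    """Batch normalization whose mean/variance are estimated from a random
    subset of the batch ("batch sampling"), then applied to every sample.

    x:     (N, C, H, W) activations
    gamma: (C,) learnable scale
    beta:  (C,) learnable shift
    """
    n = x.size(0)
    k = max(1, int(n * sample_ratio))
    idx = torch.randperm(n, device=x.device)[:k]      # random subset of the batch
    subset = x[idx]
    # Statistics computed only over the sampled subset.
    mean = subset.mean(dim=(0, 2, 3), keepdim=True)
    var = subset.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma.view(1, -1, 1, 1) * x_hat + beta.view(1, -1, 1, 1)

# Example usage with random data:
x = torch.randn(32, 16, 8, 8)
gamma, beta = torch.ones(16), torch.zeros(16)
y = sampled_batch_norm(x, gamma, beta)
```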
This article investigates the robust exponential stability of fuzzy switched memristive inertial neural networks (FSMINNs) with time-varying delays under a mode-dependent destabilizing impulsive control protocol. The memristive model presented here is treated as a switched system rather than employing the theory of differential inclusions and set-valued maps. To optimize the robust exponential stabilization process and reduce the time cost, hybrid mode-dependent destabilizing impulsive and adaptive feedback controllers are applied simultaneously to stabilize FSMINNs. In the new model, multiple impulsive effects may occur between two switched modes, and multiple switched effects may likewise occur between two impulsive instants. Based on switched analysis techniques, the Takagi-Sugeno (T-S) fuzzy method, and the average dwell time, extended robust exponential stability conditions are derived. Finally, a simulation is provided to illustrate the effectiveness of the results.

Concept drift refers to changes in the distribution of the underlying data and is an inherent property of evolving data streams. Ensemble learning with dynamic classifiers has proved to be an efficient way of handling concept drift. However, the best way to create and maintain ensemble diversity on evolving streams is still a challenging problem. In contrast to estimating diversity via inputs, outputs, or classifier parameters, we propose a diversity measure based on whether the ensemble members agree on the probability of a regional distribution change. In our method, estimates of regional distribution changes are used as instance weights. Constructing different region sets through different schemes leads to different drift estimation results, thereby creating diversity, and the classifiers that disagree the most are selected to maximize diversity. Accordingly, an instance-based ensemble learning algorithm, called the diverse instance-weighting ensemble (DiwE), is developed to address concept drift in data stream classification problems. Evaluations on various synthetic and real-world data stream benchmarks show the effectiveness and advantages of the proposed algorithm.

Conventional subspace clustering methods obtain an explicit data representation that captures the global structure of the data and cluster via the associated subspaces. However, due to their intrinsic linearity and fixed structure, the advantages of the prior structure are limited. To address this problem, in this brief we embed structured graph learning with adaptive neighbors into deep autoencoder networks, yielding an adaptive deep clustering approach, namely autoencoder-constrained clustering with adaptive neighbors (ACC_AN). The proposed method not only adaptively investigates the nonlinear structure of the data via a parameter-free graph built upon deep features, but also iteratively strengthens the correlations among the deep representations during learning. In addition, the local structure of the raw data is preserved by minimizing the reconstruction error. Compared to state-of-the-art works, ACC_AN is the first deep clustering method embedded with adaptive structured graph learning to update the latent data representation and the structured deep graph simultaneously.

Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in Euclidean space. However, an increasing number of applications generate data from non-Euclidean domains, represented as graphs with complex relationships and interdependencies between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies extending deep learning approaches to graph data have emerged. In this article, we provide a comprehensive overview of graph neural networks (GNNs) in the data mining and machine learning fields. We propose a new taxonomy that divides the state-of-the-art GNNs into four categories, namely recurrent GNNs, convolutional GNNs, graph autoencoders, and spatial-temporal GNNs. We further discuss the applications of GNNs across various domains and summarize open-source code, benchmark data sets, and model evaluation for GNNs. Finally, we propose potential research directions in this rapidly growing field.
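The GNN overview above names convolutional GNNs as one of its four categories. As a generic, minimal example of that family (the common normalized-adjacency graph convolution, not code from any surveyed paper), here is a small PyTorch sketch; the layer name and normalization choice are assumptions.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """A basic graph convolution: H' = ReLU(A_hat @ H @ W), where A_hat is the
    symmetrically normalized adjacency matrix with self-loops."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias=False)

    def forward(self, h, adj):
        # Add self-loops and symmetrically normalize: D^{-1/2} (A + I) D^{-1/2}.
        a = adj + torch.eye(adj.size(0), device=adj.device)
        deg_inv_sqrt = a.sum(dim=1).pow(-0.5)
        a_hat = deg_inv_sqrt.unsqueeze(1) * a * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(a_hat @ self.linear(h))

# Example: 5 nodes, 8 input features, 4 output features.
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()          # make the graph undirected
h = torch.randn(5, 8)
out = GraphConvLayer(8, 4)(h, adj)
```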
This article studies the stability in probability of probabilistic Boolean networks and the stabilization in probability of probabilistic Boolean control networks. To simulate more realistic cellular systems, the probability of stability/stabilization is not required to be one. In this situation, the target state does not necessarily transfer to itself with probability one, which makes this a challenging extension of the traditional probability-one problem, in which the self-transfer probability of the target state must be one. Some necessary and sufficient conditions are proposed via the semitensor product of matrices, and illustrative examples are given to show the effectiveness of the derived results.

Limb viscoelasticity is a critical factor used to regulate interaction with the environment. It plays a key role in modelling human sensorimotor control and can be used to assess the condition of healthy and neurologically affected individuals. This paper reports the estimation of hip joint viscoelasticity during voluntary force control using a novel device that applies a leg displacement without constraining the hip joint. The influence of hip angle, applied limb force, and perturbation direction on the stiffness and viscosity values was studied in ten subjects. No difference was detected in hip joint stiffness between the dominant and non-dominant legs, but a small dependency on the perturbation direction was observed. Both hip stiffness and viscosity increased monotonically with the applied force magnitude, with posture observed to have only a slight influence. These results are in line with previous measurements carried out on upper limbs and can be used as a baseline for lower limb movement simulation and further neuromechanical investigations.

Recent research has demonstrated improved performance of brain-computer interfaces (BCIs) using fusion-based approaches. This paper proposes a novel decision-making selector (DMS) that integrates the classification decisions of different frequency recognition methods based on canonical correlation analysis (CCA), which are used in decoding steady-state visual evoked potentials (SSVEPs). Methods: The DMS selects the decision more likely to be correct from two methods, denoted M1 and M2, by separating the M1-false and M2-false trials. To measure the uncertainty of each decision, feature vectors were extracted using the largest and second-largest correlation coefficients corresponding to all the stimulus frequencies. The proposed method was evaluated by integrating all pairs of seven CCA-based algorithms, including CCA, individual template-based CCA (ITCCA), multi-set CCA (MsetCCA), L1-regularized multi-way CCA (L1-MCCA), filter bank CCA (FBCCA), extended CCA (ECCA), and task-related component analysis (TRCA). Main results: The experimental results obtained from a 40-target dataset of thirty-five subjects showed that the proposed DMS achieves enhanced performance by integrating algorithms with similar accuracies.
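The selector described above operates on top of CCA-based SSVEP frequency recognition. As a sketch of that underlying building block only (not the proposed DMS itself), the following Python code scores each candidate stimulus frequency by the largest canonical correlation between an EEG segment and sine/cosine references; the sampling rate, frequencies, and harmonic count are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_correlation(eeg, ref):
    """Largest canonical correlation between an EEG segment (samples x channels)
    and a reference signal set (samples x 2*harmonics)."""
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(eeg, ref)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

def recognize_frequency(eeg, freqs, fs=250, n_harmonics=3):
    """Return the stimulus frequency whose sine/cosine references correlate best
    with the EEG segment (standard CCA-based SSVEP recognition)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        # Reference set: sines and cosines at the frequency and its harmonics.
        ref = np.column_stack(
            [np.sin(2 * np.pi * f * (h + 1) * t) for h in range(n_harmonics)]
            + [np.cos(2 * np.pi * f * (h + 1) * t) for h in range(n_harmonics)]
        )
        scores.append(cca_correlation(eeg, ref))
    return freqs[int(np.argmax(scores))], scores

# Example with synthetic data: a 10 Hz SSVEP buried in noise, 8 channels, 1 s of data.
fs, f_true = 250, 10.0
t = np.arange(fs) / fs
eeg = 0.5 * np.sin(2 * np.pi * f_true * t)[:, None] + np.random.randn(fs, 8)
best, _ = recognize_frequency(eeg, freqs=[8.0, 10.0, 12.0, 15.0], fs=fs)
```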
