Krogduggan7285

From Iurium Wiki

We assess its application using a dataset containing over 10,000 manually annotated snore events from 9 subjects, and show that, when using the American Academy of Sleep Medicine Manual standard, two sleep technologists can achieve an F1-score of 0.88 when identifying the presence of snore events. In addition, we drafted rules for marking snore boundaries and showed that one sleep technologist can achieve an F1-score of 0.94 on the same task. Finally, we compared this protocol against the protocol used to evaluate sleep spindle detection and highlighted the differences.

Electroencephalogram (EEG)-based seizure type classification has received far less attention than seizure detection, even though it is very important for the diagnosis and prognosis of epileptic patients. The minuscule changes reflected in EEG signals among different seizure types make such tasks more challenging. Therefore, in this work, underlying features in EEG have been explored by decomposing signals into multiple subcomponents, which have then been used to generate 2D input images for a deep learning (DL) pipeline. The Hilbert vibration decomposition (HVD) has been employed to decompose the EEG signals while preserving phase information. Next, 2D images have been generated from the first three subcomponents with the highest energy, by applying the continuous wavelet transform and converting the results into 2D images for DL inputs. For classification, a hybrid DL pipeline has been constructed by combining a convolutional neural network (CNN) followed by long short-term memory (LSTM) for efficient extraction of spatial and time-sequence information. Experimental validation has been conducted by classifying five types of seizures and seizure-free EEG, collected from the Temple University EEG dataset (TUH v1.5.2). The proposed method has achieved a classification accuracy of up to 99% along with an F1-score of 99%.
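HVD relies on the analytic signal to track instantaneous amplitude and phase. Below is a minimal numpy-only sketch of that Hilbert-transform step, a simplified illustration rather than the full HVD iteration; the toy AM signal and sampling rate are made up for demonstration:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT-based Hilbert transform (numpy only)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Toy EEG-like signal: a 10 Hz carrier with a slow amplitude modulation.
fs = 256.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) * (1.0 + 0.5 * np.sin(2 * np.pi * 1 * t))

z = analytic_signal(x)
envelope = np.abs(z)                            # instantaneous amplitude
phase = np.unwrap(np.angle(z))                  # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency (Hz)
```

The envelope tracks the slow modulation and the instantaneous frequency stays near the 10 Hz carrier, which is the kind of phase-preserving information HVD exploits when peeling off subcomponents.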
Further analysis shows that the HVD-based decomposition and the hybrid DL model can efficiently extract in-depth features while classifying different types of seizures. In a comparative study, the proposed approach demonstrates its superiority by achieving the best performance.

Band selection (BS) effectively reduces the spectral dimension of a hyperspectral image (HSI) by selecting relatively few representative bands, which allows efficient processing in subsequent tasks. Existing unsupervised BS methods based on subspace clustering are built on matrix-based models, where each band is reshaped as a vector. They encode the correlation of data only in the spectral mode (dimension) and neglect the strong correlations between different modes, i.e., the spatial modes and the spectral mode. Another issue is that the subspace representation of bands is performed in the raw data space, where the dimension is often excessively high, resulting in less efficient and less robust performance. To address these issues, in this article we propose a tensor-based subspace clustering model for hyperspectral BS. Our model is developed on the well-known Tucker decomposition. The three factor matrices and the core tensor in our model jointly encode the multimode correlations of the HSI, avoiding destruction of the tensor structure and loss of information. In addition, we propose well-motivated heterogeneous regularizations (HRs) on the factor matrices by taking into account the important local and global properties of the HSI along the three dimensions, which facilitates the learning of the intrinsic cluster structure of bands in the low-dimensional subspaces.
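The Tucker decomposition underlying the model can be illustrated with a small truncated higher-order SVD (HOSVD) in numpy. The toy cube sizes and multilinear ranks below are assumptions for demonstration; the band-mode factor plays the role of the low-dimensional band representation:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: one orthonormal factor per mode plus a core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Toy "HSI" cube (height x width x bands) with exact multilinear rank (3, 3, 2).
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3, 2))
A = rng.standard_normal((10, 3))
B = rng.standard_normal((12, 3))
C = rng.standard_normal((20, 2))
X = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)

core, (Ua, Ub, Uc) = hosvd(X, (3, 3, 2))
X_hat = np.einsum('abc,ia,jb,kc->ijk', core, Ua, Ub, Uc)
# Uc (20 x 2) is the low-dimensional representation of the 20 bands,
# analogous to the latent space in which band correlations are learned.
```

Because the toy cube has exact multilinear rank matching the chosen ranks, the reconstruction is exact; on real HSI data the truncation would instead give a compressed approximation.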
Instead of learning the correlations of bands in the original domain, as is common for matrix-based models, our model naturally learns the band correlations in a low-dimensional latent feature space, which is derived from the projections of the two factor matrices associated with the spatial dimensions, leading to a computationally efficient model. More importantly, the latent feature space is learned in a unified framework. We also develop an efficient algorithm to solve the resulting model. Experimental results on benchmark datasets demonstrate that our model yields improved performance compared to the state of the art.

Nonnegative matrix factorization (NMF) is a widely used data analysis technique and has yielded impressive results in many real-world tasks. Generally, existing NMF methods represent each sample with several centroids and find the optimal centroids by minimizing the sum of the residual errors. However, outliers deviating from the normal data distribution may have large residues and thus dominate the objective value. In this study, an entropy minimizing matrix factorization (EMMF) framework is developed to tackle this problem. Considering that outliers are usually much fewer than the normal samples, a new entropy loss function is established for matrix factorization, which minimizes the entropy of the residue distribution and allows a few samples to have large errors. In this way, the outliers do not affect the approximation of normal samples. Multiplicative updating rules for EMMF are derived, and their convergence is proven theoretically. In addition, a graph-regularized version of EMMF (G-EMMF) is also presented, which uses a data graph to capture the data relationships.
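For context, the standard Lee–Seung multiplicative updates that EMMF's update rules extend look like the following. This is a baseline sketch minimizing the plain squared residual, not the entropy loss itself, and all sizes and iteration counts are illustrative:

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=200, eps=1e-9, seed=0):
    """Standard multiplicative-update NMF minimizing ||X - W H||_F^2.

    The updates multiply each factor by a ratio of nonnegative terms,
    so W and H stay elementwise nonnegative throughout.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
X = rng.random((30, 20))                  # toy nonnegative data
W, H = nmf_multiplicative(X, r=5)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

EMMF keeps this multiplicative-update structure but replaces the sum-of-squared-residuals objective with the entropy of the residue distribution, so a few outlier samples may keep large errors without dragging the factorization toward them.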
Clustering results on various synthetic and real-world datasets demonstrate the advantages of the proposed models, and their effectiveness is also verified through comparison with state-of-the-art methods.

The problem of neural adaptive distributed formation control is investigated for quadrotor multiple unmanned aerial vehicles (UAVs) subject to unmodeled dynamics and disturbance. The quadrotor UAV system is divided into two parts: the position subsystem and the attitude subsystem. A virtual position controller based on backstepping is designed to address the coupling constraints and generate two command signals for the attitude subsystem. By establishing a communication mechanism between the UAVs and the virtual leader, a distributed formation scheme, which uses the UAVs' local information and makes each UAV update its position and velocity according to the information of neighboring UAVs, is proposed to achieve the required formation flight. By designing a neural adaptive sliding mode controller (SMC) for the multi-UAV system, the compound uncertainties (including nonlinearities, unmodeled dynamics, and external disturbances) are compensated for to guarantee good tracking performance. Lyapunov theory is used to prove that the tracking error of each UAV converges to an adjustable neighborhood of zero. Finally, simulation results demonstrate the effectiveness of the proposed scheme.

Due to the complexity of the ocean environment, an autonomous underwater vehicle (AUV) is disturbed by obstacles when performing tasks. Therefore, research on underwater obstacle detection and avoidance is particularly important. Based on the images collected by a forward-looking sonar on an AUV, this article proposes an obstacle detection and avoidance algorithm. First, a deep learning-based obstacle candidate area detection algorithm is developed. This algorithm uses the You Only Look Once (YOLO) v3 network to determine obstacle candidate areas in a sonar image.
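The neighbor-based position update in the distributed formation scheme described above can be illustrated with a minimal first-order consensus sketch. This is an illustration only, not the paper's backstepping/neural-adaptive SMC controller; the topology, gains, and desired offsets below are all made up:

```python
import numpy as np

# Desired formation offsets (a 2x2 square) relative to the virtual leader.
offsets = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
# Undirected communication graph: who exchanges information with whom.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
leader = np.array([5.0, 5.0])       # virtual leader position
pos = np.random.default_rng(2).random((4, 2)) * 10.0
gain, leader_gain, dt = 1.0, 1.0, 0.05

for _ in range(600):
    err = pos - offsets                    # formation-relative positions
    u = -leader_gain * (err - leader)      # pull toward the virtual leader
    for i in range(4):
        for j in range(4):
            if A[i, j]:
                u[i] -= gain * (err[i] - err[j])   # align with neighbors
    pos = pos + dt * u                     # first-order integration step
```

Each UAV uses only its neighbors' information plus the leader reference, and the positions converge to the leader position shifted by the formation offsets; the paper's controller additionally compensates the quadrotor dynamics and disturbances that this toy integrator ignores.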
Then, in the determined obstacle candidate areas, an obstacle detection algorithm based on an improved threshold segmentation algorithm is used to detect obstacles accurately. Finally, using the obstacle detection results obtained from the sonar images, an obstacle avoidance algorithm based on deep reinforcement learning (DRL) is developed to plan a reasonable obstacle avoidance path for the AUV. Experimental results show that the proposed algorithms improve the obstacle detection accuracy and the processing speed of sonar images. At the same time, the proposed algorithms ensure AUV navigation safety in a complex obstacle environment.

With the introduction of neuron coverage as a testing criterion for deep neural networks (DNNs), covering more neurons to expose more of the internal logic of DNNs became the main goal of many research studies. While some works have made progress, new challenges for neuron-coverage-based testing methods have emerged, chiefly: establishing better neuron selection and activation strategies, which influence not only the neuron coverage achieved but also testing efficiency; validating testing results automatically; and labeling generated test cases to reduce manual work. In this article, we put forward Test4Deep, an effective white-box DNN testing approach based on neuron coverage. It is built on a differential testing framework to automatically verify inconsistent DNN behavior. We designed a strategy that tracks inactive neurons and constantly triggers them in each iteration to maximize neuron coverage. Furthermore, we devised an optimization function that guides the DNN under test to deviate in its predictions between the original input and the generated test data, and that keeps generation perturbations imperceptible to avoid manually checking test oracles. We conducted comparative experiments with two state-of-the-art white-box testing methods, DLFuzz and DeepXplore.
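Neuron coverage itself is straightforward to compute. Below is a minimal sketch of the commonly used batch-level definition (the threshold and array shapes are illustrative, not Test4Deep's exact metric):

```python
import numpy as np

def neuron_coverage(activations, threshold=0.5):
    """A neuron counts as covered when its activation exceeds `threshold`
    for at least one input in the batch (DeepXplore-style definition)."""
    covered, total = 0, 0
    for layer in activations:               # each layer: (batch, n_neurons)
        covered += int((layer > threshold).any(axis=0).sum())
        total += layer.shape[1]
    return covered / total

# Two inputs, one layer of three neurons: neurons 1 and 2 fire above 0.5
# for at least one input, neuron 0 never does.
acts = [np.array([[0.1, 0.9, 0.3],
                  [0.2, 0.8, 0.6]])]
cov = neuron_coverage(acts)   # 2 of 3 neurons covered
```

Strategies like Test4Deep's aim to raise this fraction by deliberately targeting the neurons that remain below the threshold across the generated inputs.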
Empirical results on three popular datasets with nine DNNs demonstrated that, compared to DLFuzz and DeepXplore, Test4Deep on average exceeded them by 32.87% and 35.69% in neuron coverage while reducing testing time by 58.37% and 53.24%, respectively. At the same time, Test4Deep also produced 58.37% and 53.24% more test cases with 23.81% and 98.40% fewer perturbations. Even compared with the two highest neuron coverage strategies of DLFuzz, Test4Deep still enhanced neuron coverage by 4.34% and 23.23% and achieved 94.48% and 85.67% higher generation time efficiency. Furthermore, Test4Deep can improve the accuracy and robustness of DNNs by merging the generated test cases and retraining.

A real-world recommender system needs to be regularly retrained to keep up with new data. In this work, we consider how to efficiently retrain graph convolution network (GCN)-based recommender models, which are state-of-the-art techniques for collaborative recommendation. To pursue high efficiency, we set the target as using only new data for model updating while not sacrificing recommendation accuracy compared with full model retraining. This is nontrivial to achieve, since the interaction data participates in both the graph structure for model construction and the loss function for model learning, whereas the old graph structure is not allowed to be used in model updating. Toward this goal, we propose a causal incremental graph convolution (IGC) approach, which consists of two new operators, incremental graph convolution (IGC) and colliding effect distillation (CED), to estimate the output of full graph convolution. In particular, we devise simple and effective modules for IGC that ingeniously combine the old representations and the incremental graph, and effectively fuse the long- and short-term preference signals. CED aims to avoid the out-of-date issue of inactive nodes that are not in the incremental graph; it connects the new data with inactive nodes through causal inference.
In particular, CED estimates the causal effect of the new data on the representations of inactive nodes by controlling their collider. Extensive experiments on three real-world datasets demonstrate both accuracy gains and significant speed-ups over the existing retraining mechanism.
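The core incremental idea, propagating only over the new interactions and fusing the result with the old representations, can be sketched as a toy numpy illustration. The fusion weight `alpha` and all shapes below are assumptions, not the paper's operators; the last rows also show the stale-representation problem for inactive nodes that CED targets:

```python
import numpy as np

def norm_adj(A):
    """Symmetrically normalized adjacency with self-loops (GCN-style)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

rng = np.random.default_rng(4)
n, dim = 6, 4
H_old = rng.standard_normal((n, dim))   # representations before new data

# New interactions arrive only among nodes {0, 1, 2}; nodes 3-5 are inactive.
A_new = np.zeros((n, n))
A_new[0, 1] = A_new[1, 0] = 1.0
A_new[1, 2] = A_new[2, 1] = 1.0

alpha = 0.7   # weight on long-term (old) vs. short-term (new) signal
H_inc = norm_adj(A_new) @ H_old          # propagate only over the new graph
H = alpha * H_old + (1 - alpha) * H_inc  # fuse long- and short-term signals
```

Note that nodes 3-5 receive only their own self-loop, so their fused representations equal their old ones; this is exactly the out-of-date issue that colliding effect distillation is designed to correct through causal inference on the collider structure.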

Article authors: Krogduggan7285 (Hansen Christoffersen)