Single-cell clustering is a crucial task in scRNA-seq analysis, as it reveals the natural grouping of cells. However, owing to the high noise and high dimensionality of scRNA-seq data, accurately identifying cell types from large mixtures of cells remains a challenge. To address this, we propose a novel subspace clustering algorithm termed SLRRSC. The method builds on the low-rank representation (LRR) model and aims to capture both the global and local properties inherent in the data. To make the LRR matrix describe the spatial relationships among samples more accurately, we introduce manifold-based graph regularization and a similarity constraint into the LRR framework. The graph regularization preserves the local geometric structure of the data during low-rank decomposition, so that the low-rank representation matrix retains more local structure information. By imposing the similarity constraint on the low-rank matrix, similarity information between sample pairs is further incorporated into the SLRRSC model, improving the ability of the low-rank method to learn global structure. The similarity constraint also makes the low-rank representation matrix symmetric, which makes it more interpretable in clustering applications. We compare the effectiveness of SLRRSC with other single-cell clustering methods on simulated data and real single-cell datasets. The results show that the method obtains a more accurate sample similarity matrix and effectively solves the problem of cell-type recognition.

Recently, electroencephalography (EEG) signals have shown great potential for emotion recognition. Nevertheless, multichannel EEG recordings lead to redundant data, computational burden, and hardware complexity. Hence, efficient channel selection, especially single-channel selection, is vital.
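One plausible way to write the SLRRSC objective sketched above combines the standard LRR program with a graph-regularization term and a symmetry (similarity) constraint; the specific norms and weights below are assumptions for illustration, not taken from the text:

```latex
\min_{Z,E}\ \|Z\|_{*} \;+\; \lambda \|E\|_{2,1}
           \;+\; \beta\,\operatorname{tr}\!\bigl(Z L Z^{\top}\bigr)
\quad \text{s.t.}\quad X = XZ + E,\qquad Z = Z^{\top}
```

where $X$ is the data matrix, $Z$ the low-rank representation, $E$ the error term, and $L$ the graph Laplacian encoding the local manifold structure; $\|\cdot\|_{*}$ denotes the nuclear norm.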
For this purpose, a technique termed brain rhythm sequencing (BRS) has been proposed, which interprets EEG through the dominant brain rhythm having the maximum instantaneous power at each 0.2 s timestamp. Dynamic time warping (DTW) is then used to classify rhythm sequences through a similarity measure. After evaluating the rhythm sequences on the emotion recognition task, the representative channel that yields the best accuracy can be identified, thereby realizing single-channel selection. In addition, the appropriate time segment for emotion recognition is estimated during these assessments. Results from a music emotion recognition (MER) experiment and three emotional datasets (SEED, DEAP, and MAHNOB) indicate that classification accuracies reach 70-82% using single-channel data with a 10 s time length. Such performance is remarkable given that minimizing data sources is the primary concern. Furthermore, individual characteristics in emotion recognition are investigated based on the channels and time segments found. This study therefore provides a novel method for single-channel selection in emotion recognition.

Echo state networks (ESNs) are a special type of recurrent neural network (RNN) in which the input and recurrent connections are traditionally generated randomly and only the output weights are trained. Despite the recent success of ESNs in various audio, image, and radar recognition tasks, we postulate that purely random initialization is not the ideal way to initialize ESNs. The aim of this work is to propose an unsupervised initialization of the input connections using the K-means algorithm on the training data. We show that, for a large variety of datasets, this initialization performs on par with or better than a randomly initialized ESN while needing significantly fewer reservoir neurons.
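The DTW similarity measure used above for classifying rhythm sequences can be sketched with the classic dynamic-programming recurrence (a minimal illustration, not the authors' implementation):

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between
    two 1-D sequences, with absolute difference as the local cost.
    Smaller values indicate more similar sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimal cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]
```

A nearest-neighbour classifier over rhythm sequences would then assign a test sequence the label of the training sequence with the smallest DTW distance.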
Furthermore, we discuss how this approach provides the opportunity to estimate a suitable reservoir size based on prior knowledge about the data.

Transform-domain least mean squares (TDLMS) adaptive filters encompass the class of learning algorithms in which the input data are subjected to a data-independent unitary transform followed by a power normalization stage as preprocessing steps. Because conventional transformations are not data-dependent, this preconditioning procedure has been shown theoretically to improve the convergence of the least mean squares (LMS) filter only for certain classes of input data, so the transformation must be tailored to the class of data. In practice, however, if the class of input data is not known beforehand, it is difficult to decide which transformation to use. Thus, there is a need for a learning framework that obtains such a preconditioning transformation from the input data before it is applied. We hypothesize that the underlying topology of the data affects the selection of the transformation. Modeling the input as a weighted finite graph, our method, called preconditioning using graphs (PrecoG), adaptively learns the desired transform by recursively estimating the graph Laplacian matrix. We show the efficacy of the transform as a generalized split preconditioner on a linear system of equations and in Hebbian-LMS learning models. In terms of the improvement of the condition number after applying the transformation, PrecoG performs significantly better than existing state-of-the-art techniques that involve unitary and nonunitary transforms.

Nonuniform sampling (NUS) is a powerful approach for enabling fast acquisition, but it requires sophisticated reconstruction algorithms. Faithful reconstruction from partially sampled exponentials is highly desirable in general signal processing and many applications.
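The split-preconditioning mechanism that PrecoG exploits can be illustrated with the simplest possible choice, a diagonal (Jacobi) split preconditioner. PrecoG itself learns the transform from a recursively estimated graph Laplacian, so the diagonal choice below is purely an illustration of how such a transform reduces the condition number:

```python
import numpy as np

def jacobi_split_precondition(A):
    """Apply a symmetric split preconditioner M^{-1/2} A M^{-1/2}
    with M = diag(A). This is the textbook Jacobi choice, standing
    in for the learned graph-Laplacian-based transform of PrecoG."""
    d = np.sqrt(np.diag(A))
    Minv_half = np.diag(1.0 / d)
    return Minv_half @ A @ Minv_half

# A badly scaled SPD matrix: strong diagonal scaling inflates cond(A)
B = np.eye(4) + 0.1 * np.ones((4, 4))     # well-conditioned core
S = np.diag([1.0, 10.0, 100.0, 1000.0])   # severe per-row scaling
A = S @ B @ S
P = jacobi_split_precondition(A)
# cond(P) is far smaller than cond(A) for this badly scaled matrix
```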
Deep learning (DL) has shown astonishing potential in this field, but problems such as a lack of robustness and explainability greatly limit its applications. In this work, by combining the merits of sparse model-based optimization and data-driven DL, we propose a DL architecture for spectra reconstruction from undersampled data, called MoDern. It builds the neural network by following the iterative reconstruction used to solve a sparse model, and we elaborately design a learnable soft-thresholding operator that adaptively eliminates the spectral artifacts introduced by undersampling. Extensive results on both synthetic and biological data show that MoDern enables more robust, higher-fidelity, and faster reconstruction than state-of-the-art methods. Remarkably, MoDern has a small number of network parameters and is trained solely on synthetic data, yet it generalizes well to biological data in various scenarios. Furthermore, we extend it to an open-access, easy-to-use cloud computing platform (XCloud-MoDern), contributing a promising strategy for the further development of biological applications.

Recent weakly supervised semantic segmentation methods generate pseudolabels to recover the position information lost in weak labels for training the segmentation network. Unfortunately, those pseudolabels often contain mislabeled regions and inaccurate boundaries due to the incomplete recovery of position information, which degrades the resulting semantic segmentation to a certain degree. In this article, we decompose the position information into two components, high-level semantic information and low-level physical information, and develop a componentwise approach to recover each component independently. Specifically, we propose a simple yet effective pseudolabel-updating mechanism that iteratively corrects mislabeled regions inside objects to precisely refine the high-level semantic information.
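The soft-thresholding operator at the heart of MoDern's model-based network, described above, can be sketched as the standard proximal operator of the l1 norm. In MoDern the threshold is a learnable parameter; here it is a fixed argument, purely for illustration:

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding: shrink each entry toward zero by lam and
    zero out anything with magnitude below lam. This is the proximal
    operator of lam * ||x||_1, commonly used to suppress artifacts
    in sparse reconstruction."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

Applied elementwise to a spectrum estimate, small (artifact-dominated) coefficients are removed while large (signal-dominated) ones are merely shrunk.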
To reconstruct the low-level physical information, we utilize a customized superpixel-based random-walk mechanism to trim the boundaries. Finally, we design a novel network architecture, namely a dual-feedback network (DFN), to integrate the two mechanisms into a unified model. Experiments on benchmark datasets show that DFN outperforms existing state-of-the-art methods in terms of mean intersection-over-union (mIoU).

Deep models have been shown to be vulnerable to catastrophic forgetting, a phenomenon in which recognition performance on old data degrades when a pre-trained model is fine-tuned on new data. Knowledge distillation (KD) is a popular incremental approach to alleviating catastrophic forgetting. However, it usually fixes the absolute values of neural responses for isolated historical instances, without considering the intrinsic structure of the responses produced by a convolutional neural network (CNN) model. To overcome this limitation, we recognize the importance of the global properties of the whole instance set and treat them as a behavioral characteristic of a CNN model relevant to incremental learning. On this basis, 1) we design an instance neighborhood-preserving (INP) loss to maintain the order of pairwise instance similarities of the old model in the feature space; 2) we devise a label priority-preserving (LPP) loss to preserve the label ranking lists within instance-wise label probability vectors in the output space; and 3) we introduce an efficient differentiable ranking algorithm for calculating the two loss functions. Extensive experiments on CIFAR100 and ImageNet show that our approach achieves state-of-the-art performance.

In this paper, we explore a data-centric approach to the multiple sequence alignment (MSA) construction problem.
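The INP loss described above maintains the order of pairwise instance similarities from the old model. One assumed, illustrative formulation (not the paper's exact loss) penalizes every ordering reversal between old and new pairwise similarities with a hinge:

```python
import numpy as np

def pairwise_cosine(F):
    """Pairwise cosine similarities between the rows of F."""
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    return Fn @ Fn.T

def inp_style_loss(F_old, F_new, margin=0.0):
    """For every pair of instance pairs, penalize the new features
    when they reverse the old model's similarity ordering. Quadratic
    in the number of pairs; fine for a sketch, not for training."""
    iu = np.triu_indices(len(F_old), k=1)
    s_old = pairwise_cosine(F_old)[iu]
    s_new = pairwise_cosine(F_new)[iu]
    loss, count = 0.0, 0
    for i in range(len(s_old)):
        for j in range(len(s_old)):
            if s_old[i] > s_old[j]:  # old model ranked pair i above pair j
                loss += max(0.0, margin + s_new[j] - s_new[i])
                count += 1
    return loss / max(count, 1)
```

Identical old and new features give zero loss; any reversal of the old similarity ordering yields a positive penalty.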
Unlike the algorithm-centric approach, which reduces the construction problem to a combinatorial optimisation problem based on some abstract model, the data-centric approach uses classifiers trained on existing benchmark data to guide the construction. We have identified two simple classifications that help us construct better alignments, and we show that shallow machine learning algorithms suffice to train sensitive models for these classifications. Based on these models, we have implemented a new multiple sequence alignment pipeline called MLProbs. When compared with ten other popular alignment tools over four benchmark databases (namely, BAliBASE, OXBench, OXBench-X, and SABMark), MLProbs consistently gives the highest TC score among all tools. More importantly, MLProbs shows non-trivial improvement for protein families with low similarity; in particular, when evaluated on protein families with similarity of no more than 50%, MLProbs achieves a TC score of 56.93, while the next best three tools fall in the range [55.41, 55.91] (an improvement of more than 1.8%). We also compared the performance of MLProbs and other MSA tools on phylogenetic tree construction and protein secondary structure prediction, where MLProbs again performed best.

Due to inevitable noise introduced during scanning and quantization, 3D reconstruction via RGB-D sensors suffers from errors in both geometry and texture, leading to artifacts such as camera drifting, mesh distortion, texture ghosting, and blurriness. Given an imperfectly reconstructed 3D model, most previous methods have focused on refining either geometry, texture, or camera pose. Consequently, previous joint optimization methods have used different optimization schemes and objectives for each component, forming a complicated system.
In this paper, we propose a novel optimization approach based on differentiable rendering, which integrates the optimization of camera pose, geometry, and texture into a unified framework by enforcing consistency between the rendered results and the corresponding RGB-D inputs. Based on the unified framework, we introduce a joint optimization approach to fully exploit the inter-relationships among the three objective components, and describe an adaptive interleaving strategy to improve optimization stability and efficiency.
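The interleaving strategy can be illustrated with a toy stand-in: three coupled scalar blocks (for pose, geometry, and texture) alternately updated by gradient steps on a shared quadratic "consistency" objective. Both the objective and the fixed round-robin schedule below are invented for illustration; the paper's strategy is adaptive and operates on rendered images:

```python
def toy_consistency_loss(p, g, t, target):
    """Stand-in for rendered-vs-observed consistency: a scalar
    residual coupling pose p, geometry g, and texture t."""
    return (p + g * t - target) ** 2

def interleaved_optimize(target, steps=200, lr=0.05):
    """Round-robin block updates: recompute the shared residual
    before each block's gradient step, as interleaving requires."""
    p, g, t = 0.0, 1.0, 1.0
    for _ in range(steps):
        r = p + g * t - target
        p -= lr * 2 * r          # pose step
        r = p + g * t - target
        g -= lr * 2 * r * t      # geometry step
        r = p + g * t - target
        t -= lr * 2 * r * g      # texture step
    return p, g, t
```

Recomputing the residual between block updates (rather than reusing a stale one) is what keeps the alternation stable, which is the intuition behind interleaving the three optimization objectives.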