Figueroabarbour0007

From Iurium Wiki


Numerical simulation demonstrates the performance of the proposed strategy.

Object clustering has received considerable research attention recently. However, 1) most existing object clustering methods use visual information while ignoring the important tactile modality, which inevitably degrades model performance, and 2) simply concatenating visual and tactile information in a multiview clustering method prevents the complementary information from being fully explored, since vision and touch differ in many ways. To address these issues, we put forward a graph-based visual-tactile fused object clustering framework with two modules: 1) a modality-specific representation learning module (MR) and 2) a unified affinity graph learning module (MU); see the simplified sketch below. Specifically, MR focuses on learning modality-specific representations for visual-tactile data, where deep non-negative matrix factorization (NMF) is adopted to extract the hidden information behind each modality. Meanwhile, we employ an autoencoder-like structure to enhance the robustness of the learned representations, and two graphs to improve their compactness. Furthermore, MU mitigates the differences between vision and touch and maximizes their mutual information, adopting a minimizing-disagreement scheme to guide the modality-specific representations toward a unified affinity graph. To achieve ideal clustering performance, a Laplacian rank constraint is imposed to regularize the learned graph toward the ideal number of connected components, so that noise causing wrong connections is removed and clustering labels can be obtained directly. Finally, we propose an efficient alternating iterative minimization updating strategy, together with a theoretical proof of the framework's convergence. Comprehensive experiments on five public datasets demonstrate the superiority of the proposed framework.

By training different models and averaging their predictions, the performance of a machine-learning algorithm can be improved. Jointly optimizing multiple models is expected to generalize well to further data, which requires transferring generalization knowledge between the models. In this article, a multiple kernel mutual learning method based on transfer learning of combined mid-level features is proposed for hyperspectral classification. Three layers of homogeneous superpixels are computed on the image formed by PCA and are used for computing mid-level features. The three mid-level features are 1) the sparse reconstructed feature; 2) the combined mean feature; and 3) uniqueness. The sparse reconstructed feature is obtained by a joint sparse representation model under the constraint of the three-scale superpixels' boundaries and regions. The combined mean features are computed as the average spectra within the multilayer superpixels, and the uniqueness is obtained from the superposed manifold-ranking values of the multilayer superpixels. Next, three kernels of the samples in the different feature spaces are computed for mutual learning by minimizing their divergence. Then, a combined kernel is constructed to optimize the sample distance measurement and is used to train SVM classifiers. Experiments on real hyperspectral datasets demonstrate that the proposed method performs significantly better than several state-of-the-art competitors based on MKL and deep learning.
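
As a rough illustration of the visual-tactile pipeline described above, the following sketch approximates the two modules with off-the-shelf components: a single-layer NMF stands in for deep NMF (module MR), and a plain average of kNN affinity graphs followed by spectral clustering stands in for the disagreement-minimizing unified graph with the Laplacian rank constraint (module MU). All names, dimensions, and the fusion rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch: modality-specific NMF + fused-affinity spectral clustering.
# This approximates, not reproduces, the framework described above.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

def modality_representation(X, n_components=32, seed=0):
    """MR step (simplified): one-layer NMF instead of deep NMF."""
    model = NMF(n_components=n_components, init="nndsvda",
                max_iter=500, random_state=seed)
    return model.fit_transform(X)  # nonnegative, shape (n_samples, n_components)

def affinity(H, k=10):
    """Symmetric kNN affinity graph on a learned representation."""
    W = kneighbors_graph(H, n_neighbors=k, mode="connectivity", include_self=False)
    return (0.5 * (W + W.T)).toarray()  # symmetrize

def visual_tactile_cluster(X_visual, X_tactile, n_clusters=5):
    """MU step (simplified): average the modality graphs into one unified affinity."""
    W_unified = 0.5 * (affinity(modality_representation(X_visual))
                       + affinity(modality_representation(X_tactile)))
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            random_state=0)
    return sc.fit_predict(W_unified)

# Toy usage with random nonnegative data standing in for real visual/tactile features.
rng = np.random.default_rng(0)
labels = visual_tactile_cluster(rng.random((200, 128)), rng.random((200, 64)))
```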
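
The combined-kernel step of the hyperspectral method above can be illustrated with a precomputed-kernel SVM. In this sketch the three mid-level features are random stand-ins and the kernel weights are fixed by hand, whereas the paper learns the combination via mutual learning (divergence minimization).

```python
# Minimal sketch of a combined-kernel SVM in the spirit of the MKL approach above.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(feats_a, feats_b, weights, gamma=0.5):
    """Weighted sum of RBF kernels, one per feature space."""
    return sum(w * rbf_kernel(Fa, Fb, gamma=gamma)
               for w, Fa, Fb in zip(weights, feats_a, feats_b))

rng = np.random.default_rng(0)
n_train, n_test = 300, 100
# Placeholders for the sparse-reconstruction, combined-mean, and uniqueness features.
train_feats = [rng.standard_normal((n_train, d)) for d in (50, 30, 10)]
test_feats = [rng.standard_normal((n_test, d)) for d in (50, 30, 10)]
y_train = rng.integers(0, 3, size=n_train)

weights = [0.5, 0.3, 0.2]  # assumed fixed weights; the paper optimizes these
K_train = combined_kernel(train_feats, train_feats, weights)
K_test = combined_kernel(test_feats, train_feats, weights)

clf = SVC(kernel="precomputed").fit(K_train, y_train)
pred = clf.predict(K_test)  # K_test has shape (n_test, n_train)
```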
People can infer the weather from clouds. Various weather phenomena are linked inextricably to clouds, which can be observed by meteorological satellites. Thus, cloud images obtained by meteorological satellites can be used to identify different weather phenomena and to provide both current meteorological status and future projections. How to classify and recognize cloud images automatically, especially with deep learning, is an interesting topic. Generally speaking, large-scale training data are essential for deep learning, yet no such cloud image database has existed to date. We therefore propose a Large-Scale Cloud Image Database for Meteorological Research (LSCIDMR). To the best of our knowledge, it is the first publicly available satellite cloud image benchmark database for meteorological research in which weather systems are linked directly with the cloud images. LSCIDMR contains 104,390 high-resolution images covering 11 classes, with two annotation schemes, 1) single-label annotation and 2) multiple-label annotation, called LSCIDMR-S and LSCIDMR-M, respectively. The labels are annotated manually, yielding a total of 414,221 multiple labels and 40,625 single labels. Several representative deep learning methods are evaluated on LSCIDMR, and the results can serve as useful baselines for future research. Furthermore, the experimental results demonstrate that effective deep learning models can be learned from a sufficiently large image database for cloud image classification.

Clustering is one of the fundamental tasks in computer vision and pattern recognition. Recently, deep clustering methods (algorithms based on deep learning) have attracted wide attention for their impressive performance. Most of these algorithms combine deep unsupervised representation learning with standard clustering. However, separating representation learning from clustering leads to suboptimal solutions, because the two-stage strategy prevents representation learning from adapting to the subsequent task (e.g., clustering according to specific cues). To overcome this issue, efforts have been made to adapt the representation and the cluster assignment dynamically, but current state-of-the-art methods suffer from heuristically constructed objectives in which the representation and the cluster assignment are optimized alternately. To further standardize the clustering problem, we formulate the objective of clustering as finding a precise feature that serves as the cue for cluster assignment. Based on this, we propose a general-purpose deep clustering framework that integrates representation learning and clustering into a single pipeline for the first time. The proposed framework exploits the powerful ability of recently developed generative models to learn intrinsic features, and imposes entropy minimization on the distribution of the cluster assignments via a dedicated variational algorithm. The experimental results show that the performance of the proposed method is superior, or at least comparable, to the state-of-the-art methods on handwritten digit, fashion, face, and object recognition benchmark datasets.
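
A single-label baseline of the kind evaluated on LSCIDMR-S could look like the following PyTorch sketch. The dataset path, folder-per-class layout, backbone, and hyperparameters are assumptions for illustration, not the paper's reported setup.

```python
# Sketch of a single-label classification baseline for an LSCIDMR-S-style dataset.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Assumed ImageFolder layout: lscidmr_s/train/<class_name>/*.png
train_set = datasets.ImageFolder("lscidmr_s/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 11)  # 11 weather-system classes
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:  # one epoch shown; real training runs many
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

For the multiple-label variant (LSCIDMR-M), the final layer would instead be trained with per-class sigmoids and nn.BCEWithLogitsLoss against multi-hot label vectors.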
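
The entropy-minimization term on cluster assignments that the deep clustering framework above imposes can be written compactly. The sketch below shows only this penalty; the generative (variational) part of the objective is left abstract as an assumed recon_loss.

```python
# Sketch of the entropy-minimization penalty on soft cluster assignments.
import torch

def assignment_entropy(logits):
    """Mean per-sample entropy of soft cluster assignments.
    logits: (batch, n_clusters) unnormalized scores from a clustering head."""
    log_q = torch.log_softmax(logits, dim=1)
    q = log_q.exp()
    return -(q * log_q).sum(dim=1).mean()

# Assumed usage inside a training step (recon_loss comes from the generative model):
# loss = recon_loss + lambda_entropy * assignment_entropy(cluster_logits)
```

Minimizing this term pushes each sample's assignment distribution toward a single cluster, which is the confident-assignment behavior the abstract describes.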

Article authors: Figueroabarbour0007 (Tuttle Toft)