Christoffersenzimmermann8180

Linear discriminant analysis (LDA) has been widely used as a technique for feature extraction. However, LDA may fail on data drawn from different domains, for two reasons: 1) the distribution discrepancy of the data may disturb the linear transformation matrix so that it cannot extract the most discriminative features, and 2) the original design of LDA does not consider unlabeled data, so unlabeled data cannot take part in the training process to further improve LDA's performance. To address these problems, in this brief we propose a novel transferable LDA (TLDA) method that extends LDA to the scenario in which the data have different probability distributions. The whole learning process of TLDA is driven by the philosophy that data from the same subspace have a low-rank structure. The matrix rank in TLDA is the key learning criterion used to conduct local and global linear transformations that restore the low-rank structure of data from different distributions and enlarge the distances among different subspaces. In doing so, the variation caused by distribution discrepancy within the same subspace can be reduced, i.e., the data can be aligned well, and a maximally separated structure can be achieved for data from different subspaces. A simple projected subgradient-based method is proposed to optimize the objective of TLDA, and a strict theoretical proof is provided to guarantee quick convergence. Experimental evaluation on public data sets demonstrates that TLDA achieves better classification performance and outperforms the state-of-the-art methods.

Atrial fibrillation (AF), the most commonly occurring type of cardiac arrhythmia, is one of the main causes of morbidity and mortality worldwide. The timely diagnosis of AF is an equally important and challenging task because of its asymptomatic and episodic nature. In this paper, state-of-the-art ECG-data-based machine learning models and signal processing techniques applied to the automatic diagnosis of AF are reviewed. Moreover, key biomarkers of AF on the ECG and the common methods and equipment used for the collection of ECG data are discussed. The modern wearable and implantable ECG sensing technologies used for gathering AF data are also presented briefly. Finally, key challenges associated with the development of automatic AF diagnosis solutions are highlighted. This is the first review of its kind to comprehensively discuss all of these aspects of automatic AF diagnosis in one place. We observe a dire need for low-energy, low-cost, yet accurate automatic diagnosis solutions for the proactive management of AF.

There is widespread interest in estimating the fluorescence properties of natural materials in an image. However, separating the reflected and fluoresced components is difficult because it is impossible to distinguish reflected from fluoresced photons without controlling the illuminant spectrum. We show how to jointly estimate the reflectance and fluorescence from a single set of images acquired under multiple illuminants. We present a framework based on a linear approximation to the physical equations describing image formation in terms of surface spectral reflectance and fluorescence due to multiple fluorophores. We relax the non-convex inverse estimation problem in order to jointly estimate the reflectance and fluorescence properties in a single optimization step. We provide a software implementation of the solver for our method and prior methods. We evaluate the accuracy and reliability of the method using both simulations and experimental data. To evaluate the methods experimentally, we built a custom imaging system using a monochrome camera, a filter wheel with bandpass transmissive filters, and a small number of light-emitting diodes. We compared the methods based upon our framework with the ground truth as well as with prior methods.
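
The image-formation model above is linear in the unknown reflectance and fluorescence once the illuminants and camera sensitivities are known, so the relaxed problem can be illustrated as a single regularized least-squares solve. The following is a minimal simulation sketch, not the authors' implementation: the dimensions, the strictly lower-triangular (Stokes-shift) parameterization of the excitation-emission matrix, and the ridge regularizer standing in for the paper's convex relaxation are all illustrative assumptions.

```python
import numpy as np

# Minimal simulation sketch of the joint estimate. Under illuminant l the
# camera sees m = S @ (diag(l) @ r + F @ l): S = sensor sensitivities,
# r = reflectance, F = Donaldson-style excitation-emission matrix. F is
# restricted to be strictly lower triangular (Stokes shift: emission at
# longer wavelengths than excitation), which keeps r and F identifiable.
# Dimensions and the ridge regularizer are illustrative assumptions.

rng = np.random.default_rng(0)
W, C, K = 10, 10, 15                 # wavelengths, channels, illuminants
S = rng.random((C, W))               # camera sensitivities (assumed known)
L = rng.random((K, W))               # illuminant spectra (assumed known)

support = [(i, j) for i in range(W) for j in range(W) if i > j]
r_true = rng.random(W)
F_true = np.zeros((W, W))
F_true[tuple(zip(*support))] = 0.1 * rng.random(len(support))

def measure(r, F):
    """Stack the camera response under every illuminant: shape (K, C)."""
    return np.stack([S @ (l * r + F @ l) for l in L])

# One linear block per illuminant; unknowns x = [r; F entries on `support`].
blocks = []
for l in L:
    A_r = S * l                                            # S @ diag(l)
    A_F = np.stack([l[j] * S[:, i] for i, j in support], axis=1)
    blocks.append(np.hstack([A_r, A_F]))
A, b = np.vstack(blocks), measure(r_true, F_true).ravel()

# Ridge-regularized least squares stands in for the paper's relaxation.
lam = 1e-6
x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
r_hat = x[:W]
print("relative reflectance error:",
      np.linalg.norm(r_hat - r_true) / np.linalg.norm(r_true))
```

Restricting the excitation-emission matrix below the diagonal is what keeps this sketch identifiable: a diagonal entry of F would scale the illuminant at the same wavelength as the reflectance does, so the two could not be separated from any number of illuminants.
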
Image denoising technologies in the Euclidean domain have achieved good results and are becoming mature. However, in recent years, many real-world applications encountered in computer vision and geometric modeling involve image data defined on irregular domains modeled by huge graphs, which raises the question of how to solve image denoising problems defined on graphs. In this paper, we propose a novel model for removing mixed or unknown noise from images on graphs. The objective is to minimize the sum of a weighted fidelity term and a sparse regularization term that additionally utilizes a wavelet frame transform on graphs to retain the feature details of images defined on graphs. Specifically, the weighted fidelity term with ℓ1-norm and ℓ2-norm is designed based on an analysis of the distribution of the mixed noise. The augmented Lagrangian and accelerated proximal gradient methods are employed to obtain the optimal solution to the problem. Finally, supporting numerical results and comparative analyses with other denoising algorithms are provided. Notably, we investigate image denoising with unknown noise or with a wide range of mixed noise, especially mixtures of Poisson, Gaussian, and impulse noise. Experimental results reported for synthetic and real images on graphs demonstrate that the proposed method is effective and efficient, exhibiting better performance in removing mixed or unknown noise from images on graphs than other denoising algorithms in the literature. The method effectively removes mixed or unknown noise while retaining feature details, and it opens a new avenue for denoising images in irregular domains.
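
To make the optimization machinery above concrete, here is a minimal accelerated-proximal-gradient (FISTA) sketch for denoising a signal on graph nodes. The graph Fourier basis (Laplacian eigenvectors) stands in for the paper's wavelet frame on graphs, and a plain ℓ2 fidelity stands in for its weighted ℓ1/ℓ2 fidelity; the graph, signal, and parameters are illustrative assumptions.

```python
import numpy as np

# Minimal FISTA sketch for denoising a signal on graph nodes: l2 fidelity
# plus an l1 penalty on transform coefficients. The Laplacian eigenbasis
# (graph Fourier transform) stands in for the wavelet frame on graphs used
# in the paper; all names below are illustrative.

def fista_graph_denoise(y, W_adj, lam=0.3, n_iter=100):
    """Denoise node signal y on a graph with adjacency matrix W_adj."""
    L = np.diag(W_adj.sum(axis=1)) - W_adj       # combinatorial Laplacian
    _, U = np.linalg.eigh(L)                     # orthonormal eigenbasis

    def soft(v, t):                              # soft-thresholding operator
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    # Minimize 0.5*||U c - y||^2 + lam*||c||_1 over coefficients c.
    # U is orthonormal, so a gradient step size of 1 is safe.
    c = U.T @ y
    z, t = c.copy(), 1.0
    for _ in range(n_iter):
        grad = U.T @ (U @ z - y)
        c_new = soft(z - grad, lam)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = c_new + ((t - 1.0) / t_new) * (c_new - c)
        c, t = c_new, t_new
    return U @ c                                 # back to the node domain

# Toy usage: a ring graph carrying a noisy piecewise-constant signal.
n = 64
W_adj = np.zeros((n, n))
for i in range(n):
    W_adj[i, (i + 1) % n] = W_adj[(i + 1) % n, i] = 1.0
clean = np.where(np.arange(n) < n // 2, 1.0, -1.0)
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(n)
denoised = fista_graph_denoise(noisy, W_adj)
print("error reduced:",
      np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```
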
Spectral or spatial dictionaries have been widely used in fusing low-spatial-resolution hyperspectral (LH) images with high-spatial-resolution multispectral (HM) images. However, using only a spectral dictionary is insufficient for preserving spatial information, and vice versa. To address this problem, a new LH and HM image fusion method using optimized twin dictionaries, termed OTD, is proposed in this paper. The fusion problem of OTD is formulated analytically in the framework of sparse representation, as an optimization of twin spectral-spatial dictionaries and their corresponding sparse coefficients. More specifically, the spectral dictionary representing the generalized spectra and its spectral sparse coefficients are optimized by utilizing the observed LH and HM images in the spectral domain, while the spatial dictionary representing the spatial information and its spatial sparse coefficients are optimized by modeling the remaining high-frequency information in the spatial domain. In addition, without non-negative constraints, the alternating direction method of multipliers (ADMM) is employed to implement the above optimization process. Comparisons with related state-of-the-art fusion methods on various datasets demonstrate that the proposed OTD method achieves better fusion performance in both the spatial and spectral domains.

In recent years, deep learning has been successfully applied to the analysis and processing of ultrasound images. To date, most of this research has focused on segmentation and view recognition. This paper benchmarks different convolutional neural network algorithms for motion estimation in ultrasound imaging. We evaluated and compared several networks derived from FlowNet2, one of the most efficient architectures in computer vision. The networks were tested with and without transfer learning, and the best configuration was compared against the particle-imaging-velocimetry method, a popular state-of-the-art block-matching algorithm. Rotations are known to be difficult to track in ultrasound images due to significant speckle decorrelation. We therefore focused on images of rotating disks, which could be tracked through speckle features only. Our database consisted of synthetic and in-vitro B-mode images after log-compression and covered a large range of rotational speeds. One of the FlowNet2 sub-networks, FlowNet2SD, produced competitive results, with a motion-field error smaller than 1 pixel on real data after transfer learning based on simulated data. These errors remain small over a large velocity range without the need for hyper-parameter tuning, which indicates the high potential and adaptability of deep learning solutions for motion estimation in ultrasound imaging.

Dynamic functional connectivity (dFC) analysis using resting-state functional magnetic resonance imaging (rs-fMRI) is currently an advanced technique for capturing the dynamic changes of neural activity in brain disease identification. Most existing dFC modeling methods extract dynamic interaction information using sliding-window-based correlation, whose performance is very sensitive to the window parameters. Because few studies have convincingly identified the optimal combination of window parameters, sliding-window-based correlation may not be the optimal way to capture the temporal variability of brain activity. In this paper, we propose a novel adaptive dFC model, aided by a deep spatial-temporal feature fusion method, for mild cognitive impairment (MCI) identification. Specifically, we adopt an adaptive ultra-weighted-lasso recursive least squares algorithm to estimate the adaptive dFC, which effectively alleviates the problem of parameter optimization. Then, we extract temporal and spatial features from the adaptive dFC. In order to generate coarser multi-domain representations for subsequent classification, the temporal and spatial features are further mapped into comprehensive fused features with a deep feature fusion method. Experimental results show that the classification accuracy of the proposed method reaches 87.7%, at least a 5.5% improvement over the state-of-the-art methods. These results demonstrate the superiority of the proposed method for MCI classification and indicate its effectiveness in the early identification of brain abnormalities.
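
For context, the sliding-window correlation baseline whose parameter sensitivity motivates the adaptive model can be sketched in a few lines; the ROI count, window lengths, and stride below are illustrative assumptions. Varying win_len changes both the number of connectivity snapshots and their variability, which is exactly the sensitivity described above.

```python
import numpy as np

# Minimal sketch of the sliding-window dFC baseline. ROI count, window
# lengths, and stride are illustrative assumptions.

def sliding_window_dfc(ts, win_len=30, stride=5):
    """ts: (T, R) array, T time points for R ROIs.
    Returns an (n_windows, R, R) stack of correlation matrices."""
    T, _ = ts.shape
    mats = []
    for start in range(0, T - win_len + 1, stride):
        window = ts[start:start + win_len]               # (win_len, R)
        mats.append(np.corrcoef(window, rowvar=False))   # (R, R)
    return np.stack(mats)

# Toy usage: synthetic time series for 10 ROIs. Both the number of
# snapshots and the spread of the estimated correlations shrink as the
# window grows, illustrating the window-parameter sensitivity.
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 10))
for win_len in (20, 40, 80):
    dfc = sliding_window_dfc(ts, win_len=win_len)
    print(win_len, dfc.shape, round(float(dfc.std()), 3))
```
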
Energy-resolved computed tomography (ErCT) with a photon counting detector concurrently produces multiple CT images corresponding to different photon energy ranges. It has the potential to generate energy-dependent images with an improved contrast-to-noise ratio and sufficient material-specific information. However, since the number of photons detected in one energy bin of ErCT is smaller than that in conventional energy-integrating CT (EiCT), ErCT images are inherently noisier than EiCT images, which leads to increased noise and bias in the subsequent material estimation. In this work, we first analyze in depth the intrinsic tensor properties of the two-dimensional (2D) ErCT images acquired in different energy bins, and we then present a Full-Spectrum-knowledge-aware Tensor analysis and processing (FSTensor) method for ErCT reconstruction that suppresses noise-induced artifacts, yielding high-quality ErCT images and high-accuracy material images. The presented method is based on three considerations regarding the tensor structure of the 2D ErCT images acquired in different energy bins, and it is compared with principal component analysis, tensor-based dictionary learning, and low-rank tensor decomposition with spatial-temporal total variation methods.

In conventional focused beamforming (CFB), there is a known tradeoff between the active aperture size of the ultrasound transducer array and the resulting image quality. Increasing the size of the active aperture improves the image quality of the ultrasound system at the expense of increased system cost. An alternate approach is to drop the requirement of having consecutive active receive elements and instead place them in a random order within a larger aperture. This, in turn, creates an undersampled situation in which only M active elements are placed in a larger aperture that can accommodate N consecutive receive elements (with M < N). It is possible to formulate and solve this undersampling problem using a compressed sensing (CS) approach. In our previous work, we proposed a Gaussian undersampling strategy for reducing the number of active receive elements. In this work, we introduce a novel framework, namely the Gaussian undersampling-based CS framework (GAUCS), with wave atoms as a sparsifying basis for the CFB imaging method.
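
The Gaussian undersampling idea can be illustrated with a toy one-dimensional CS sketch: receive-element positions are drawn from a Gaussian density over the full aperture, and an ℓ1-regularized inversion reconstructs a signal from only the active elements. A DCT basis stands in for the wave-atom dictionary, and every name and parameter here is an illustrative assumption rather than the GAUCS implementation.

```python
import numpy as np
from scipy.fft import dct, idct

# Toy 1-D sketch of Gaussian undersampling plus l1 reconstruction (ISTA).
# A DCT basis stands in for the wave-atom dictionary; the 1-D signal stands
# in for per-channel data; all parameters are illustrative assumptions.

rng = np.random.default_rng(0)
N, M = 128, 48                       # full aperture slots, active elements

# Draw M distinct element positions from a Gaussian density centered on the
# aperture, so central elements are more likely to remain active.
p = np.exp(-0.5 * ((np.arange(N) - N / 2) / (N / 4)) ** 2)
active = np.sort(rng.choice(N, size=M, replace=False, p=p / p.sum()))

# Test signal that is sparse in the DCT domain, observed only at the
# active elements (the undersampled measurement y).
coeffs = np.where(rng.random(N) < 0.05, rng.standard_normal(N), 0.0)
x_true = idct(coeffs, norm="ortho")
y = x_true[active]

# ISTA for min_c 0.5*||P idct(c) - y||^2 + lam*||c||_1, where P selects the
# active elements; step size 1 is safe since the forward operator has norm
# at most 1 (a selection composed with an orthonormal transform).
lam, c = 0.01, np.zeros(N)
for _ in range(300):
    resid = idct(c, norm="ortho")[active] - y    # residual at active elements
    g = np.zeros(N)
    g[active] = resid                            # adjoint of the selection P
    c = c - dct(g, norm="ortho")                 # gradient step in coeff space
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
x_hat = idct(c, norm="ortho")
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```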

Article authors: Christoffersenzimmermann8180 (Raynor Hodges)