Lawbean1207

From Iurium Wiki

We solve the STAR model by an alternating optimization algorithm. Each sub-problem is transformed into a vectorized least squares regression with a closed-form solution. Comprehensive experiments on commonly tested datasets demonstrate that the proposed STAR model produces better quantitative and qualitative performance than previous competing methods on illumination and reflectance decomposition, low-light image enhancement, and color correction. The code is publicly available at https://github.com/csjunxu/STAR.

Sparse coding has achieved great success in various image processing tasks. However, a benchmark for measuring the sparsity of an image patch/group is missing, since sparse coding is essentially an NP-hard problem. This work attempts to fill the gap from the perspective of rank minimization. We first design an adaptive dictionary to bridge the gap between group-based sparse coding (GSC) and rank minimization. Then, we show that under the designed dictionary, GSC and the rank minimization problem are equivalent, and therefore the sparse coefficients of each patch group can be measured by estimating the singular values of each patch group. We thus obtain a benchmark for measuring the sparsity of each patch group, because the singular values of the original image patch groups can be easily computed by singular value decomposition (SVD). This benchmark can be used to evaluate the performance of any norm minimization method in sparse coding by analyzing its corresponding rank minimization counterpart. Towards this end, we exploit four well-known rank minimization methods to study the sparsity of each patch group, and weighted Schatten p-norm minimization (WSNM) is found to be the closest to the real singular values of each patch group. Inspired by the aforementioned equivalence of rank minimization and GSC, WSNM can be translated into a non-convex weighted ℓp-norm minimization problem in GSC. Based on this benchmark, weighted ℓp-norm minimization is expected to obtain better performance than the three other norm minimization methods, i.e., ℓ1-norm, ℓp-norm, and weighted ℓ1-norm minimization. To verify the feasibility of the proposed benchmark, we compare weighted ℓp-norm minimization against the three aforementioned norm minimization methods in sparse coding. Experimental results on image restoration applications, namely image inpainting and image compressive sensing recovery, demonstrate that the proposed scheme is feasible and outperforms many state-of-the-art methods.

In clinical applications of super-resolution ultrasound imaging it is often not possible to achieve a full reconstruction of the microvasculature within a limited measurement time. This makes the comparison of studies and of quantitative parameters of vascular morphology and perfusion difficult. Therefore, saturation models were proposed to predict adequate measurement times and estimate the degree of vessel reconstruction. Here we derive a statistical model for the microbubble counts in super-resolution voxels as a zero-inflated Poisson (ZIP) process. In this model, voxels either belong to vessels with probability Pv and count events at a Poisson rate, or they are empty and remain zero. Pv thus represents the vessel voxel density in the super-resolution image after infinite measurement time. For the parameters Pv and the Poisson rate we give Cramér-Rao lower bounds (CRLB) on the estimation variance and derive maximum likelihood estimators (MLE) in a novel closed-form solution. These can be calculated with knowledge of only the counts at the end of the acquisition time. The estimators are applied to preclinical and clinical data, and the MLE outperforms previously proposed alternative estimators. The estimated degree of reconstruction lies between 38% and 74% after less than 90 s. The vessel probability Pv ranged from 4% to 20%, and the rate parameter was estimated in the range of 0.5-1.3 microbubbles/voxel. For these parameter ranges, the CRLB gives standard deviations of less than 2%, which supports that the parameters can be estimated with good precision even for limited acquisition times.
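
As a rough illustration of how such estimates can be obtained from nothing but the per-voxel counts at the end of an acquisition, the following Python sketch fits a zero-inflated Poisson model by maximum likelihood. The function name zip_mle and the simulated data are assumptions introduced for this example; the closed form shown here (via the Lambert W function) is the standard ZIP maximum likelihood solution and is not necessarily identical to the estimator derived in the cited work.

```python
import numpy as np
from scipy.special import lambertw

def zip_mle(counts):
    """Hedged sketch: ML estimates (p_v, rate) of a zero-inflated Poisson model
    fitted to per-voxel microbubble counts at the end of the acquisition.
    Assumes counts is a 1-D array of non-negative integers (one entry per voxel)."""
    counts = np.asarray(counts)
    n = counts.size                      # total number of voxels
    n_pos = np.count_nonzero(counts)     # voxels with at least one detection
    if n_pos == 0:
        return 0.0, 0.0                  # degenerate case: nothing detected yet
    mean_pos = counts.sum() / n_pos      # mean count over non-empty voxels

    # The ML rate solves lam / (1 - exp(-lam)) = mean_pos (the mean of a
    # zero-truncated Poisson); this has a closed form via the Lambert W function.
    # (Requires mean_pos > 1, i.e. some voxels with more than one count.)
    lam = mean_pos + np.real(lambertw(-mean_pos * np.exp(-mean_pos)))

    # The vessel voxel probability p_v follows from the fraction of non-empty voxels.
    p_v = n_pos / (n * (1.0 - np.exp(-lam)))
    return p_v, lam

# Toy usage on simulated data (illustration only, not the paper's data):
rng = np.random.default_rng(0)
true_pv, true_rate = 0.1, 1.0
is_vessel = rng.random(100_000) < true_pv
counts = np.where(is_vessel, rng.poisson(true_rate, 100_000), 0)
print(zip_mle(counts))   # should be close to (0.1, 1.0)
```

Under this model, the fraction of vessel voxels that have already been detected (the degree of reconstruction) would be 1 - exp(-rate), which is how a saturation estimate can be read off the fitted parameters.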

Tracking the myotendinous junction (MTJ) in consecutive ultrasound images is crucial for assessing the mechanics and pathological conditions of the muscle-tendon unit. However, poor image quality and boundary ambiguity lead to a lack of reliable and efficient identification of the MTJ, restricting its application in motion analysis. In recent years, with the rapid development of deep learning, the region-based convolutional neural network (RCNN) has shown great potential for simultaneous object detection and instance segmentation in medical images. This paper proposes a region-adaptive network, called RAN, to adaptively localize the MTJ region and segment it in a single shot. Our model learns salient information about the MTJ with a composite architecture, in which a region-based multi-task learning network explores the region containing the MTJ, while a parallel end-to-end U-shaped path extracts the MTJ structure from the adaptively selected region to combat data imbalance and boundary ambiguity. On ultrasound images of the gastrocnemius, we show that RAN achieves superior segmentation performance compared to the state-of-the-art Mask RCNN method, with an average Dice score of 80.1%. Our method is promising for advancing muscle and tendon function examinations with ultrasound imaging.

Vascular tree disentanglement and vessel type classification are two crucial steps of the graph-based approach to retinal artery-vein (A/V) separation. Existing approaches treat them as two independent tasks and mostly rely on ad hoc rules (e.g., changes of vessel direction) and hand-crafted features (e.g., color, thickness) to handle them separately. However, we argue that the two tasks are highly correlated and should be handled jointly, since knowing the A/V type can unravel highly entangled vascular trees, which in turn helps to infer the types of connected vessels that are hard to classify based on appearance alone. Therefore, designing features and models for the two tasks in isolation often leads to a suboptimal A/V separation. In view of this, this paper proposes a multi-task Siamese network which learns the two tasks jointly and thus yields more robust deep features for accurate A/V separation. Specifically, we first introduce Convolution Along Vessel (CAV) to extract visual features by convolving a fundus image along vessel segments, and geometric features by tracking the directions of blood flow in vessels.
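
To make the idea of convolving along a vessel segment more concrete, here is a minimal, self-contained sketch. It is an illustrative approximation only, not the authors' CAV module: the function conv_along_vessel, its arguments, and the toy data are all assumptions introduced for this example.

```python
import numpy as np

def conv_along_vessel(image, centerline, kernel):
    """Illustrative sketch of a 'convolution along a vessel segment':
    sample a fundus image along a vessel centerline and convolve the
    resulting 1-D intensity profile.

    image:      2-D array (one channel of a fundus image)
    centerline: (N, 2) array of integer (row, col) points ordered along the vessel
    kernel:     1-D convolution kernel, e.g. a smoothing or edge filter
    """
    rows, cols = centerline[:, 0], centerline[:, 1]
    profile = image[rows, cols]                       # intensities along the vessel
    visual_feat = np.convolve(profile, kernel, mode="same")

    # Geometric features: unit direction vectors between consecutive points,
    # a simple proxy for the local blood-flow direction along the segment.
    diffs = np.diff(centerline.astype(float), axis=0)
    directions = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    return visual_feat, directions

# Toy usage on synthetic data (illustration only):
img = np.random.rand(64, 64)
line = np.stack([np.arange(10, 40), np.full(30, 32)], axis=1)  # a vertical segment
feat, dirs = conv_along_vessel(img, line, kernel=np.ones(5) / 5)
print(feat.shape, dirs.shape)   # (30,) and (29, 2)
```

The intuition this tries to capture is that sampling along the centerline turns a 2-D neighborhood operation into a 1-D one that follows the vessel, so the extracted features are naturally aligned with the local vessel direction.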

Article authors: Lawbean1207 (Collier Horne)