
Imaging systems that integrate multiple modalities exploit different contrast mechanisms and can therefore reveal complementary anatomical and functional information, an approach that has shown great potential in preclinical studies. A portable, easy-to-use imaging probe would further ease the transfer of such systems to clinical practice. Here, we present a tri-modal ultrasonic (US), photoacoustic (PA), and thermoacoustic (TA) imaging system with an excitation-reception collinear probe. The acoustic field, light field, and electric field of the probe were designed to be coaxial, enabling homogeneous illumination and high-sensitivity detection at the same position. US images provide detailed structural information, PA images delineate the morphology of blood vessels in tissue, and TA images reveal the dielectric properties of the tissue. Phantom and in vivo human finger experiments were performed to demonstrate the system's performance. The results show that the tri-modal imaging system with the proposed probe can detect small breast tumors with a radius of only 2.5 mm and visualize the anatomical structure of the finger in three dimensions. Our work confirms that a tri-modal imaging system equipped with a collinear probe can be applied in a variety of scenarios, laying a solid foundation for the application of tri-modal systems in clinical trials.

In myocardial perfusion imaging with dynamic positron emission tomography (PET), direct parametric reconstruction from the projection data allows accurate modeling of the Poisson noise in the projection domain, providing a more reliable estimate of the parametric images. In this study, we propose to incorporate a superior denoiser to efficiently suppress unfavorable noise propagation during the direct reconstruction.
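The appeal of direct reconstruction comes from modeling the Poisson counts in the projection domain. As an illustration of the kind of update this implies, here is a minimal MLEM-style sketch for a toy linear forward model; the matrix `A`, the voxel values, and the noiseless data are all illustrative assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model: 12 projection bins, 4 image voxels.
A = rng.uniform(0.1, 1.0, size=(12, 4))
x_true = np.array([2.0, 5.0, 1.0, 3.0])   # true voxel values (illustrative)
y = A @ x_true                            # noiseless projections for clarity;
                                          # real data would be Poisson counts

# MLEM: a multiplicative update that monotonically increases the
# Poisson log-likelihood of the projection data.
x = np.ones(4)                            # uniform initialization
sens = A.sum(axis=0)                      # sensitivity term, A^T 1
for _ in range(2000):
    ratio = y / np.maximum(A @ x, 1e-12)  # measured / expected counts
    x *= (A.T @ ratio) / sens

print(np.round(x, 3))                     # approaches x_true
```

Direct parametric methods replace the per-frame voxel values with a kinetic model (e.g., K1) inside the same Poisson likelihood, which is what makes the noise modeling carry through to the parametric images.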
The dictionary learning (DL) based sparse representation serves as a regularization term to constrain the intermediate K1 estimation. We rewrite the DL regularizer in a voxel-separable form to facilitate decoupling the DL-penalized curve fitting from the reconstruction of the dynamic frames. The nonlinear fitting is then solved by a damped Newton method with uniform initialization. Using simulated and patient 82Rb dynamic PET data, we study the performance of the proposed direct DL algorithm and quantitatively compare it with the indirect method (with and without post-filtering), direct reconstruction without regularization, and a quadratic-penalty regularized direct algorithm. The DL-regularized direct reconstruction achieves an improved noise-versus-bias trade-off in the reconstructed K1 images as well as superior recovery of reduced myocardial blood flow defects. The dictionary learned from a self-created 3D hollow-sphere image yields results comparable to those obtained with a dictionary learned from the corresponding MR image. Uniform initialization has been shown to converge to K1 estimates similar to those obtained by initializing with the indirect reconstruction. To summarize, we demonstrate the potential of the proposed DL-constrained direct parametric reconstruction for improving quantitative dynamic PET imaging.

Action segmentation is the task of predicting the action for each frame of a video. Because obtaining full framewise annotation of videos for action segmentation is expensive, weakly supervised approaches that can learn only from transcripts are appealing. In this paper, we propose a novel end-to-end approach to weakly supervised action segmentation based on a two-branch neural network. The two branches of our network predict two redundant but different representations for action segmentation, and we propose a novel mutual consistency (MuCon) loss that enforces the consistency of the two redundant representations.
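The mutual-consistency idea can be sketched with a toy loss: one branch predicts framewise class probabilities directly, the other predicts a segment-level transcript (labels plus lengths) that is expanded to a framewise sequence, and the loss penalizes disagreement between the two. This is a simplified illustration under assumed shapes, not the paper's exact MuCon formulation.

```python
import numpy as np

def expand_segments(labels, lengths):
    """Expand a (label, length) segment representation into framewise labels."""
    return np.repeat(labels, lengths)

def mutual_consistency(frame_probs, seg_labels, seg_lengths):
    """Toy MuCon-style loss: cross-entropy between the framewise branch's
    probabilities and the framewise sequence implied by the segment branch."""
    target = expand_segments(seg_labels, seg_lengths)   # one label per frame
    eps = 1e-12                                         # numerical safety
    return -np.mean(np.log(frame_probs[np.arange(len(target)), target] + eps))

# Framewise branch: 6 frames, 3 classes, confident and consistent with the
# segment branch's transcript (class 0 x2, class 1 x3, class 2 x1).
probs = np.full((6, 3), 0.05)
framewise = [0, 0, 1, 1, 1, 2]
probs[np.arange(6), framewise] = 0.9

loss = mutual_consistency(probs, np.array([0, 1, 2]), np.array([2, 3, 1]))
print(loss)   # small, since the two representations agree
```

When the two branches disagree, the loss grows, which is the signal that lets each branch supervise the other without framewise ground truth.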
Using the MuCon loss together with a loss for transcript prediction, our proposed approach achieves the accuracy of state-of-the-art approaches while being 14 times faster to train and 20 times faster during inference. The MuCon loss proves beneficial even in the fully supervised setting.

Recent works on plug-and-play image restoration have shown that a denoiser can implicitly serve as the image prior in model-based methods for solving many inverse problems. This property offers considerable advantages for plug-and-play image restoration when the denoiser is discriminatively learned via a deep convolutional neural network (CNN) with large modeling capacity. However, while deeper and larger CNN models are rapidly gaining popularity, the performance of existing plug-and-play image restoration is hindered by the lack of a suitable denoiser prior. To push the limits of plug-and-play image restoration, we establish a benchmark deep denoiser prior by training a highly flexible and effective CNN denoiser. We then plug the deep denoiser prior as a modular part into a half-quadratic-splitting-based iterative algorithm to solve various image restoration problems, and we provide a thorough analysis of parameter settings, intermediate results, and empirical convergence to better explain the working mechanism. Experimental results on three representative image restoration tasks, including deblurring, super-resolution, and demosaicing, demonstrate that the proposed plug-and-play image restoration with a deep denoiser prior not only significantly outperforms other state-of-the-art model-based methods but also achieves competitive or even superior performance against state-of-the-art learning-based methods. The source code is available at https://github.com/cszn/DPIR.

This paper tackles the problem of training a deep convolutional neural network with both low-bitwidth weights and activations.
Optimizing a low-precision network is challenging because the quantizer is non-differentiable, which can result in substantial accuracy loss. To address this, we propose three practical approaches to improve network training: (i) progressive quantization, (ii) stochastic precision, and (iii) joint knowledge distillation. First, for progressive quantization, we propose two schemes for progressively finding good local minima. Specifically, we propose to first optimize a network with quantized weights and only subsequently quantize the activations, in contrast to traditional methods that optimize both simultaneously. We further propose a second scheme that gradually decreases the bit-width from high precision to low precision during training. Second, to alleviate the excessive training burden of the multi-round training stages, we propose a one-stage stochastic precision strategy that randomly samples and quantizes sub-networks while keeping the other parts at full precision.
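The gradual bit-width decrease can be sketched with a plain uniform quantizer: the same weights are re-quantized at progressively fewer bits as training proceeds. The quantizer, the value range, and the schedule below are illustrative assumptions, and the straight-through gradient estimator that actual low-precision training would need is omitted.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Uniformly quantize values clipped to [-1, 1] onto a signed grid
    with 2**(bits-1) - 1 positive levels (e.g. scale 127 for 8 bits)."""
    scale = 2 ** (bits - 1) - 1
    return np.round(np.clip(w, -1.0, 1.0) * scale) / scale

w = np.linspace(-1.0, 1.0, 9)   # stand-in for a layer's weights

# Progressive schedule: start near full precision, end at the target bit-width.
for bits in (8, 4, 2):
    wq = quantize_uniform(w, bits)
    print(bits, len(np.unique(wq)))   # representable levels shrink with bits
```

Lowering the bit-width in steps keeps each stage's quantization error small relative to the previous solution, which is the intuition behind treating the high-precision stages as initializations for the low-precision ones.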

Article authors: Kinneytran3626 (Meier Bilde)