In low-statistics PET imaging, the positive bias in regions of low activity is a pressing issue. To overcome this problem, algorithms without the built-in non-negativity constraint may be used: they allow negative voxels in the image, which reduces, or even cancels, the bias. However, such algorithms increase the variance and are difficult to interpret, since the resulting images contain negative activities, which have no physical meaning for a radioactive concentration. In this paper, a post-processing approach is proposed to remove these negative values while preserving the local mean activities. Its key idea is to transfer the value of each voxel with negative activity to its direct neighbors, under the constraint of preserving the local means of the image. The approach is formalized as a linear programming problem with a specific symmetric structure, which makes it solvable very efficiently by a dual-simplex-like iterative algorithm. The relevance of the approach is assessed on simulated and on experimental data. Acquired data from an yttrium-90 phantom show that, on images produced by a non-constrained algorithm, the post-processing step yields a much lower variance in the cold area, at the price of a slightly increased bias. Moreover, when compared with the classical OSEM algorithm, the images are improved both in terms of bias and of variance.

Convolutional neural networks (CNNs) have had unprecedented success in medical imaging and, in particular, in medical image segmentation. However, even though segmentation results are now closer than ever to the inter-expert variability, CNNs are not immune to producing anatomically inaccurate segmentations, even when built upon a shape prior. In this paper, we present a framework for producing cardiac image segmentation maps that are guaranteed to respect pre-defined anatomical criteria while remaining within the inter-expert variability. The idea behind our method is to take a well-trained CNN, have it process cardiac images, identify the anatomically implausible results, and warp these results toward the closest anatomically valid cardiac shape. This warping procedure is carried out with a constrained variational autoencoder (cVAE) trained to learn a representation of valid cardiac shapes through a smooth, yet constrained, latent space. With this cVAE, we can project any implausible shape into the cardiac latent space and steer it toward the closest correct shape. We tested our framework on short-axis MRI as well as apical two- and four-chamber view ultrasound images, two modalities for which cardiac shapes are drastically different. With our method, CNNs can now produce results that are both within the inter-expert variability and always anatomically plausible, without having to rely on a shape prior.
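As a rough illustration of the redistribution idea in the PET abstract above, the sketch below casts the correction of a single negative voxel as a small linear program solved with SciPy's generic LP routine. The 1-D toy image, the choice of neighborhood, and the even-spreading objective are all invented for the example; the paper's actual symmetric formulation and its dual-simplex-like solver are not reproduced here.

```python
# Toy sketch (not the authors' solver): zero one negative voxel and subtract
# its magnitude from its direct neighbours, keeping the local sum fixed and
# spreading the correction as evenly as the non-negativity constraints allow.
import numpy as np
from scipy.optimize import linprog

x = np.array([4.0, 3.0, -2.0, 5.0, 1.0])  # toy activities, voxel 2 is negative
i = 2                                      # index of the negative voxel
nbrs = [1, 3]                              # its direct neighbours in 1-D
deficit = -x[i]                            # mass to remove from the neighbours

# Variables: w_j (amount taken from neighbour j) and eps (their maximum).
# minimise eps  s.t.  w_j <= eps,  sum_j w_j = deficit,  0 <= w_j <= x_j.
k = len(nbrs)
c = np.r_[np.zeros(k), 1.0]                       # objective touches eps only
A_ub = np.c_[np.eye(k), -np.ones(k)]              # w_j - eps <= 0
b_ub = np.zeros(k)
A_eq = np.r_[np.ones(k), 0.0][None, :]            # sum_j w_j = deficit
b_eq = [deficit]
bounds = [(0, x[j]) for j in nbrs] + [(0, None)]  # keep neighbours >= 0

res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds, method="highs")
y = x.copy()
y[i] = 0.0
y[nbrs] -= res.x[:k]
print(y, np.isclose(y.sum(), x.sum()))  # local (here global) sum preserved
```

On this toy image the solution removes one unit of activity from each neighbor, giving [4, 2, 0, 4, 1] with the total activity of 11 unchanged.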
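The latent-space warping step of the cardiac framework can be sketched as follows. The tiny untrained autoencoder, the `is_plausible` placeholder, and the bank of "valid" latent codes are all stand-ins for the trained cVAE and the paper's anatomical criteria; only the project-then-steer mechanics are illustrated.

```python
# Minimal PyTorch sketch of the warping step only: encode an implausible
# mask, steer its latent code toward the nearest code from a bank of
# known-valid shapes, and decode the first plausible shape on that path.
import torch
import torch.nn as nn

D, Z = 32 * 32, 16  # flattened mask size and latent dimension (made up)
enc = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, Z))
dec = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, D), nn.Sigmoid())

def is_plausible(mask: torch.Tensor) -> bool:
    # Placeholder for the paper's anatomical criteria
    # (holes, disconnected parts, implausible thickness, ...).
    return False

valid_bank = torch.randn(100, Z)  # stand-in latent codes of valid shapes

def warp_to_valid(mask: torch.Tensor, steps: int = 10) -> torch.Tensor:
    z = enc(mask.flatten())
    target = valid_bank[torch.cdist(z[None], valid_bank).argmin()]
    for t in torch.linspace(0, 1, steps):     # walk along the latent segment
        cand = dec((1 - t) * z + t * target)
        if is_plausible(cand.view(32, 32)):
            return cand.view(32, 32)          # first plausible shape found
    return dec(target).view(32, 32)           # fall back to the valid anchor

fixed = warp_to_valid(torch.rand(32, 32))
print(fixed.shape)
```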
Fast and automated image quality assessment (IQA) of diffusion MR images is crucial for making timely rescan decisions. However, learning a model for this task is challenging, as annotated data are limited and the annotation labels might not always be correct. As a remedy, we introduce in this paper an automatic IQA method for pediatric diffusion MR images based on hierarchical non-local residual networks. Our IQA is performed in three sequential stages: 1) slice-wise IQA, where a non-local residual network is first pre-trained to annotate each slice with an initial quality rating (pass/questionable/fail), which is subsequently refined via iterative semi-supervised learning and slice self-training; 2) volume-wise IQA, which agglomerates the features extracted from the slices of a volume and uses a non-local network to annotate the quality rating of each volume via iterative volume self-training; and 3) subject-wise IQA, which ensembles the volume-wise results to determine the overall image quality of a subject. Experimental results demonstrate that our method, trained using only samples of modest size, generalizes well and is capable of conducting rapid hierarchical IQA with near-perfect accuracy.

In tomographic imaging, anatomical structures are reconstructed by applying a pseudo-inverse forward model to the acquired signals. The geometric information in this process usually depends only on the system setting, i.e., the scanner position or readout direction. Patient motion therefore corrupts the geometric alignment in the reconstruction process, resulting in motion artifacts. We propose an appearance learning approach that recognizes the structures of rigid motion independently of the scanned object. To this end, we train a siamese triplet network in a multi-task learning setup to predict, from the reconstructed volume, the reprojection error (RPE) for the complete acquisition as well as an approximate distribution of the RPE over the single views. The RPE measures the motion-induced geometric deviations independently of the object, based on virtual marker positions that are available during training. We train our network on 27 patients, using a 21-4-2 split for training, validation, and testing. On average, we achieve a residual mean RPE of 0.013 mm with an inter-patient standard deviation of 0.022 mm, which is twice the accuracy of previously published results. In a motion estimation benchmark, the proposed approach achieves superior results compared with two state-of-the-art measures in nine out of twelve experiments. The clinical applicability of the proposed method is demonstrated on a motion-affected clinical dataset.

In many medical imaging and classical computer vision tasks, the Dice score and the Jaccard index are used to evaluate segmentation performance. Despite the existence and great empirical success of metric-sensitive losses, i.e., relaxations of these metrics such as soft Dice, soft Jaccard, and Lovász-Softmax, many researchers still train segmentation CNNs with per-pixel losses such as (weighted) cross-entropy, so the target metric is in many cases not directly optimized. We investigate, from a theoretical perspective, the relations within the group of metric-sensitive loss functions, and we question the existence of an optimal weighting scheme for weighted cross-entropy to optimize the Dice score and the Jaccard index at test time. We find that the Dice score and the Jaccard index approximate each other both relatively and absolutely, but that no such approximation holds for a weighted Hamming similarity. For the Tversky loss, the approximation gets monotonically worse as one deviates from the trivial weight setting where soft Tversky equals soft Dice.
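The three-stage aggregation of the IQA pipeline (first abstract above) can be summarized schematically. The non-local networks, the semi-supervised refinement, and the self-training loops are out of scope here, and the thresholds and rule shapes below are invented for illustration.

```python
# Schematic of slice -> volume -> subject aggregation with made-up rules:
# a volume's rating is derived from its slice ratings, and the subject-level
# decision ensembles the volume-wise results.
from collections import Counter

def rate_volume(slice_labels):
    # Stand-in for the volume-wise network: fail if too many slices fail,
    # questionable in the grey zone (thresholds are invented).
    frac_fail = slice_labels.count("fail") / len(slice_labels)
    if frac_fail > 0.2:
        return "fail"
    return "questionable" if frac_fail > 0.05 else "pass"

def rate_subject(volume_labels):
    # Stage 3: ensemble the volume-wise results by majority vote.
    return Counter(volume_labels).most_common(1)[0][0]

volumes = [["pass"] * 18 + ["fail"] * 2,   # volume 1: slice-wise ratings
           ["pass"] * 20,                  # volume 2
           ["pass"] * 19 + ["fail"] * 1]   # volume 3
print(rate_subject([rate_volume(v) for v in volumes]))  # -> "pass"
```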
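For the motion-artifact abstract, the reprojection error that the network learns to predict can be written out directly from virtual marker positions: project the 3-D markers with an ideal and a motion-corrupted projection matrix and average the resulting 2-D displacement. The toy pinhole-like geometry below is made up for illustration.

```python
# Mean reprojection error (RPE) for a single view, computed from virtual
# marker positions under an invented projection geometry.
import numpy as np

rng = np.random.default_rng(0)
markers = np.c_[rng.uniform(-50, 50, (20, 3)), np.ones(20)]  # homogeneous 3-D markers

def project(P, X):
    u = X @ P.T                  # homogeneous detector coordinates, shape (N, 3)
    return u[:, :2] / u[:, 2:3]  # perspective divide -> 2-D positions

P_ideal = np.array([[1.0, 0.0, 0.00, 0.0],   # toy pinhole-like view geometry
                    [0.0, 1.0, 0.00, 0.0],
                    [0.0, 0.0, 0.01, 1.0]])
P_moved = P_ideal + 1e-3 * rng.normal(size=(3, 4))  # geometry perturbed by motion

rpe = np.linalg.norm(project(P_ideal, markers) - project(P_moved, markers),
                     axis=1).mean()  # mean 2-D marker displacement for this view
print(f"mean RPE for this view: {rpe:.4f} detector units")
```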
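Finally, the metric-sensitive relaxations discussed in the last abstract are easy to state in code. The sketch below gives soft Dice and soft Tversky on probability maps; with alpha = beta = 0.5, soft Tversky reduces to soft Dice, which is the trivial weight setting the abstract refers to.

```python
# Differentiable relaxations of Dice and Tversky on probability maps.
import torch

def soft_dice_loss(p: torch.Tensor, g: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # p: predicted foreground probabilities, g: binary ground truth, same shape.
    inter = (p * g).sum()
    return 1 - (2 * inter + eps) / (p.sum() + g.sum() + eps)

def soft_tversky_loss(p, g, alpha=0.5, beta=0.5, eps=1e-6):
    tp = (p * g).sum()        # soft true positives
    fp = (p * (1 - g)).sum()  # soft false positives
    fn = ((1 - p) * g).sum()  # soft false negatives
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

p = torch.rand(1, 128, 128, requires_grad=True)
g = (torch.rand(1, 128, 128) > 0.5).float()
print(soft_dice_loss(p, g), soft_tversky_loss(p, g))  # equal at alpha = beta = 0.5
```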