Maldonadopotts0423

From Iurium Wiki

Revision as of 14:44, 19 November 2024 by Maldonadopotts0423 (talk | contribs) (New page created with the text „In this paper, we propose a novel framework for time delay estimation in ultrasound elastography. In the presence of high acquisition noise, the state-of-t…“)

In this paper, we propose a novel framework for time delay estimation in ultrasound elastography. In the presence of high acquisition noise, state-of-the-art motion tracking techniques suffer from inaccurate estimation of the displacement field. To resolve this issue, instead of one, we collect several ultrasound Radio-Frequency (RF) frames from both pre- and post-deformation scan planes to better investigate the data statistics. We formulate a non-linear cost function incorporating all observation frames from both levels of deformation. Besides data similarity, we impose axial and lateral continuity to exploit the prior information of spatial coherence. Most importantly, we consider the continuity among the displacement estimates obtained from different observation RF frames. This novel continuity constraint is the main contributor to the robustness of the proposed technique against high noise power. We efficiently optimize the aforementioned cost function to derive a sparse system of linear equations in which we solve for millions of variables, estimating the displacements of all samples of all incorporated RF frames simultaneously. We call the proposed algorithm GLobal Ultrasound Elastography using multiple observations (mGLUE). Our primary validation of mGLUE against soft- and hard-inclusion simulation phantoms shows that mGLUE is capable of obtaining a high-quality strain map while dealing with noisy ultrasound data. In the case of the soft inclusion phantom, the Signal-to-Noise Ratio (SNR) and Contrast-to-Noise Ratio (CNR) improved by 75.37% and 57.08%, respectively. In addition, SNR and CNR improvements of 32.19% and 38.57% were observed for the hard inclusion case.

Breast-conserving surgery, also known as lumpectomy, is an early-stage breast cancer treatment that aims to spare as much healthy breast tissue as possible. A risk associated with lumpectomy is the presence of cancer-positive margins post-operation.
Surgical navigation has been shown to reduce cancer-positive margins but requires manual segmentation of the tumor intraoperatively. In this paper, we propose an end-to-end solution for automatic contouring of breast tumors from intraoperative ultrasound images using two convolutional neural network architectures, the U-Net and residual U-Net. The networks are trained on annotated intraoperative breast ultrasound images and evaluated on the quality of the predicted segmentations. This work brings us one step closer to providing surgeons with an automated surgical navigation system that helps reduce cancer-positive margins during lumpectomy.

This work proposes an automated algorithm for classifying retinal fundus images as cytomegalovirus retinitis (CMVR), normal, or other diseases. The adaptive wavelet packet transform (AWPT) was used to extract features. The retinal fundus images were transformed using a 4-level Haar wavelet packet (WP) transform. The first two best trees were obtained using Shannon and log-energy entropy, while the third best tree was obtained using the Daubechies-4 mother wavelet with Shannon entropy. The coefficients of each node were extracted: the feature value of each leaf node of the best tree was the average of the WP coefficients in that node, while those of non-leaf nodes were set to zero. The feature vector was classified using an artificial neural network (ANN). The effectiveness of the algorithm was evaluated using ten-fold cross-validation over a dataset consisting of 1,011 images (310 CMVR, 240 normal, and 461 other diseases). In testing on a dataset consisting of 101 images (31 CMVR, 24 normal, and 46 other diseases), the AWPT-based ANN had sensitivities of 90.32%, 83.33%, and 91.30% and specificities of 95.71%, 94.81%, and 92.73%.
In conclusion, the proposed algorithm has promising potential in CMVR screening, for which the AWPT-based ANN is applicable with scarce data and limited resources.

Diabetic Retinopathy (DR), a complication that can lead to vision loss, is generally graded according to the combination of various structural factors in fundus photography, such as the number of microaneurysms, hemorrhages, vascular abnormalities, etc. To this end, Convolutional Neural Networks (CNNs), with their impressive representational power, have been utilized exhaustively to address this problem. However, while existing multi-stream networks are costly, conventional CNNs do not consider multiple levels of semantic context and thus suffer from the loss of spatial correlations between the aforementioned DR-related signs. Therefore, this paper proposes a Densely Reversed Attention based CNN (DRAN) to leverage the learnable integration of channel-wise attention at multi-level features in a pretrained network, explicitly involving spatial representations of important DR-related factors. Consequently, the proposed approach attains a quadratic weighted kappa of 85.6% on the Kaggle DR detection dataset, which is competitive with the state of the art.

In this work, we demonstrate a novel approach to assessing the risk of Diabetic Peripheral Neuropathy (DPN) using only the retinal images of the patients. Our methodology consists of convolutional neural network feature extraction; dimensionality reduction and feature selection with random projections; combination of image features into case-level representations; and the training and testing of a support vector machine classifier. Using clinical diagnosis as the ground truth for DPN, we achieve an overall accuracy of 89% on a held-out test set, with sensitivity reaching 78% and specificity reaching 95%.

Fundus images are commonly used to aid the diagnosis of ophthalmic diseases. A high-resolution (HR) image is valuable for providing anatomic information on the condition of the eye.
Recently, image super-resolution (SR) through learned models has been shown to be an economical yet effective way to satisfy the high demands of clinical practice. However, the reported methods ignore the mutual dependencies of low- and high-resolution images and do not fully exploit the dependencies between channels. To tackle these drawbacks, we propose a novel network for fundus image SR, named the Fundus Cascaded Channel-wise Attention Network (FC-CAN). The proposed FC-CAN cascades a channel attention module and a dense module jointly to exploit the semantic interdependencies of both frequency and spatial information across channels. The channel attention module rescales channel maps in the spatial domain, while the dense module preserves the HR components through up- and down-sampling operations. Experimental results demonstrate the superiority of our network in comparison with six existing methods.
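The channel attention module described in the FC-CAN abstract rescales each channel map by a learned per-channel weight. The abstract gives no implementation details, so the following is a minimal squeeze-and-excitation-style sketch in NumPy; the weights `w1`, `w2` and the reduction ratio `r` are illustrative placeholders, not the authors' architecture.

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Rescale each channel of a (C, H, W) feature map: global average
    pooling per channel, a two-layer bottleneck MLP, a sigmoid gate,
    then broadcasting the gate back over the spatial dimensions."""
    # Squeeze: one scalar descriptor per channel.
    z = feature_map.mean(axis=(1, 2))            # shape (C,)
    # Excite: bottleneck MLP with ReLU, then a sigmoid gate in (0, 1).
    h = np.maximum(w1 @ z, 0.0)                  # shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))          # shape (C,)
    # Rescale: broadcast the per-channel gates over H and W.
    return feature_map * s[:, None, None]

rng = np.random.default_rng(0)
c, r = 8, 2
fmap = rng.standard_normal((c, 16, 16))
w1 = rng.standard_normal((c // r, c)) * 0.1      # placeholder weights
w2 = rng.standard_normal((c, c // r)) * 0.1
out = channel_attention(fmap, w1, w2)
print(out.shape)  # (8, 16, 16)
```

Because the gate is a sigmoid, every channel is attenuated by a factor in (0, 1); in a trained network these gates learn to emphasize informative channels and suppress the rest.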

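The AWPT feature extraction described earlier averages the wavelet packet coefficients of each leaf node. As a simplified, hypothetical illustration, here is a one-dimensional full Haar wavelet packet decomposition in plain Python; the actual method operates on 2-D images and prunes the tree with entropy-based best-basis selection, which this sketch omits.

```python
import math

def haar_step(signal):
    """One Haar analysis step: orthonormal averages (low-pass) and
    differences (high-pass) over consecutive sample pairs."""
    s = 1.0 / math.sqrt(2.0)
    low = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    high = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    return low, high

def wp_leaves(signal, level):
    """Full wavelet packet tree: unlike the plain wavelet transform,
    both the low- and high-pass branches are split at every level.
    Returns the coefficient lists of the 2**level leaf nodes."""
    nodes = [signal]
    for _ in range(level):
        nxt = []
        for node in nodes:
            low, high = haar_step(node)
            nxt.extend([low, high])
        nodes = nxt
    return nodes

def wp_features(signal, level):
    """Feature vector: the average coefficient of each leaf node, as
    the abstract describes for the leaves of the best tree."""
    return [sum(node) / len(node) for node in wp_leaves(signal, level)]

row = [float(i % 7) for i in range(64)]   # stand-in for one image row
feats = wp_features(row, level=4)
print(len(feats))  # 16 leaf nodes at level 4
```

A 4-level decomposition of a 64-sample signal yields 2^4 = 16 leaf nodes of 4 coefficients each, so the feature vector here has 16 entries.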
Article authors: Maldonadopotts0423 (Burke Thomas)