Kokholmwiberg4114

1 mm, demonstrating good performance in these metrics when compared to literature results. Our preliminary results suggest that our deep learning-based method can be effective in automating RV segmentation.

Hyperspectral imaging (HSI) is a promising optical imaging technique for cancer detection. However, quantitative methods are needed to exploit the rich spectral information and subtle spectral variation in such images. In this study, we explore the feasibility of using wavelet-based features from in vivo hyperspectral images for head and neck cancer detection. Hyperspectral reflectance data were collected from 12 mice bearing head and neck cancer. The concatenation of the outputs of a 5-level wavelet decomposition of the hyperspectral images was used as the feature for tumor discrimination, and a support vector machine (SVM) was used as the classifier. Seven types of mother wavelets were tested to select the one with the best performance. For comparison, classifications were also carried out with the raw reflectance spectra and with the 1-level and 2-level wavelet decomposition outputs. Our results show that the proposed wavelet-based feature yields better classification accuracy, and that the type and order of the mother wavelet affect the classification results. The wavelet-based classification method provides a new approach for HSI detection of head and neck cancer in an animal model.
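As an illustration of the pipeline just described, the sketch below builds the concatenated wavelet feature with PyWavelets and classifies it with an SVM. The random stand-in data, the db4 mother wavelet, and all parameters are assumptions for illustration, not the study's actual choices.

```python
import numpy as np
import pywt
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def wavelet_feature(spectrum, wavelet="db4", level=5):
    """Concatenate all coefficient arrays of a multi-level 1-D wavelet
    decomposition of one reflectance spectrum into a feature vector."""
    return np.concatenate(pywt.wavedec(spectrum, wavelet, level=level))

# Hypothetical data: one reflectance spectrum per pixel, plus a
# binary tumor/normal label for each pixel.
rng = np.random.default_rng(0)
spectra = rng.random((500, 256))
labels = rng.integers(0, 2, size=500)

features = np.array([wavelet_feature(s) for s in spectra])
X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Comparing the seven candidate mother wavelets then amounts to repeating the fit while swapping the `wavelet` argument (e.g. "haar", "sym4", "coif1").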
Kidney biopsies are currently performed using preoperative imaging to identify the lesion of interest and intraoperative imaging to guide the biopsy needle to the tissue of interest. Often these are not the same modality, forcing the physician to perform a mental cross-modality fusion of the preoperative and intraoperative scans, which limits the accuracy and reproducibility of the biopsy procedure. In this study, we developed an augmented reality system that displays holographic representations of lesions superimposed on a phantom. The system integrates preoperative CT scans with intraoperative ultrasound scans to better determine the lesion's real-time location. An automated deformable registration algorithm was used to increase the accuracy of the holographic lesion locations, and a magnetic tracking system was developed to provide guidance for the biopsy procedure. Our method achieved a targeting accuracy of 2.9 ± 1.5 mm in a renal phantom study.

Pelvic trauma surgical procedures rely heavily on 2D fluoroscopy views for navigation in complex bone corridors. This "fluoro-hunting" paradigm results in extended radiation exposure and possibly suboptimal guidewire placement, since visualization of the fracture site is limited by overlapping anatomy in 2D fluoroscopy. A novel computer vision-based navigation system for freehand guidewire insertion is proposed. The navigation framework is compatible with the rapid workflow of trauma surgery and bridges the gap between intraoperative fluoroscopy and preoperative CT images. The system uses a drill-mounted camera to detect and track the poses of simple multimodality (optical/radiographic) markers for registration of the drill axis to fluoroscopy and, in turn, to CT. Surgical navigation is achieved with a real-time display of the drill axis position on fluoroscopy views and, optionally, in 3D on the preoperative CT. The camera was corrected for lens distortion effects and calibrated for 3D pose estimation. Custom marker jigs were constructed to calibrate the drill axis and tooltip with respect to the camera frame. A testing platform for evaluation of the navigation system was developed, including a robotic arm for precise, repeatable placement of the drill. Experiments were conducted for hand-eye calibration between the drill-mounted camera and the robot using the Park and Martin solver. Checkerboard calibration experiments demonstrated subpixel accuracy [-0.01 ± 0.23 px] for camera distortion correction. The drill axis was calibrated using a cylindrical model and demonstrated sub-mm accuracy [0.14 ± 0.70 mm] and sub-degree angular deviation.
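The calibration steps above map onto standard OpenCV building blocks, which include the Park and Martin solver as one hand-eye option. The sketch below is a generic illustration with assumed checkerboard geometry and pose inputs, not the authors' implementation.

```python
import cv2
import numpy as np

def calibrate_camera(gray_images, pattern=(9, 6), square_mm=10.0):
    """Estimate intrinsics and lens-distortion coefficients from
    checkerboard views; pattern and square size are assumed values."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
    objp *= square_mm
    obj_pts, img_pts, size = [], [], None
    for gray in gray_images:
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, size, None, None)
    return rms, K, dist

def hand_eye_park(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Camera-to-gripper transform via the Park and Martin method.
    The pose lists come from the robot controller and from marker
    pose estimation in the camera frame, respectively."""
    return cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base, R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_PARK)
```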
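The kidney-biopsy system two paragraphs above likewise hinges on an automated deformable registration between preoperative CT and intraoperative ultrasound. The abstract does not name the algorithm; a generic B-spline registration with a mutual-information metric, which tolerates cross-modality intensity differences, might be sketched with SimpleITK as follows.

```python
import SimpleITK as sitk

def deformable_register(fixed, moving, mesh_size=(8, 8, 8)):
    """Generic B-spline deformable registration driven by Mattes mutual
    information; returns the fitted transform. The mesh size and
    optimizer settings are illustrative, not the paper's."""
    tx = sitk.BSplineTransformInitializer(fixed, list(mesh_size))
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
    reg.SetInitialTransform(tx, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(fixed, moving)

# The moving image can then be warped into the fixed frame with
# sitk.Resample(moving, fixed, transform).
```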
Segmentation of the uterine cavity and placenta in fetal magnetic resonance (MR) imaging is useful for detecting abnormalities that affect maternal and fetal health. In this study, we used a fully convolutional neural network for 3D segmentation of the uterine cavity and placenta, with minimal operator interaction incorporated for training and testing the network. The user interaction guided the network to localize the placenta more accurately. We trained the network with 70 training and 10 validation MRI cases and evaluated its segmentation performance on 20 cases. The average Dice similarity coefficient was 92% for the uterine cavity and 82% for the placenta. The algorithm estimated the volumes of the uterine cavity and placenta with average errors of 2% and 9%, respectively. These results demonstrate that deep learning-based segmentation and volume estimation are feasible and can potentially be useful for clinical applications of human placental imaging.

Computer-assisted image segmentation techniques could help clinicians perform border delineation faster and with lower inter-observer variability. Convolutional neural networks (CNNs) are now widely used for automatic image segmentation. In this study, we incorporated observer inputs to supervise CNNs and improve their segmentation accuracy, adding a set of sparse surface points as an additional network input. We tested the technique by applying minimal interactions to supervise networks segmenting the prostate on magnetic resonance images. Using U-Net and a new U-Net-based architecture (dual-input path [DIP] U-Net), we showed that our supervising technique significantly increases the segmentation accuracy of both networks compared with fully automatic segmentation using U-Net, and that DIP U-Net outperforms U-Net for supervised segmentation. We also compared our results to the measured inter-expert observer difference in manual segmentation.
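Both segmentation studies above report the Dice similarity coefficient, and the placenta study also reports volume-estimation error. For reference, a minimal computation of the two metrics on binary masks (hypothetical inputs of any matching shape) is:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

def volume_error(pred, truth):
    """Relative volume-estimation error; the physical voxel volume
    cancels in the ratio, so raw voxel counts suffice."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return abs(int(pred.sum()) - int(truth.sum())) / truth.sum()
```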
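The prostate abstract does not detail the DIP U-Net architecture; one plausible reading of a "dual-input path" is a second encoder branch that takes the sparse surface points rasterized into a binary map. The PyTorch stem below sketches only that idea and is not the authors' network.

```python
import torch
import torch.nn as nn

class DualInputStem(nn.Module):
    """Illustrative front end: the MR image and a sparse surface-point
    mask are encoded in separate paths and fused; a standard U-Net
    encoder-decoder (omitted here) would consume the fused features."""

    def __init__(self, features=16):
        super().__init__()
        self.image_path = nn.Sequential(
            nn.Conv2d(1, features, 3, padding=1), nn.ReLU(inplace=True))
        self.point_path = nn.Sequential(
            nn.Conv2d(1, features, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * features, features, 1)

    def forward(self, image, point_mask):
        # point_mask holds 1 at each observer-provided surface point,
        # 0 elsewhere; both inputs are (N, 1, H, W) tensors.
        return self.fuse(torch.cat(
            [self.image_path(image), self.point_path(point_mask)], dim=1))
```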
Article authors: Kokholmwiberg4114 (Qvist Krause)