Tennantlyhne2520

A fully connected layer was trained on the intermediate feature representation to classify instrument-tissue interaction. RESULTS The perception study revealed that acoustic feedback has the potential to improve perception during MIS and to serve as a basis for further automated analysis. The proposed classification pipeline yielded excellent performance for four types of instrument-tissue interaction (muscle, fascia, liver and fatty tissue) and achieved top-1 accuracies of up to 89.9%. Moreover, our model is able to distinguish electrosurgical operation modes with an overall classification accuracy of 86.40%. CONCLUSION Our proof-of-principle indicates great application potential for guidance systems in MIS, such as controlled tissue resection. Supported by a pilot perception study with surgeons, we believe that utilizing audio signals as an additional information channel has great potential to improve surgical performance and to partly compensate for the loss of haptic feedback.

PURPOSE For laparoscopic ablation to be successful, accurate placement of the needle within the tumor is essential. Laparoscopic ultrasound is an essential tool to guide needle placement, but the ultrasound image is generally presented separately from the laparoscopic image. We aim to evaluate an augmented reality (AR) system which combines the laparoscopic ultrasound image, the laparoscope video, and the needle trajectory in a unified view. METHODS We created a tissue phantom made of gelatin. Artificial tumors, represented by plastic spheres, were secured in the gelatin at various depths. The top point of the sphere surface was our target, and its 3D coordinates were known. The participants were invited to perform needle placement with and without AR guidance. Once the participant reported that the needle tip had reached the target, the needle tip location was recorded and compared to the ground truth location of the target; the difference was the target localization error (TLE). The needle placement time was also recorded. We further tested the technical feasibility of the AR system in vivo on a 40-kg swine. RESULTS The AR guidance system was evaluated by two experienced surgeons and two surgical fellows. The users performed needle placement on a total of 26 targets, 13 with AR and 13 without (i.e., the conventional approach). The average TLE for the conventional and AR approaches was 14.9 mm and 11.1 mm, respectively. The average needle placement time for the conventional and AR approaches was 59.4 s and 22.9 s, respectively. For the animal study, the ultrasound image and needle trajectory were successfully fused with the laparoscopic video in real time and presented on a single screen for the surgeons. CONCLUSION By providing the projected needle trajectory, we believe our AR system can assist the surgeon with more efficient and precise needle placement.
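For the acoustic instrument-tissue classification described in the first study above, a minimal sketch of what such a fully connected classification head could look like is given below (PyTorch). The feature dimension, hidden size, and dropout rate are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code): a fully connected head mapping an
# intermediate audio feature vector to one of four instrument-tissue classes.
# Feature dimension (512) and hidden size (128) are illustrative assumptions.
import torch
import torch.nn as nn

TISSUE_CLASSES = ["muscle", "fascia", "liver", "fatty tissue"]

class InteractionHead(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 128, n_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, n_classes),  # logits over tissue types
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

# Example: classify a batch of 8 pre-extracted feature vectors.
head = InteractionHead()
logits = head(torch.randn(8, 512))
predictions = [TISSUE_CLASSES[i] for i in logits.argmax(dim=1).tolist()]
```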
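For the AR needle-placement evaluation, the target localization error can be computed as the Euclidean distance between the recorded needle-tip position and the known 3D target position. The sketch below illustrates this under assumed array shapes and units (millimetres); the aggregation over trials is likewise an assumption for illustration.

```python
# Minimal sketch of the evaluation metric: per-trial TLE as the Euclidean
# distance between recorded needle-tip positions and ground-truth targets.
import numpy as np

def target_localization_error(needle_tip_mm: np.ndarray, target_mm: np.ndarray) -> np.ndarray:
    """Per-trial TLE in millimetres for (N, 3) arrays of 3D coordinates."""
    return np.linalg.norm(needle_tip_mm - target_mm, axis=1)

# Example: three hypothetical trials (coordinates in mm).
tips = np.array([[10.0, 22.0, 31.0], [12.5, 20.0, 29.0], [11.0, 21.5, 30.0]])
targets = np.array([[11.0, 21.0, 30.0], [11.0, 21.0, 30.0], [11.0, 21.0, 30.0]])
tle = target_localization_error(tips, targets)
print(f"mean TLE: {tle.mean():.1f} mm")
```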
PURPOSE Sustained delivery of regenerative retinal therapies by robotic systems requires intra-operative tracking of the retinal fundus. We propose a supervised deep convolutional neural network to densely predict semantic segmentation and optical flow of the retina as mutually supportive tasks, implicitly inpainting retinal flow information missing due to occlusion by surgical tools. METHODS As manual annotation of optical flow is infeasible, we propose a flexible algorithm for the generation of large synthetic training datasets on the basis of given intra-operative retinal images. We evaluate optical flow estimation by tracking a grid and sparsely annotated ground truth points on a benchmark of challenging real intra-operative clips obtained from an extensive internally acquired dataset encompassing representative vitreoretinal surgical cases. RESULTS The U-Net-based network trained on the synthetic dataset is shown to generalise well to the benchmark of real surgical videos. When used to track retinal points of interest, our flow estimation outperforms variational baseline methods on clips containing tool motions which occlude the points of interest, as is routinely observed in intra-operatively recorded surgery videos. CONCLUSIONS The results indicate that complex synthetic training datasets can be used to specifically guide optical flow estimation. Our proposed algorithm therefore lays the foundation for a robust system which can assist with intra-operative tracking of moving surgical targets even when occluded.

PURPOSE Basal cell carcinoma (BCC) is the most commonly diagnosed cancer, and the number of diagnoses is growing worldwide due to increased exposure to solar radiation and the aging population. Reducing positive margin rates when removing BCC leads to fewer revision surgeries and, consequently, lower health care costs, improved cosmetic outcomes and better patient care. In this study, we propose the first use of a perioperative mass spectrometry technology (iKnife) along with a deep learning framework for the detection of BCC signatures from tissue burns. METHODS Resected surgical specimens were collected and inspected by a pathologist. With their guidance, data were collected with the iKnife by burning regions of the specimens labeled as BCC or normal. The data included 190 scans, of which 127 were normal and 63 were BCC. A data augmentation approach was proposed that modifies the location and intensity of the peaks of the original spectra through noise addition in the time and frequency domains. A symmetric autoencoder was built by simultaneously optimizing the spectral reconstruction error and the classification accuracy. The latent space was visualized using t-SNE. RESULTS The autoencoder achieved an accuracy of 96.62% (standard deviation 1.35%) when classifying BCC and normal scans, a statistically significant improvement over the baseline state-of-the-art approach used in the literature. The t-SNE plot of the latent space distinctly showed the separability between BCC and normal data, which was not visible with the original data. Augmented data resulted in significant improvements to the classification accuracy of the baseline model. CONCLUSION We demonstrate the utility of a deep learning framework applied to mass spectrometry data for surgical margin detection. We apply the proposed framework to an application with light surgical overhead and high incidence, the removal of BCC. The learnt models can accurately separate BCC from normal tissue.
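For the retinal tracking study, one plausible way to use a dense optical flow prediction to track sparse points of interest is to advect each point by the flow sampled at its location via bilinear interpolation. The sketch below assumes a per-pixel (dx, dy) flow field and is not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code): propagate sparse retinal
# points of interest with a dense optical flow field by bilinearly sampling
# the flow at each point's sub-pixel location and adding it to the point.
import numpy as np

def track_points(points_xy: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """points_xy: (N, 2) pixel coordinates; flow: (H, W, 2) per-pixel (dx, dy)."""
    h, w, _ = flow.shape
    x = np.clip(points_xy[:, 0], 0, w - 1.001)
    y = np.clip(points_xy[:, 1], 0, h - 1.001)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = x - x0, y - y0
    # Bilinear interpolation of the flow at sub-pixel point locations.
    f = (flow[y0, x0] * ((1 - fx) * (1 - fy))[:, None]
         + flow[y0, x0 + 1] * (fx * (1 - fy))[:, None]
         + flow[y0 + 1, x0] * ((1 - fx) * fy)[:, None]
         + flow[y0 + 1, x0 + 1] * (fx * fy)[:, None])
    return points_xy + f

# Example: propagate two points through a synthetic uniform flow of (2, -1) px.
flow = np.tile(np.array([2.0, -1.0]), (480, 640, 1))
print(track_points(np.array([[100.0, 200.0], [320.5, 240.25]]), flow))
```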
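For the iKnife study, the jointly optimized symmetric autoencoder can be sketched as an encoder-decoder with a small classifier on the latent code, trained with a weighted sum of reconstruction and cross-entropy losses. Spectrum length, layer sizes, and the loss weight below are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch: a symmetric autoencoder over mass-spectrometry scans whose
# latent code also feeds a classifier, so spectral reconstruction and
# BCC-vs-normal classification are optimized jointly (sizes are assumptions).
import torch
import torch.nn as nn

class SpectralAutoencoder(nn.Module):
    def __init__(self, n_bins: int = 1000, latent: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bins, 256), nn.ReLU(),
                                     nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_bins))
        self.classifier = nn.Linear(latent, 2)  # BCC vs. normal logits

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z), z

model = SpectralAutoencoder()
recon_loss, cls_loss = nn.MSELoss(), nn.CrossEntropyLoss()
x = torch.randn(16, 1000)                      # a batch of spectra
y = torch.randint(0, 2, (16,))                 # 0 = normal, 1 = BCC
x_hat, logits, z = model(x)
loss = recon_loss(x_hat, x) + 1.0 * cls_loss(logits, y)  # joint objective
loss.backward()
```

The latent vectors z from such a model are what a t-SNE projection would visualize to inspect the separability of BCC and normal scans.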
