Catesfoley4392

From Iurium Wiki

Qualitative evaluation of our segmentation results shows average success scores of 3.80/5 and 4.54/5 for visceral and subcutaneous fat segmentation in MR images, respectively.

Segmentation is a prerequisite yet challenging task for medical image analysis. In this paper, we introduce a novel deeply supervised active learning approach for finger bone segmentation. The proposed architecture is fine-tuned in an iterative and incremental learning manner. In each step, the deep supervision mechanism guides the learning process of the hidden layers and selects the samples to be labeled (see the first sketch below). Extensive experiments demonstrate that our method achieves competitive segmentation results with fewer labeled samples than full annotation requires. Clinical relevance: the proposed method needs only a few annotated samples on the finger bone task to achieve results comparable with full annotation, so it can be used to segment finger bones in medical practice and be generalized to other clinical applications.

Semantic segmentation is a fundamental and challenging problem in medical image analysis, and deep convolutional neural networks currently play the dominant role in medical image segmentation. Existing methods make limited use of image information and learn few edge features, which can lead to ambiguous boundaries and inhomogeneous intensity distributions in the results. Because the characteristics of features from different stages are highly inconsistent, the two cannot be combined directly. In this paper, we propose the Attention and Edge Constraint Network (AEC-Net), which optimizes features by introducing attention mechanisms into the lower-level features so that they combine better with higher-level features (see the second sketch below). Meanwhile, an edge branch is added to the network, allowing it to learn edge and texture features simultaneously. We evaluated this model on three datasets covering skin cancer segmentation, vessel segmentation, and lung segmentation. Results demonstrate that the proposed model achieves state-of-the-art performance on all three datasets.

Convolutional neural networks (CNNs) have been widely used in medical image segmentation, yet vessel segmentation in coronary angiography remains a challenging task. Extracting fine features of the coronary artery is difficult because of poor opacification, numerous overlaps between different artery segments, and the high similarity between artery segments and soft tissue in an angiography image, all of which result in sub-optimal segmentation performance. In this paper, we propose an adapted generative adversarial network (GAN) that converts a coronary angiography image into a semantic segmentation image. We implement an adapted U-Net as the generator and a novel 3-layer pyramid structure as the discriminator (see the third sketch below). During training, multi-scale inputs are fed into the discriminator to optimize the objective functions, producing high-definition segmentation results. Owing to the generative adversarial mechanism, both the generator and the discriminator can extract fine features of the coronary artery. Our method effectively addresses segmentation discontinuity and intra-class inconsistency. Experiments show that our method improves segmentation accuracy compared with other vessel segmentation methods.
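First sketch: a minimal illustration of how deep supervision and active sample selection can work together, assuming PyTorch. The toy network, the entropy-based selection rule, and the names `DeepSupSegNet` and `select_samples` are illustrative assumptions; the paper's actual architecture and selection criterion may differ.

```python
# Minimal sketch (not the authors' code): deep supervision with
# entropy-based active sample selection, in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepSupSegNet(nn.Module):
    """Tiny encoder-decoder; an auxiliary head on a hidden layer
    provides the deep-supervision signal."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head_main = nn.Conv2d(16, n_classes, 1)
        self.head_aux = nn.Conv2d(32, n_classes, 1)   # deep-supervision head

    def forward(self, x):
        h1 = self.enc1(x)
        h2 = self.enc2(h1)
        d = F.interpolate(self.dec(h2), scale_factor=2)
        main = self.head_main(d)
        aux = F.interpolate(self.head_aux(h2), size=main.shape[-2:])
        return main, aux

def select_samples(model, pool, k=8):
    """Pick the k most uncertain unlabeled images for annotation,
    scoring each by mean pixel entropy averaged over both heads.
    `pool` is assumed to be a list of (1, H, W) image tensors."""
    model.eval()
    scores = []
    with torch.no_grad():
        for i, x in enumerate(pool):
            outs = model(x.unsqueeze(0))
            ent = 0.0
            for o in outs:
                p = F.softmax(o, dim=1)
                ent += -(p * torch.log(p + 1e-8)).sum(1).mean()
            scores.append((ent.item() / len(outs), i))
    return [i for _, i in sorted(scores, reverse=True)[:k]]
```

In this setup the deep-supervision term is simply an auxiliary loss on the hidden-layer head, e.g. `loss = ce(main, y) + 0.4 * ce(aux, y)`, and the same auxiliary predictions feed the uncertainty score used to pick the next samples to annotate.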
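Second sketch: the two ingredients the AEC-Net abstract names, attention applied to lower-level features before fusion with higher-level ones and an auxiliary edge branch, again in PyTorch. The sigmoid gate and the module names `AttentionFusion` and `EdgeBranch` are assumptions for illustration, not the authors' published design.

```python
# Minimal sketch of the idea behind AEC-Net (assumed details, PyTorch):
# attention on low-level features driven by high-level context,
# plus an auxiliary edge-prediction branch.
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Gate low-level features with a mask computed from high-level ones,
    so the two stages can be combined despite inconsistent statistics."""
    def __init__(self, c_low, c_high):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(c_high, c_low, 1), nn.Sigmoid())

    def forward(self, low, high):
        high_up = F.interpolate(high, size=low.shape[-2:],
                                mode="bilinear", align_corners=False)
        return low * self.gate(high_up)   # re-weighted low-level map

class EdgeBranch(nn.Module):
    """Auxiliary head predicting object boundaries; its loss constrains
    the shared features to retain edge and texture detail."""
    def __init__(self, c_in):
        super().__init__()
        self.head = nn.Conv2d(c_in, 1, 1)

    def forward(self, feat):
        return self.head(feat)            # edge logits

# Training would combine both objectives, e.g.
# loss = seg_loss + lambda_edge * bce(edge_logits, edge_targets)
```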
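Third sketch: a 3-level pyramid discriminator that scores (angiogram, mask) pairs at multiple scales, assuming a pix2pix-style conditional GAN in PyTorch. The per-scale critic layout and the name `PyramidDiscriminator` are assumptions; the adapted U-Net generator is omitted for brevity.

```python
# Minimal sketch (assumed details): multi-scale inputs are fed to one
# small patch critic per pyramid level, and the scores are averaged.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in):
    return nn.Sequential(
        nn.Conv2d(c_in, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 4, padding=1))   # patch-wise real/fake logits

class PyramidDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # one critic per scale: full, 1/2, and 1/4 resolution
        self.critics = nn.ModuleList([conv_block(2) for _ in range(3)])

    def forward(self, image, mask):
        x = torch.cat([image, mask], dim=1)   # condition on the input image
        scores = []
        for i, critic in enumerate(self.critics):
            xi = F.avg_pool2d(x, 2 ** i) if i > 0 else x
            scores.append(critic(xi).mean())
        return torch.stack(scores).mean()     # average over the 3 scales
```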
Computed tomography (CT) and magnetic resonance imaging (MRI) scanners measure three-dimensional (3D) images of patients. However, only local two-dimensional (2D) images may be obtained during surgery or radiotherapy. Although computer vision techniques have shown that 3D shapes can be estimated from multiple 2D images, shape reconstruction from a single 2D image, such as an endoscopic image or an X-ray image, remains a challenge. In this study, we propose X-ray2Shape, a deep learning method that reconstructs a 3D organ mesh from a single 2D projection image. The method learns the mesh deformation from a mean template together with deep features computed from the individual projection images (see the first sketch after this section). Experiments with organ meshes and digitally reconstructed radiograph (DRR) images of abdominal regions confirm the estimation performance of the method.

Glaucoma is the second leading cause of blindness globally. Stereophotogrammetry-based optic nerve head (ONH) topographical imaging systems could allow objective glaucoma assessment in settings where technologies such as optical coherence tomography and the Heidelberg Retinal Tomograph are prohibitively expensive. In the development of such systems, eye phantoms are invaluable tools for both system calibration and performance evaluation. Eye phantoms developed for this purpose need to replicate the optical configuration of the eye, reproduce the related causes of measurement artefacts, and be able to present to the imaging system the targets required for calibration. The phantoms in the literature that show promise of meeting these requirements rely on custom-fabricated lenses, which makes them very costly. Here, we propose a low-cost eye phantom comprising a vacuum-formed cornea and a commercially available stock bi-convex lens; it is optically similar to a gold-standard reference wide-angle schematic eye model and meets all the compliance and configurability requirements for use with stereophotogrammetry-based ONH topographical imaging systems. Moreover, its modular design, fabricated largely from 3D-printed components, lends itself to modification for other applications. The use of the phantom is successfully demonstrated in an ONH imager.

In this study we develop a proof of concept for producing hyperspectral skin cancer imagery with generative adversarial networks. A generative adversarial network consists of two competing neural networks: the generator tries to produce data similar to the measured data, and the discriminator tries to correctly classify data as fake or real. The two models are trained adversarially, each improving from feedback on its performance against the other (see the second sketch after this section). The discriminator is trained on data measured from skin cancer patients. The aim of the study is to develop a generator for augmenting hyperspectral skin cancer imagery.
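First sketch: a minimal reading of the X-ray2Shape idea, assuming PyTorch. A CNN encodes a single DRR projection, and a small MLP regresses per-vertex offsets that deform the mean template mesh. The layer sizes and the class name `XRay2Shape` are assumptions; the abstract specifies only the template-plus-deep-features formulation.

```python
# Minimal sketch (assumed details): deform a mean template mesh using
# deep features computed from a single 2D projection image.
import torch
import torch.nn as nn

class XRay2Shape(nn.Module):
    def __init__(self, template_verts):              # (V, 3) mean mesh
        super().__init__()
        self.register_buffer("template", template_verts)
        v = template_verts.shape[0]
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())   # (B, 32) image feature
        self.deform = nn.Sequential(
            nn.Linear(32, 256), nn.ReLU(),
            nn.Linear(256, v * 3))                   # per-vertex offsets

    def forward(self, drr):                          # drr: (B, 1, H, W)
        feat = self.encoder(drr)
        offsets = self.deform(feat).view(-1, *self.template.shape)
        return self.template + offsets               # deformed mesh (B, V, 3)

# usage (hypothetical shapes):
# net = XRay2Shape(torch.zeros(1024, 3))
# mesh = net(torch.randn(2, 1, 128, 128))            # (2, 1024, 3)
```

Training would compare the deformed mesh against ground-truth vertices with, for example, a per-vertex L2 or Chamfer loss.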
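Second sketch: one adversarial training step for hyperspectral patch synthesis, assuming PyTorch, binary cross-entropy losses, and a discriminator that returns one logit per sample. `G`, `D`, and all tensor shapes are placeholders for illustration.

```python
# Minimal sketch (assumed shapes and names): one GAN training step.
# G maps noise to (bands, H, W) cubes; D outputs a (B, 1) real/fake logit;
# `real` holds hyperspectral patches measured from patient scans.
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_g, opt_d, real, z_dim=64):
    b = real.size(0)
    z = torch.randn(b, z_dim, device=real.device)
    ones = torch.ones(b, 1, device=real.device)
    zeros = torch.zeros(b, 1, device=real.device)

    # discriminator: classify measured patches as real, samples as fake
    fake = G(z).detach()
    loss_d = (F.binary_cross_entropy_with_logits(D(real), ones)
              + F.binary_cross_entropy_with_logits(D(fake), zeros))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # generator: try to fool the discriminator
    fake = G(z)
    loss_g = F.binary_cross_entropy_with_logits(D(fake), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```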

Article authors: Catesfoley4392 (Chaney Rivers)