Using patient AFB video, 99.5%/90.2% of test frames were correctly labeled as informative/uninformative by our method, versus 99.2%/47.6% by ResNet. In addition, ≥97% of lesion frames were correctly identified, with false positive and false negative rates ≤3%.

Clinical relevance - The method makes AFB-based bronchial lesion analysis more efficient, thereby helping to advance the goal of better early lung cancer detection.

The introduction of deep learning techniques into computer-aided detection schemes has opened a path toward real incorporation into the clinical workflow. In this work, we focus on the effect of attention in deep neural networks on the classification of tuberculosis x-ray images. We propose the Convolutional Block Attention Module (CBAM), a simple but effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, our module infers attention maps and multiplies them with the input feature map for adaptive feature refinement. It achieves high precision and recall while localizing objects with its attention. We validate our approach on a standard-compliant dataset of 4,990 chest x-ray radiographs from three hospitals and show that its performance is better than that of the models used in previous work.
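To make the "infer attention maps and multiply them with the input feature map" step concrete, here is a minimal PyTorch sketch of a CBAM-style block following the published design: channel attention computed from average- and max-pooled descriptors through a shared MLP, followed by spatial attention computed from channel-wise statistics. The reduction ratio of 16 and the 7x7 kernel are the usual defaults from the CBAM paper, not values reported in this abstract.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Shared MLP applied to both pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # squeeze by global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # squeeze by global max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale                     # refine features channel-wise

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max map
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale                     # refine features spatially

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, as in Woo et al. (2018)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```

Because the refined map keeps the same shape as its input, a block like this can be dropped after each convolutional block of a backbone without changing the surrounding architecture, which is how the published module is deployed.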
This paper proposes an automatic method for classifying aortic valvular stenosis (AS) from ECG (electrocardiogram) images with deep learning, where the training ECG images are annotated with diagnoses made by medical doctors observing echocardiograms. It also explores the relationship between the trained deep learning network and its determinations using Grad-CAM. In this study, one-beat ECG images for 12 leads and 4 leads are generated from ECGs and used to train CNNs (convolutional neural networks). By applying Grad-CAM to the trained CNNs, feature areas are detected in the early time range of the one-beat ECG image. Furthermore, by limiting the time range of the ECG image to that of the feature area, the CNN for the 4-lead images achieves the best classification performance, close to expert medical doctors' diagnoses.

Clinical relevance - This work achieves AS classification performance as high as medical doctors' echocardiogram-based diagnoses by proposing an automatic method for detecting AS using only the ECG.

Nowadays, cancer has become a major threat to people's lives and health. Convolutional neural networks (CNNs) have been used for early cancer identification but cannot achieve the desired results in some cases, such as images with affine transformations. Owing to their robustness to rotation and affine transformation, capsule networks can effectively address this limitation of CNNs and achieve the expected performance with less training data, which is very important for medical image analysis. In this paper, an enhanced capsule network is proposed for medical image classification. In the proposed capsule network, a feature decomposition module and a multi-scale feature extraction module are introduced into the basic capsule network. The feature decomposition module is presented to extract richer features, which reduces the amount of computation and speeds up network convergence. The multi-scale feature extraction module is used to extract important information in the low-level capsules, which guarantees that the extracted features are transmitted to the high-level capsules. The proposed capsule network was applied to the PatchCamelyon (PCam) dataset. Experimental results show that it obtains good performance on the medical image classification task, which provides good inspiration for other image classification tasks.

This paper proposes a new method for automatic detection of glaucoma from a stereo pair of fundus images. The basis for detecting glaucoma is the optic cup-to-disc area ratio, where the surface area of the optic cup is segmented from the disparity map estimated from the stereo fundus image pair. More specifically, we first estimate the disparity map from the stereo image pair. Then, the optic disc is segmented from one of the stereo images. Based on the location of the optic disc, we perform an active contour segmentation on the disparity map to segment the optic cup. Thereafter, we compute the optic cup-to-disc area ratio by dividing the area (i.e., the total number of pixels) of the segmented optic cup region by that of the segmented optic disc region. Our experimental results on the available test dataset show the efficacy of the proposed approach.

Semi-automatic measurements are performed on 18FDG PET-CT images to monitor the evolution of metastatic sites in the clinical follow-up of metastatic breast cancer patients. Apart from being time-consuming and prone to subjective approximation, semi-automatic tools cannot distinguish cancerous regions from active organs that present a high 18FDG uptake. In this work, we combine a deep learning-based approach with a superpixel segmentation method to segment the main active organs (brain, heart, bladder) from full-body PET images. In particular, we integrate the SLIC superpixel algorithm at different levels of a convolutional network. Results are compared with those of a deep learning segmentation network alone. The methods are cross-validated on full-body PET images of 36 patients and tested on the acquisitions of 24 patients from a different study center, in the context of the ongoing EPICUREseinmeta study. The similarity between the manually defined organ masks and the results is evaluated with the Dice score. Moreover, the number of false positives is evaluated through the positive predictive value (PPV). According to the computed Dice scores, all approaches segment the target organs accurately. However, according to the PPV, the networks integrating superpixels are better suited to transferring knowledge across datasets acquired at multiple sites (domain adaptation) and are less likely to segment structures outside the target organs. Hence, combining deep learning with superpixels allows organs presenting a high 18FDG uptake to be segmented on PET images without selecting cancerous lesions, and thus improves the precision of the semi-automatic tools monitoring the evolution of breast cancer metastases.

Clinical relevance - We demonstrate the utility of combining deep learning and superpixel segmentation methods to accurately find the contours of active organs in metastatic breast cancer images across different dataset distributions.
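The abstract does not detail how SLIC is injected "at different levels" of the network, so the sketch below only illustrates the ingredients it names: computing SLIC superpixels with scikit-image (≥ 0.19 for the channel_axis argument) and using them to regularize a network's per-pixel probabilities. Averaging probabilities within each superpixel, as done here, is a simple post-hoc variant for illustration, not the authors' in-network integration. The Dice and PPV metrics used for evaluation are also shown.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_smooth(prob_map, image, n_segments=400, compactness=10.0):
    """Average per-pixel probabilities within SLIC superpixels (post-hoc variant)."""
    labels = slic(image, n_segments=n_segments, compactness=compactness,
                  channel_axis=None)                 # channel_axis=None: grayscale input
    smoothed = np.empty_like(prob_map)
    for lab in np.unique(labels):
        region = labels == lab
        smoothed[region] = prob_map[region].mean()   # one probability per superpixel
    return smoothed

def dice_score(pred, truth):
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def ppv(pred, truth):
    """Positive predictive value: fraction of predicted voxels that are correct."""
    tp = np.logical_and(pred, truth).sum()
    return tp / pred.sum()
```

Thresholding the smoothed map yields organ masks whose quality can then be reported with the two metrics above, as in the cross-validation described in the abstract.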
18FDG PET/CT imaging is commonly used in the diagnosis and follow-up of metastatic breast cancer, but its quantitative analysis is complicated by the number and heterogeneous locations of metastatic lesions. Considering that bones are the most common metastatic site, this work compares different approaches to segmenting the bones and bone metastatic lesions in breast cancer. Two deep learning methods based on U-Net were developed and trained to segment either both bones and bone lesions, or bone lesions alone, on PET/CT images. These methods were cross-validated on 24 patients from the prospective EPICUREseinmeta metastatic breast cancer study and were evaluated using recall and precision to measure lesion detection, as well as the Dice score to assess the accuracy of bone and bone lesion segmentation. Results show that taking bone information into account in the training process improves the precision of lesion detection as well as the Dice score of the segmented lesions. Moreover, using the obtained bone and bone lesion masks, we were able to compute a PET bone index (PBI) inspired by the recognized Bone Scan Index (BSI). This automatically computed PBI globally agrees with the one calculated from ground-truth delineations.

Clinical relevance - We propose a completely automatic deep learning-based method to detect and segment bones and bone lesions on 18FDG PET/CT in the context of metastatic breast cancer. We also introduce an automatic PET bone index that could be incorporated into the monitoring and decision process.

Raynaud's phenomenon (RP) is a disease characterized by a transient ischemic process, an exaggerated vascular response to cold or emotional stress. Thermography is a resource applied to support the diagnosis of changes in the circulatory system. The aim of this study was to use the distal-dorsal difference (DDD) in thermographic images to assess thermal behavior in individuals with secondary RP. The research was carried out between 2018 and 2019. The sample consisted of 44 individuals in a control group (Control) and 44 individuals in a pathological group (RP2). After acclimatization, the participants were submitted to a cold stress protocol, which consisted of immersing the hands in a container of water at 15°C for 60 seconds. Thermographic images were acquired at the pre-test moment and at the 1st, 3rd, 5th, 7th, 10th and 15th minutes. At each time point, the DDD values (of all fingers: minimum, maximum and sum) were compared between the groups. For statistical analysis, the independent t-test and Cohen's d were used. The results showed a difference in the rate of temperature recovery between the groups: the control group began rewarming just after the first minute following the cold stress test, while the RP2 group was unable to recover its temperature over the 15 minutes. The DDD, regardless of the selected criterion, proved to be a valid index for verifying the temperature gradient in individuals with secondary RP.
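To make the statistical comparison concrete, here is a small Python sketch of the quantities the study compares, assuming the standard thermographic definition of the DDD (fingertip temperature minus the temperature of the dorsum of the hand); the sample values below are hypothetical illustrations, not data from the study.

```python
import numpy as np
from scipy import stats

def ddd(fingertip_temps, dorsum_temp):
    """Distal-dorsal difference in deg C: fingertip minus hand-dorsum temperature."""
    return np.asarray(fingertip_temps, dtype=float) - dorsum_temp

def cohens_d(a, b):
    """Cohen's d effect size using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                     / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

# Hypothetical minimum-DDD values (deg C) per participant at one recovery time point.
control = np.array([-0.4, 0.3, -0.1, 0.8, 0.2])
rp2 = np.array([-2.1, -1.5, -2.8, -1.9, -2.4])

t_stat, p_value = stats.ttest_ind(control, rp2)   # independent two-sample t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d(control, rp2):.2f}")
```

A persistently negative DDD after cold stress, as in the hypothetical RP2 values above, reflects fingertips that stay colder than the dorsum, which is the pattern the study reports for the pathological group.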
Developing a fast and accurate classifier is an important part of a computer-aided diagnosis system for skin cancer. Melanoma is the most dangerous form of skin cancer and has a high mortality rate. Early detection and prognosis of melanoma can improve survival rates. In this paper, we propose a deep convolutional neural network for automated melanoma detection that is scalable to accommodate a variety of hardware and software constraints. Dermoscopic skin images collected from open sources were used to train the network. The trained network was then tested on a dataset of 2,150 malignant or benign images. Overall, the classifier achieved high average accuracy, sensitivity, and specificity of 82.95%, 82.99%, and 83.89%, respectively. It outperformed other existing networks on the same dataset.

Multiparametric magnetic resonance (mpMR) images are increasingly being used for the diagnosis and monitoring of prostate cancer. Detecting malignancy in prostate mpMR images requires expertise, is time-consuming, and is prone to human error. Recent developments of U-Net have demonstrated promising detection results in many medical applications. However, straightforward use of U-Net tends to result in over-detection in mpMR images. The recently developed attention mechanism can help retain only the features relevant for malignancy detection, thus improving detection accuracy. In this work, we propose a U-Net architecture enhanced by an attention mechanism to detect malignancy in prostate mpMR images. This approach resulted in improved performance, with a higher Dice score and reduced over-detection compared to U-Net.
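The abstract does not specify the exact gating design, so as a point of reference the sketch below shows the widely used additive attention gate of Attention U-Net (Oktay et al., 2018), one natural way to realize an attention-enhanced U-Net: coarse decoder features gate the encoder skip connection so that only malignancy-relevant features are passed on. For simplicity, the gating and skip tensors are assumed here to share the same spatial size.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the style of Attention U-Net (Oktay et al., 2018)."""
    def __init__(self, gate_channels, skip_channels, inter_channels):
        super().__init__()
        self.wg = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.wx = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, gate, skip):
        # gate: coarse decoder features; skip: encoder features at the same scale.
        attn = self.relu(self.wg(gate) + self.wx(skip))
        attn = torch.sigmoid(self.psi(attn))   # per-pixel relevance in [0, 1]
        return skip * attn                     # suppress irrelevant skip features
```

In a full U-Net, the gated skip features are then concatenated with the upsampled decoder features at the point where the plain skip connection would otherwise enter, which is how such gates reduce the over-detection the abstract describes.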