This automatically computed PBI globally agrees with the one calculated from ground truth delineations. Clinical relevance: we propose a completely automatic deep learning based method to detect and segment bones and bone lesions on 18FDG PET/CT in the context of metastatic breast cancer. We also introduce an automatic PET bone index which could be incorporated into the monitoring and decision process.

Raynaud's phenomenon (RP) is a disease characterized by a transient ischemic process, consisting of an exaggerated vascular response to cold or emotional stress. Thermography is a resource used to support the diagnosis of changes in the circulatory system. The aim of this study was to use the Distal-Dorsal Difference (DDD) in thermographic images to assess thermal behavior in individuals with secondary RP. The research was carried out between 2018 and 2019. The sample consisted of 44 individuals in a control group (Control) and 44 individuals in a pathological group (RP2). After acclimatization, the participants were submitted to a cold stress protocol, which consisted of immersing the hands in a container of water at 15°C for 60 seconds. Thermographic images were acquired at the pre-test moment and at the 1st, 3rd, 5th, 7th, 10th and 15th minute. At each time point, the DDD values of all fingers (minimum, maximum and sum) were compared between the groups. For statistical analysis, the independent t-test and Cohen's d were used (see the DDD sketch below). Regarding the results, the groups differed in the rate of temperature recovery: the Control group began rewarming just after the first minute following the cold stress test, while the RP2 group was unable to recover its temperature over the 15 minutes. The DDD, regardless of the selected criterion, proved to be a valid index for verifying the temperature gradient in individuals with secondary RP.

Developing a fast and accurate classifier is an important part of a computer-aided diagnosis system for skin cancer. Melanoma is the most dangerous form of skin cancer and has a high mortality rate; early detection and prognosis of melanoma can improve survival rates. In this paper, we propose a deep convolutional neural network for automated melanoma detection that is scalable to accommodate a variety of hardware and software constraints. Dermoscopic skin images collected from open sources were used to train the network, which was then tested on a dataset of 2150 malignant or benign images. Overall, the classifier achieved high average values for accuracy, sensitivity, and specificity of 82.95%, 82.99%, and 83.89%, respectively (these metrics are defined in the sketch below), and it outperformed other existing networks on the same dataset.

Multiparametric magnetic resonance (mpMR) images are increasingly being used for the diagnosis and monitoring of prostate cancer. Detecting malignancy in prostate mpMR images requires expertise, is time-consuming, and is prone to human error. Recent developments of U-net have demonstrated promising detection results in many medical applications, but straightforward use of U-net tends to result in over-detection in mpMR images. The recently developed attention mechanism can help retain only the features relevant for malignancy detection, thus improving detection accuracy. In this work, we propose a U-net architecture enhanced by the attention mechanism to detect malignancy in prostate mpMR images. This approach resulted in improved performance, with a higher Dice score and reduced over-detection compared with U-net.
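The attention-enhanced U-net just described typically places attention gates on the skip connections between encoder and decoder. The following is a minimal sketch of such a gate, assuming a PyTorch-style implementation; the class name, channel arguments, and layer choices are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of an additive attention gate on a U-net skip connection.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, gate_channels, skip_channels, inter_channels):
        super().__init__()
        # Project the decoder (gating) signal and the encoder skip features
        # into a common intermediate channel dimension.
        self.w_gate = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.w_skip = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.psi = nn.Sequential(
            nn.Conv2d(inter_channels, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel attention coefficients in [0, 1]
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, gate, skip):
        # gate and skip are assumed to share the same spatial size here;
        # real implementations usually resample one of them first.
        attn = self.psi(self.relu(self.w_gate(gate) + self.w_skip(skip)))
        return skip * attn  # down-weight features irrelevant to malignancy
```

In a full network, the gated skip features would be concatenated with the upsampled decoder features, which is one way such an architecture can reduce the over-detection mentioned above.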
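Returning to the thermography study above, the sketch below shows how a per-finger Distal-Dorsal Difference and the reported statistics (independent t-test, Cohen's d) could be computed. The function names, the use of NumPy/SciPy, and the input layout are assumptions for illustration, not the study's actual pipeline.

```python
# Hypothetical sketch: Distal-Dorsal Difference (DDD) per finger and group statistics.
import numpy as np
from scipy import stats

def ddd_summary(distal_temps, dorsal_temps):
    """DDD per finger = distal (fingertip) minus dorsal reference temperature (°C)."""
    ddd = np.asarray(distal_temps, dtype=float) - np.asarray(dorsal_temps, dtype=float)
    # The study summarizes all fingers by minimum, maximum and sum.
    return {"min": ddd.min(), "max": ddd.max(), "sum": ddd.sum()}

def compare_groups(control_values, rp2_values):
    """Independent t-test and Cohen's d between the Control and RP2 groups."""
    control = np.asarray(control_values, dtype=float)
    rp2 = np.asarray(rp2_values, dtype=float)
    t_stat, p_value = stats.ttest_ind(control, rp2)
    # Cohen's d using the pooled standard deviation.
    n1, n2 = len(control), len(rp2)
    pooled_sd = np.sqrt(((n1 - 1) * control.var(ddof=1) +
                         (n2 - 1) * rp2.var(ddof=1)) / (n1 + n2 - 2))
    cohens_d = (control.mean() - rp2.mean()) / pooled_sd
    return t_stat, p_value, cohens_d
```

Applied at each acquisition time point (pre-test, 1st, 3rd, 5th, 7th, 10th and 15th minute), these group comparisons would trace the recovery curves contrasted in the results.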
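Similarly, the accuracy, sensitivity, and specificity reported for the melanoma classifier above are standard confusion-matrix quantities; the short sketch below defines them with deliberately generic counts that are not the paper's data.

```python
# Hypothetical sketch: binary classification metrics from confusion-matrix counts.
def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # fraction of correct predictions
    sensitivity = tp / (tp + fn)                 # true positive rate (malignant found)
    specificity = tn / (tn + fp)                 # true negative rate (benign found)
    return accuracy, sensitivity, specificity

# Illustrative counts only:
print(classification_metrics(tp=80, tn=85, fp=15, fn=20))
```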
Brain insults such as cerebral ischemia and intracranial hemorrhage are critical stroke conditions with high mortality rates. Currently, medical image analysis for critical stroke conditions is still largely done manually, which is time-consuming and labor-intensive. While deep learning algorithms are increasingly being applied to medical image analysis, their performance still needs substantial improvement before they can be widely used in the clinical setting. Among other challenges, the lack of sufficient labelled data is one of the key problems limiting the progress of deep learning methods in this domain. To mitigate this bottleneck, we propose an integrated method that combines a data augmentation framework using a conditional Generative Adversarial Network (cGAN) with supervised segmentation by a Convolutional Neural Network (CNN). The cGAN generates meaningful brain images from specially altered lesion masks as a form of data augmentation to supplement the training dataset, while the CNN incorporates depth-wise-convolution based X-blocks as well as a Feature Similarity Module (FSM) to ease and aid the training process, resulting in better lesion segmentation (a sketch of the underlying depth-wise convolution block appears at the end of this section). We evaluate the proposed deep learning strategy on the Anatomical Tracings of Lesions After Stroke (ATLAS) dataset and show that this approach outperforms the current state-of-the-art methods in the task of stroke lesion segmentation.

The patient-clinician relationship is known to significantly affect the pain experience, as empathy, mutual trust and therapeutic alliance can significantly modulate pain perception and influence clinical therapy outcomes. The aim of the present study was to use an EEG hyperscanning setup to identify brain and behavioral mechanisms supporting the patient-clinician relationship while this clinical dyad is engaged in a therapeutic interaction. Our previous study applied fMRI hyperscanning to investigate whether brain concordance is linked with the analgesia experienced by a patient while undergoing treatment by the clinician. In the current hyperscanning project we investigated similar outcomes for the patient-clinician dyad, exploiting the high temporal resolution of EEG and the possibility of acquiring the signals while patients and clinicians were present in the same room and engaged in a face-to-face interaction under an experimentally controlled therapeutic context. Advanced source localization methods allowed the integration of spatial and spectral information in order to assess brain correlates of therapeutic alliance and pain perception in different clinical interaction contexts.
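As a final note on the stroke segmentation network described earlier, its depth-wise-convolution based X-blocks build on depth-wise separable convolutions. The sketch below shows that generic building block, assuming a PyTorch-style implementation; it is not the authors' X-block or Feature Similarity Module.

```python
# Hypothetical sketch of a depth-wise separable convolution block (the generic
# building block behind depth-wise-convolution based segmentation networks).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # Depth-wise convolution: one filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   padding=kernel_size // 2, groups=in_channels)
        # Point-wise (1x1) convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.norm = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x))))
```

Compared with a standard convolution, this factorization substantially reduces parameters and computation, which is one reason such blocks are attractive when labelled training data are scarce.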