The results demonstrate that the proposed approach improves performance on both DR severity diagnosis and DR-related feature detection compared with traditional deep learning-based methods. It achieves performance close to that of general ophthalmologists with five years of experience when diagnosing DR severity levels, and close to that of general ophthalmologists with ten years of experience for referable DR detection.

The emergence of novel COVID-19 is overloading the public health sector and causing a high fatality rate. The key priority is to contain the epidemic and reduce the infection rate, and it is imperative to enforce strict social distancing across the entire population to slow the epidemic's spread. There is therefore a need for an efficient optimizer algorithm that can solve NP-hard as well as applied optimization problems. This article first proposes a novel COVID-19 Optimizer Algorithm (CVA) designed to cover almost all feasible regions of an optimization problem. We also simulate the coronavirus distribution process in several countries around the globe. We then model the coronavirus distribution process as an optimization problem whose objective is to minimize the number of COVID-19-infected countries and hence slow the epidemic's spread. Furthermore, we propose three scenarios for solving the optimization problem using the most effective factors in the distribution process. Simulation results show that one of the controlling scenarios outperforms the others. Extensive simulations using several optimization schemes show that the CVA technique performs best, with improvements of up to 15%, 37%, 53% and 59% over the Volcano Eruption Algorithm (VEA), Grey Wolf Optimizer (GWO), Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), respectively.

Fast and accurate diagnosis is essential for efficient and effective control of the COVID-19 pandemic that is currently disrupting the whole world. Despite the prevalence of the COVID-19 outbreak, relatively few diagnostic images are openly available for developing automatic diagnosis algorithms, and traditional deep learning methods often struggle when data are highly unbalanced, with many cases in one class and only a few in another; new methods must be developed to overcome this challenge. We propose a novel activation function based on the generalized extreme value (GEV) distribution from extreme value theory, which improves performance over the traditional sigmoid activation function when one class significantly outweighs the other. We demonstrate the proposed activation function on a publicly available dataset and validate it externally on a dataset consisting of 1,909 healthy chest X-rays and 84 COVID-19 X-rays. The proposed method achieves an improved area under the receiver operating characteristic curve (DeLong's p-value less than 0.05) compared with the sigmoid activation. Our method is also demonstrated on a dataset of healthy and pneumonia vs. COVID-19 X-rays and on a set of computed tomography images, achieving improved sensitivity. The proposed GEV activation function significantly improves upon the previously used sigmoid activation for binary classification and is expected to play a significant role in the fight against COVID-19 and other diseases for which relatively few training cases are available.
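For illustration, the sketch below implements a GEV-CDF output activation as a drop-in replacement for sigmoid in a binary classifier. This is a minimal PyTorch sketch, not the paper's exact implementation: treating the location, scale and shape parameters (mu, sigma, xi) as jointly learned, and the numerical clamping, are illustrative assumptions on our part.

```python
import torch
import torch.nn as nn

class GEVActivation(nn.Module):
    """GEV-CDF output activation: a drop-in replacement for sigmoid in
    binary classification. The learnable shape parameter xi makes the
    saturation asymmetric, which can help under heavy class imbalance."""

    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(1))         # location
        self.log_sigma = nn.Parameter(torch.zeros(1))  # log-scale, keeps sigma > 0
        self.xi = nn.Parameter(torch.full((1,), 0.1))  # shape; 0.1 avoids the xi = 0 singularity

    def forward(self, x):
        sigma = torch.exp(self.log_sigma)
        z = (x - self.mu) / sigma
        # GEV CDF: F(z) = exp(-(1 + xi*z)^(-1/xi)); the support constraint
        # 1 + xi*z > 0 is enforced with a clamp for numerical stability.
        t = torch.clamp(1.0 + self.xi * z, min=1e-6) ** (-1.0 / self.xi)
        # Clamp the output away from {0, 1} so binary cross-entropy stays finite.
        return torch.exp(-t).clamp(1e-7, 1.0 - 1e-7)
```

In use, this module would follow the network's final linear layer in place of torch.sigmoid, trained with an ordinary binary cross-entropy loss.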
A detector based only on RR intervals (RRIs) that is capable of classifying other tachyarrhythmias in addition to atrial fibrillation (AF) could improve cardiac monitoring. In this paper, a new classification method based on a 2D non-linear RRI dynamics representation is presented. To this end, the concepts of Poincaré images and Poincaré atlases are introduced. Three cardiac rhythms were targeted: normal sinus rhythm (NSR), AF and atrial bigeminy (AB). Three PhysioNet open-source databases were used. Poincaré images were generated for all signals using different Poincaré plot configurations: RR, dRR and RRdRR. The analysis was repeated for different time-window lengths and bin sizes. For each rhythm, 80% of the Poincaré images were used to create a reference rhythm image, a Poincaré atlas. The remaining 20% of patients were classified into one of the three rhythms using normalized mutual information and 2D correlation. The process was iterated in a tenfold cross-validation with patient-wise dataset division. For the RRdRR configuration with a bin size of 40 ms and a 60 s time window, sensitivities of 94.35 ± 3.68%, 82.07 ± 9.18% and 88.86 ± 12.79% were obtained, with specificities of 85.52 ± 7.46%, 95.91 ± 3.14% and 96.10 ± 2.25%, for AF, NSR and AB, respectively. The results suggest that a rhythm's general RRI pattern may be captured using Poincaré atlases and that these can be used to classify other signal segments via their Poincaré images. In contrast with other studies, this method can be generalized to more cardiac rhythms and does not depend on rhythm-specific thresholds.
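To make the representation concrete, here is a minimal NumPy sketch of how a Poincaré image could be built from one window of RR intervals as a 2D histogram of successive pairs, and compared against a rhythm atlas with 2D correlation. The 40 ms bin size comes from the abstract; the RR-interval range, forming the atlas as a mean training image, and showing only the RR configuration (the dRR and RRdRR variants would histogram successive RR differences) are assumptions for illustration.

```python
import numpy as np

def poincare_image(rr_ms, bin_ms=40, rr_range=(200, 2000)):
    """Poincare image: 2-D histogram of successive RR-interval pairs
    (RR[n], RR[n+1]) from one time window, normalized to sum to 1."""
    edges = np.arange(rr_range[0], rr_range[1] + bin_ms, bin_ms)
    hist, _, _ = np.histogram2d(rr_ms[:-1], rr_ms[1:], bins=[edges, edges])
    total = hist.sum()
    return hist / total if total > 0 else hist

def correlation_to_atlas(image, atlas):
    """2-D correlation between a test Poincare image and a rhythm atlas
    (here assumed to be the mean training Poincare image per rhythm)."""
    return np.corrcoef(image.ravel(), atlas.ravel())[0, 1]

# Classification sketch: assign the window to the rhythm whose atlas is
# most similar, e.g.
#   rhythm = max(atlases, key=lambda r: correlation_to_atlas(img, atlases[r]))
```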
Machine learning and especially deep learning techniques are dominating medical image and data analysis. This article reviews machine learning approaches proposed for diagnosing ophthalmic diseases during the last four years. Three diseases are addressed in this survey: diabetic retinopathy, age-related macular degeneration, and glaucoma. The review covers over 60 publications and 25 public datasets and challenges related to the detection, grading, and lesion segmentation of the three considered diseases. Each section provides a summary of the public datasets and challenges related to each pathology and of the current methods that have been applied to the problem. Furthermore, recent machine learning approaches for retinal vessel segmentation and for retinal layer and fluid segmentation are reviewed. Two main imaging modalities are considered in this survey: color fundus imaging and optical coherence tomography. Machine learning approaches that use eye measurements and visual field data for glaucoma detection are also included. Finally, the authors provide their views on, expectations for, and the limitations of these techniques in future clinical practice.

Image classification using convolutional neural networks (CNNs) outperforms other state-of-the-art methods. Moreover, attention can be visualized as a heatmap to improve the explainability of a CNN's results. We designed a framework that can generate heatmaps reflecting lesion regions precisely. We first generate initial heatmaps using gradient-weighted class activation mapping (Grad-CAM). Assuming that these Grad-CAM heatmaps correctly reveal the lesion regions, we apply the attention mining technique to them to obtain integrated heatmaps. Conversely, assuming that the Grad-CAM heatmaps reveal the lesion regions incorrectly, we design a dissimilarity loss that increases the discrepancy between the newly generated heatmaps and the Grad-CAM heatmaps. In this study, we found that having professional ophthalmologists select the 30% of heatmaps that cover the lesion regions led to better results, because this step integrates prior clinical knowledge into the system. Furthermore, we design a knowledge preservation loss that minimizes the discrepancy between heatmaps generated by the updated CNN model and the selected heatmaps. Experiments on fundus images revealed that our method improved classification accuracy and generated attention regions closer to the ground-truth lesion regions compared with existing methods.
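The heatmap-generation step this framework builds on is standard Grad-CAM, sketched below in PyTorch. The hook-based implementation, choice of target layer, and min-max normalization are generic illustrative choices, not details taken from the paper; the attention-mining and loss terms described above would operate on heatmaps produced this way.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """Minimal Grad-CAM: weight the target layer's feature maps by the
    spatially averaged gradients of the class score, ReLU the weighted
    sum, and upsample to the input resolution."""
    store = {}
    fh = target_layer.register_forward_hook(
        lambda m, i, o: store.update(feat=o))
    bh = target_layer.register_full_backward_hook(
        lambda m, gin, gout: store.update(grad=gout[0]))
    # Assumes `model` returns (batch, num_classes) logits and `image` is CHW.
    score = model(image.unsqueeze(0))[0, class_idx]
    model.zero_grad()
    score.backward()
    fh.remove()
    bh.remove()
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
    cam = F.relu((weights * store["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = cam - cam.min()
    return (cam / cam.max().clamp(min=1e-8)).squeeze()  # normalized to [0, 1]
```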
Auditory localization of spatial sound sources is an important life skill for human beings. For practical, application-oriented measurement of auditory localization ability, the preferred setup is a compromise among (i) data accuracy, (ii) the maneuverability of collecting directions, and (iii) the cost of hardware and software. The graphical user interface (GUI)-based sound-localization experimental platform proposed here (i) is cheap, (ii) can be operated autonomously by the listener, (iii) can store results online, and (iv) supports real or virtual sound sources. To evaluate the accuracy of this method, using 12 loudspeakers arranged at equal azimuthal intervals of 30° in the horizontal plane, three groups of azimuthal localization experiments were conducted with normal-hearing subjects. In these experiments, the azimuths were reported using (i) an assistant, (ii) a motion tracker, or (iii) the newly designed GUI-based method. All three groups of results show that the localization errors are mostly within 5°–12°, which is consistent with previous results from different localization experiments. Finally, virtual sound sources were integrated into the GUI-based experimental platform as stimuli. The results with virtual sources suggest that using individualized head-related transfer functions achieves better performance in spatial sound-source localization, which is consistent with previous conclusions and further validates the reliability of this experimental platform.

Blood vessel segmentation in fundus images is a critical procedure in the diagnosis of ophthalmic diseases. Recent deep learning methods achieve high accuracy in vessel segmentation but still struggle to segment the microvasculature and to detect vessel boundaries. This is because common convolutional neural networks (CNNs) cannot preserve rich spatial information and a large receptive field simultaneously. Moreover, CNN models for vessel segmentation are usually trained with a cross-entropy loss that weights all pixels equally, which tends to miss fine vessel structures. In this paper, we propose a novel Context Spatial U-Net (CSU-Net) for blood vessel segmentation. In contrast to other U-Net-based models, we design a two-channel encoder: a context channel with multi-scale convolutions to enlarge the receptive field, and a spatial channel with large kernels to retain spatial information. To combine and strengthen the features extracted from the two paths, we introduce a feature fusion module (FFM) and an attention skip module (ASM). Furthermore, we propose a structure loss, which adds a spatial weight to the cross-entropy loss and guides the network to focus more on thin vessels and boundaries. We evaluated this model on three public datasets: DRIVE, CHASE-DB1 and STARE. The results show that CSU-Net achieves higher segmentation accuracy than current state-of-the-art methods.

Speech assessment is an important part of the rehabilitation process for patients with aphasia (PWA). Mandarin speech lucidity features such as articulation, fluency, and tone influence the meaning of a spoken utterance and its overall clarity, so automatic assessment of these features is important for efficient assessment of aphasic speech. Hence, in this paper, a standardized automatic speech lucidity assessment method for Mandarin-speaking aphasic patients, based on machine learning, is presented. The proposed assessment method adopts the Chinese Rehabilitation Research Center Aphasia Examination (CRRCAE) standard as a guideline. Quadrature-based high-resolution time-frequency images combined with a convolutional neural network (CNN) are used to map the relationship between the severity level of aphasic patients' speech and the three speech lucidity features. The results show a linear relationship, with statistically significant correlations between the normalized true-class output activations (TCOA) of the CNN model and patients' articulation, fluency, and tone scores, i.
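As an illustration of that last step, the following sketch correlates per-utterance TCOA values with clinician ratings. The data here are synthetic placeholders, since the paper's CRRCAE scores and model outputs are not reproduced in the abstract; only the statistical test itself is shown.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: synthetic stand-ins for clinician-rated articulation
# scores and the CNN's normalized true-class output activations (TCOA).
rng = np.random.default_rng(0)
articulation = rng.uniform(0.0, 10.0, size=50)              # illustrative CRRCAE-style ratings
tcoa = 0.08 * articulation + rng.normal(0.0, 0.1, size=50)  # illustrative activations

# A significant Pearson correlation would indicate the kind of linear
# relationship the abstract reports between TCOA and clinical scores.
r, p = pearsonr(tcoa, articulation)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```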