Symmetry can be defined as uniformity, equivalence, or exact similarity of two parts divided along an axis. While our left and right eyes clearly have a high degree of external bilateral symmetry, it is less obvious to what degree they have internal bilateral symmetry. In this paper, we look for approximate bilateral symmetry in the retina, an internal part of the eye that plays a vital role in vision and can also serve as a powerful biometric. In contrast to previous works, we study interretinal symmetry from a biometric perspective: we ask whether left-right retinal symmetry is strong enough to reliably tell whether a pair of left and right retinas belongs to a single person. To this end, we focus on the overall symmetry of the retinas rather than on specific attributes such as the length, area, thickness, or number of blood vessels. We evaluate and analyse the performance of both human and neural-network-based bilateral retina verification on fundus photographs. Through experiments on a publicly available data set, we confirm interretinal symmetry.

In this paper, we propose and validate a probability-distribution-guided network for segmenting the optic disc (OD) and optic cup (OC) from fundus images. Uncertainty is inevitable in deep learning, induced by different sensors, insufficient samples, and inaccurate labeling. Since the input data and the corresponding ground-truth labels may be inaccurate, they can be viewed as following some underlying distribution. In this study, a variational autoencoder (VAE) based network is proposed to estimate the joint distribution of the input image and the corresponding segmentation (both the ground-truth segmentation and the predicted segmentation), making the segmentation network learn not only pixel-wise information but also a semantic probability distribution.
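The VAE-regularized segmentation idea above is typically trained with an ELBO-style objective: a per-pixel segmentation loss plus a KL term that pulls the learned latent distribution toward a prior. The sketch below is a minimal, hypothetical illustration of that loss shape (function names, the `beta` weight, and the standard-normal prior are assumptions, not the authors' exact formulation):

```python
import math

def kl_standard_normal(mu, log_var):
    """KL divergence between a diagonal Gaussian N(mu, exp(log_var))
    and the standard normal N(0, I), summed over latent dimensions."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

def vae_seg_loss(pred, target, mu, log_var, beta=0.01):
    """Mean pixel-wise binary cross-entropy plus a beta-weighted KL term:
    the ELBO-style objective used when a VAE regularizes a segmenter.
    pred/target are flat lists of per-pixel probabilities and 0/1 labels."""
    eps = 1e-7
    bce = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)
        bce -= t * math.log(p) + (1.0 - t) * math.log(1.0 - p)
    bce /= len(pred)
    return bce + beta * kl_standard_normal(mu, log_var)
```

With the latent posterior exactly matching the prior (`mu = 0`, `log_var = 0`) the KL term vanishes and only the segmentation loss remains, which is the intuition behind using the KL as a distribution-matching regularizer.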
Moreover, we design a building block, the Dilated Inception Block (DIB), for better generalization of the model and more effective extraction of multi-scale features. The proposed method was compared to several existing state-of-the-art methods. Superior segmentation performance was observed on two datasets (ORIGA and REFUGE), with mean Dice overlap coefficients of 96.57% and 95.81% for OD and 88.46% and 88.91% for OC.

Local drug delivery to the inner ear via micropump implants has the potential to be much more effective than oral drug delivery for treating patients with sensorineural hearing loss and for protecting hearing from ototoxic insult due to noise exposure or cancer treatments. Designing micropumps to deliver appropriate concentrations of drugs to the necessary cochlear compartments is of paramount importance; however, directly measuring local drug concentrations over time throughout the cochlea is not possible. Recent approaches for indirectly quantifying local drug concentrations in animal models capture a series of magnetic resonance (MR) or micro computed tomography (µCT) images before and after infusion of a contrast agent into the cochlea. These approaches require accurately segmenting important cochlear components (scala tympani (ST), scala media (SM), and scala vestibuli (SV)) in each scan and ensuring that they are registered longitudinally across scans. In this paper, we focus on segmenting cochlear compartments from µCT volumes using V-Net, a convolutional neural network (CNN) architecture for 3-D segmentation. We show that modifying the V-Net architecture to decrease the number of encoder and decoder blocks and to use dilated convolutions enables extracting local estimates of drug concentration that are comparable to those extracted using atlas-based segmentation (3.37%, 4.81%, and 19.65% average relative error in ST, SM, and SV), but in a fraction of the time.
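The two evaluation metrics quoted above, Dice overlap and average relative error, have simple standard definitions. The following sketch shows both over flat binary masks and concentration estimates (the function names and input shapes are illustrative assumptions, not the papers' code):

```python
def dice_coefficient(pred_mask, true_mask):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks
    given as flat sequences of 0/1 labels."""
    intersection = sum(p * t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    if total == 0:          # both masks empty: treat as perfect overlap
        return 1.0
    return 2.0 * intersection / total

def avg_relative_error(estimates, references):
    """Average relative error (%) of estimated vs. reference values,
    e.g. segmentation-derived drug concentrations per compartment."""
    errs = [abs(e - r) / abs(r) for e, r in zip(estimates, references)]
    return 100.0 * sum(errs) / len(errs)
```

A Dice of 1.0 means the predicted and ground-truth masks coincide exactly; the reported 88-96% values indicate near-complete overlap for OD/OC, while the relative-error figures compare CNN-derived and atlas-derived concentration estimates.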
We also test the feasibility of training our network on a larger MRI dataset and then using transfer learning to perform segmentation on a smaller number of µCT volumes, which would enable this technique to be used in the future to characterize drug delivery in the cochlea of larger mammals.

Diabetic retinopathy (DR) is a medical condition due to diabetes mellitus that can damage the patient's retina and cause blood leaks. It can cause symptoms ranging from mild vision problems to complete blindness if not treated in time. In this work, we propose the use of a deep learning architecture based on a recent convolutional neural network called EfficientNet to detect referable diabetic retinopathy (RDR) and vision-threatening DR. Tests were conducted on two public datasets, EyePACS and APTOS 2019. The results achieve state-of-the-art performance and show that the proposed network leads to higher classification rates, reaching an Area Under the Curve (AUC) of 0.984 for RDR and 0.990 for vision-threatening DR on the EyePACS dataset. Similar performance is obtained on the APTOS 2019 dataset, with AUCs of 0.966 and 0.998 for referable and vision-threatening DR, respectively. An explainability algorithm was also developed and demonstrates the efficiency of the proposed approach in detecting DR signs.

Subretinal stimulators help restore vision to blind people suffering from degenerative eye diseases. This work aims to reduce the patient's effort in continuously tuning the device by implementing a physiological ambient-illumination adaptation system. The parameters of the adaptation to changing illumination conditions are highly customizable to best fit individual patients' requirements.

Detailed extraction of retinal vessel morphology is of great significance in many clinical applications.
In this paper, we propose a retinal image segmentation method, called MAU-Net, which is based on the U-Net structure and takes advantage of both modulated deformable convolution and dual attention modules to realize vessel segmentation. Specifically, based on the classic U-shaped architecture, our network introduces the Modulated Deformable Convolutional (MDC) block as the encoding and decoding unit to model vessels with various shapes and deformations. In addition, to obtain better feature representations, we aggregate the outputs of two attention modules: the position attention module (PAM) and the channel attention module (CAM). On three publicly available datasets (DRIVE, STARE, and CHASEDB1), we achieve superior performance to other algorithms. Quantitative and qualitative experimental results show that MAU-Net can effectively and accurately accomplish the retinal vessel segmentation task.
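Both PAM and CAM are built on the same dot-product self-attention operation; PAM attends across spatial positions while CAM attends across channels, so only the axis being attended over differs. The sketch below shows that shared core over a list of feature vectors; it is a simplification of the published modules, which additionally learn query/key/value projections:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(features):
    """Dot-product self-attention: each feature vector is replaced by a
    softmax-weighted sum of all vectors, where the weights come from its
    dot-product similarity with every other vector. Interpreting the list
    as spatial positions gives PAM-style attention; interpreting it as
    channel responses gives CAM-style attention."""
    out = []
    for q in features:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in features]
        weights = softmax(scores)
        out.append([sum(w * v[d] for w, v in zip(weights, features))
                    for d in range(len(q))])
    return out
```

When all feature vectors are identical, every attention weight is uniform and the output reproduces the input, which is a quick sanity check on the weighting.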