With its sequential image acquisition, OCT-based corneal topography is often susceptible to measurement errors caused by eye motion. We have developed a novel algorithm to detect eye motion and minimize its impact on OCT topography maps. We applied the eye motion correction algorithm to corneal topographic scans acquired with a 70 kHz spectral-domain OCT device, and the OCT corneal topographic measurements were compared with those from a rotating Scheimpflug camera topographer. The motion correction algorithm provided a two- to four-fold improvement in the repeatability of OCT topography and in its agreement with the standard Scheimpflug topographer. The repeatability of OCT Zernike-based corneal mean power, cardinal astigmatism, and oblique astigmatism after motion detection was 0.14 D, 0.28 D, and 0.24 D, respectively. The average differences between the two devices were 0.19 D for simulated keratometry-based corneal mean power, 0.23 D for cardinal astigmatism, and 0.25 D for oblique astigmatism (an illustrative power-vector calculation for these quantities is sketched below). Our eye motion detection method can be applied to any OCT device and therefore represents a powerful tool for improving OCT topography.

Optical coherence tomography angiography (OCTA) is becoming increasingly popular for neuroscientific studies, but it remains challenging to objectively quantify angioarchitectural properties from 3D OCTA images. This is mainly due to projection artifacts, or "tails", underneath vessels caused by multiple scattering, as well as the relatively low signal-to-noise ratio compared with fluorescence-based imaging modalities. Here, we propose a set of deep learning approaches based on convolutional neural networks (CNNs) for the automated enhancement, segmentation, and gap correction of OCTA images, particularly those obtained from the rodent cortex. Additionally, we present a strategy for skeletonizing the segmented OCTA volumes and extracting the underlying vascular graph, which enables the quantitative assessment of various angioarchitectural properties, including individual vessel lengths and tortuosity. These tools, including the trained CNNs, are made publicly available as a user-friendly toolbox with which researchers can input their OCTA images and receive the underlying vascular network graph along with the associated angioarchitectural properties.
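The skeletonization and graph-extraction step described in the OCTA paragraph above can be prototyped with standard open-source tools. The following is a minimal sketch under stated assumptions, not the authors' published toolbox: it presumes an already segmented binary OCTA volume, uses scikit-image for skeletonization and the skan library for branch summaries, and treats the voxel spacing, file names, and thresholds as placeholders.

```python
# Minimal sketch (not the published toolbox): skeletonize a segmented OCTA
# volume and tabulate per-branch vessel length and tortuosity.
import numpy as np
from skimage.morphology import skeletonize   # handles 3-D input in recent scikit-image
from skan import Skeleton, summarize         # skeleton-to-graph branch summaries

def vascular_graph_metrics(segmentation: np.ndarray, spacing=(3.0, 3.0, 3.0)):
    """segmentation: 3-D boolean vessel mask; spacing: voxel size in micrometres."""
    # Thin the binary vessel mask to a one-voxel-wide centerline.
    centerline = skeletonize(segmentation.astype(bool))

    # Convert the skeleton to a graph; `spacing` scales path lengths to micrometres.
    skel = Skeleton(centerline, spacing=spacing)
    branches = summarize(skel)  # column names may use '_' instead of '-' in newer skan

    # Tortuosity = path length along the vessel / straight-line endpoint distance.
    # Guard against zero Euclidean distance (closed loops).
    euclid = branches["euclidean-distance"].replace(0, np.nan)
    branches["tortuosity"] = branches["branch-distance"] / euclid
    return branches

# Hypothetical usage:
# seg = np.load("octa_segmentation.npy")            # placeholder file name
# table = vascular_graph_metrics(seg, spacing=(2.0, 2.0, 2.0))
# print(table[["branch-distance", "tortuosity"]].describe())
```

The resulting table gives per-branch lengths and tortuosity values similar in spirit to the angioarchitectural properties mentioned above, and can be fed into any downstream graph analysis.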
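For readers unfamiliar with the astigmatism terminology in the corneal-topography summary at the start of this section, mean power, cardinal astigmatism (J0), and oblique astigmatism (J45) are typically reported as power vectors derived from keratometry. The sketch below is a generic illustration of that decomposition under one common sign convention, not the specific computation used in that study; conventions (for example, whether the steep or flat meridian defines the axis) vary between papers.

```python
# Illustrative power-vector decomposition of keratometry readings.
# This is a generic convention, not necessarily the one used in the study above.
import math

def keratometry_to_power_vectors(k_flat, k_steep, steep_axis_deg):
    """Return (mean power M, cardinal astigmatism J0, oblique astigmatism J45),
    all in diopters, from flat/steep keratometry and the steep-meridian axis."""
    mean_power = (k_flat + k_steep) / 2.0        # simulated-K mean power
    cyl = k_steep - k_flat                        # astigmatism magnitude
    theta = math.radians(steep_axis_deg)
    j0 = (cyl / 2.0) * math.cos(2.0 * theta)      # with/against-the-rule component
    j45 = (cyl / 2.0) * math.sin(2.0 * theta)     # oblique component
    return mean_power, j0, j45

# Example with placeholder values: K = 43.00 / 44.50 D, steep meridian at 90 deg.
M, J0, J45 = keratometry_to_power_vectors(43.00, 44.50, 90.0)
print(f"M = {M:.2f} D, J0 = {J0:.2f} D, J45 = {J45:.2f} D")
# -> M = 43.75 D, J0 = -0.75 D, J45 = 0.00 D
#    (a with-the-rule cornea yields negative J0 under this convention;
#     some studies reference the flat meridian instead, flipping the sign)
```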
In circular-scan photoacoustic tomography (PAT), the axial resolution is spatially invariant and is limited by the bandwidth of the detector. The tangential resolution, however, is spatially variant and depends on the aperture size of the detector; in particular, it improves with decreasing aperture size. Using a detector with a smaller aperture, however, reduces the sensitivity of the transducer, so detectors with large apertures are widely preferred in circular-scan PAT imaging systems. Although several techniques have been proposed to improve the tangential resolution, they have inherent limitations such as high cost and the need for customized detectors. Herein, we propose a novel deep learning architecture to counter the spatially variant tangential resolution in circular-scan PAT imaging systems. We used a fully dense U-Net-based convolutional neural network architecture with nine residual blocks to improve the tangential resolution of the PAT images. The network was trained on simulated datasets, and its performance was verified by experimental in vivo imaging. Results show that the proposed deep learning network improves the tangential resolution approximately eightfold without compromising the structural similarity or quality of the images.

Data on laser-induced damage after multiple-pulse irradiation in the nanosecond time regime are limited. Because the laser safety standard is based on damage experiments, determining damage thresholds is crucial. To better understand the underlying damage mechanism after repetitive irradiation, we generated damage thresholds for pulse sequences of up to N = 20,000 with 1.8 ns pulses, using a square-core fiber and a pulsed Nd:YAG laser. Porcine retinal pigment epithelial layers were used as tissue samples, irradiated with six pulse sequences, and evaluated for damage by fluorescence microscopy. The damage threshold decreased from 31.16 µJ for N = 1 to 11.56 µJ for N = 20,000. This reduction indicates a photochemical damage mechanism once a critical energy dose is reached.

The current study aims to investigate the effects of micro-lens arrays (MLA) and diffractive optical elements (DOE) on skin tissue via intra-dermal laser-induced optical breakdown (LIOB) after irradiation with 1064 nm picosecond laser light at high energy settings. Irradiation with MLA and DOE was tested on dimming paper, a tissue-mimicking phantom, and dark-pigmented porcine skin to quantitatively compare the distributions of micro-beams, micro-bubbles, and laser-induced vacuoles in the skin. The DOE yielded more uniform distributions of the micro-beams on the paper and of the laser-induced micro-bubbles in the phantom than the MLA. The ex vivo skin test confirmed that DOE-assisted irradiation produced a more homogeneous distribution of micro-beams on the tissue surface (deviation of ≤ 3%) and a higher density of small laser-induced vacuoles (∼78 µm) in the dermis than MLA-assisted irradiation (deviation of ∼26% and vacuoles of ∼163 µm). DOE-assisted picosecond laser irradiation may therefore help to achieve deep, uniformly generated vacuolization under the basal membrane after intra-dermal LIOB for effective fractional skin treatment.

Isotropic 3D histological imaging of large biological specimens is highly desired but remains challenging for current fluorescence microscopy techniques. Here we present a new method, termed deep-learning super-resolution light-sheet add-on microscopy (Deep-SLAM), that enables fast, isotropic light-sheet fluorescence imaging on a conventional wide-field microscope. After integrating a minimized add-on device that transforms an inverted microscope into a 3D light-sheet microscope, we further incorporate a deep neural network (DNN) procedure to quickly restore the ambiguous z-reconstructed planes that suffer from the still-insufficient axial resolution of the light-sheet illumination, thereby achieving isotropic 3D imaging of thick biological specimens at single-cell resolution. We apply this easy, cost-effective Deep-SLAM approach to the anatomical imaging of single neurons in a meso-scale mouse brain, demonstrating its potential for readily converting commonly used, commercially available 2D microscopes into high-throughput 3D imaging platforms, a capability previously exclusive to high-end microscopy implementations.
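The Deep-SLAM summary above hinges on a DNN that restores axially blurred z-reconstructed planes to near-isotropic resolution. As a rough, assumption-heavy illustration of how such an isotropic-restoration step is commonly set up, and explicitly not the authors' published network, the PyTorch sketch below defines a small residual CNN that maps low-axial-resolution x-z slices to sharper targets, together with a bare-bones training loop; the layer counts, loss, and data loader are placeholders.

```python
# Illustrative isotropic-restoration sketch (not the published Deep-SLAM model).
# A small residual CNN learns to map axially blurred x-z slices to sharp targets.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # residual connection

class IsoRestorationNet(nn.Module):
    """Map a single-channel blurred x-z slice to a restored slice."""
    def __init__(self, channels: int = 32, n_blocks: int = 6):  # placeholder sizes
        super().__init__()
        self.head = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(channels) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.tail(self.blocks(self.head(x)))

def train(model, loader, epochs: int = 50, lr: float = 1e-4):
    """`loader` is assumed to yield (blurred_xz, sharp_xz) tensor pairs of shape
    (batch, 1, H, W); building such pairs, e.g. by synthetically blurring
    laterally sharp planes, is left to the user."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for blurred, sharp in loader:
            opt.zero_grad()
            loss = loss_fn(model(blurred), sharp)
            loss.backward()
            opt.step()
    return model
```

After training on paired slices, the model would be applied slice by slice to the z-reconstructed planes of a volume; the same generic recipe underlies most learning-based isotropic-restoration pipelines, whatever the exact architecture.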