Second, we build an encoding network to obtain edge information from NIR images. Finally, we combine all features and feed them into a decoding network for fusion. Experimental results demonstrate that the proposed fusion network produces visually pleasing results with fine details, little noise, and natural color, and that it is superior to state-of-the-art methods in terms of both visual quality and quantitative measurements.

The design of optimal control laws for nonlinear systems is tackled without knowledge of the underlying plant or of a functional description of the cost function. The proposed data-driven method is based only on real-time measurements of the state of the plant and of the (instantaneous) value of the reward signal, and relies on a combination of ideas borrowed from the theories of optimal and adaptive control. As a result, the architecture implements a policy iteration strategy in which, hinging on the use of neural networks, the policy evaluation step and the computation of the information instrumental to the policy improvement step are performed in a purely continuous-time fashion. Furthermore, the desirable features of the design method, including convergence rate and robustness properties, are discussed. Finally, the theory is validated via two benchmark numerical simulations.
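As a point of reference for the policy evaluation / policy improvement alternation described above, the sketch below runs classical continuous-time policy iteration (Kleinman's algorithm) on a toy linear-quadratic problem. Unlike the method in the abstract, which is model-free and neural-network-based, this sketch assumes a known plant (A, B) and cost weights (Q, R); all numerical values are illustrative.

```python
# Minimal sketch of continuous-time policy iteration (Kleinman's algorithm)
# on a known linear-quadratic problem. The plant (A, B) and weights (Q, R)
# are illustrative assumptions used only to show the policy evaluation /
# policy improvement alternation.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[-1.0, 1.0], [0.0, -2.0]])  # open-loop stable toy plant (assumed)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                              # state cost weight
R = np.array([[1.0]])                      # input cost weight

K = np.zeros((1, 2))  # initial stabilizing policy u = -Kx (valid since A is stable)
for i in range(10):
    Ac = A - B @ K
    # Policy evaluation: solve Ac^T P + P Ac = -(Q + K^T R K) for the cost matrix P.
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    # Policy improvement: greedy update of the feedback gain.
    K = np.linalg.solve(R, B.T @ P)

P_star = solve_continuous_are(A, B, Q, R)  # reference Riccati solution
print("||P - P*|| =", np.linalg.norm(P - P_star))  # ~0 after convergence
```

Each iteration performs one policy evaluation (a Lyapunov solve) followed by one policy improvement; the iterates converge to the solution of the algebraic Riccati equation, which is the fixed point the data-driven scheme also targets.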
In spite of achieving promising results in hyperspectral image (HSI) restoration, deep-learning-based methodologies still face the problem of spectral or spatial information loss because they neglect the internal correlations of HSIs. To address this issue, we propose an innovative deep recurrent convolution neural network (DnRCNN) model for HSI destriping. To the best of our knowledge, this is the first study on HSI destriping from the perspective of intra-band and inter-band correlation exploration with a recurrent convolution neural network. In the novel DnRCNN, a selective recurrent memory unit (SRMU) is designed to extract the correlated features of the spectral and spatial domains, respectively. Moreover, an innovative recurrent fusion (RF) strategy incorporated with group concatenation is further proposed to remove stripe noise and preserve scene details using the complementary features from the SRMU. Experimental results on extensive HSI datasets validate that the proposed method achieves new state-of-the-art (SOTA) HSI destriping performance.

Single-cell RNA sequencing (scRNA-seq) provides a powerful approach for profiling transcriptomes at single-cell resolution. Currently, existing single-cell clustering methods are based exclusively on gene-level expression data, without considering alternative splicing information. We therefore hypothesize that adding information about alternative splicing may help enhance single-cell clustering, which motivates us to develop a way to integrate isoform-level and gene-level expression. We report an approach that enhances single-cell clustering by integrating isoform-level expression through orthogonal projection. First, we construct an orthogonal projection matrix based on the gene expression data. Second, isoforms are projected onto the gene space to remove the redundant information between them. Third, isoform selection is performed based on the residual of the projected expression, and the selected isoforms are combined with the gene expression data for subsequent clustering. We applied our method to sixteen scRNA-seq datasets. We find that alternative splicing carries differential information among cell types and can be integrated to enhance single-cell clustering. Compared with using only gene-level expression data, the integration of isoform-level expression leads to better clustering performance on most of the datasets. The integration of isoform-level expression also shows potential for detecting novel cell subgroups.
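The orthogonal-projection step lends itself to a compact illustration. The sketch below, on synthetic count matrices, builds the projector from the gene matrix, scores isoforms by the residual left after projection, and clusters on the combined matrix. The matrix shapes, the top-k selection rule, and the use of residuals in the combined matrix are assumptions made for illustration, not the paper's exact implementation.

```python
# Illustrative pipeline: project isoform-level expression onto the
# gene-expression space, keep isoforms with large residuals (information
# not explained by genes), and cluster on the combined matrix.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
genes = rng.poisson(5.0, size=(200, 50)).astype(float)      # cells x genes (toy data)
isoforms = rng.poisson(5.0, size=(200, 120)).astype(float)  # cells x isoforms (toy data)

# Step 1: orthogonal projector onto the column space of the gene matrix.
P = genes @ np.linalg.pinv(genes)             # (cells x cells), P = G G^+

# Step 2: project isoforms onto the gene space; the residual is the
# component of isoform expression not already captured by genes.
residual = isoforms - P @ isoforms

# Step 3: keep the isoforms with the largest residual energy and combine
# them with gene expression for clustering (residuals used here by choice,
# to avoid re-introducing redundancy).
k = 30                                         # number of isoforms kept (assumed)
scores = np.linalg.norm(residual, axis=0)      # one score per isoform
selected = np.argsort(scores)[-k:]
combined = np.hstack([genes, residual[:, selected]])

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(combined)
print(np.bincount(labels))
```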
An accurate estimation of the glomerular filtration rate (GFR) is clinically crucial for diagnosing kidney disease and predicting the prognosis of chronic kidney disease (CKD). Machine learning methodologies such as deep neural networks provide a potential avenue for increasing the accuracy of GFR estimation. We developed a novel deep learning architecture, a deep and shallow neural network, to estimate GFR (dlGFR for short) and examined its performance relative to the estimated GFR from the Modification of Diet in Renal Disease (MDRD) and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations. The dlGFR model jointly trains a shallow learning model and a deep neural network to enable both a linear transformation from input features to a log-GFR target and a non-linear feature embedding for kidney-function stage classification. We validated the proposed method on data from multiple studies obtained from the NIDDK Central Database Repository. The deep learning model predicted GFR within 30% of measured GFR with 88.3% accuracy, compared with the 87.1% and 84.7% accuracy achieved by the CKD-EPI and MDRD equations (p = 0.051 and p < 0.001, respectively). Our results suggest that deep learning methods are superior to equations derived from traditional statistical methods in estimating GFR. Based on these results, an end-to-end prediction system has been deployed to facilitate use of the proposed dlGFR algorithm.

Many upper-limb prostheses lack proper wrist-rotation functionality, forcing users to adopt poor compensatory strategies that lead to overuse or abandonment. In this study, we investigate the validity of creating and implementing a data-driven predictive control strategy in object-grasping tasks performed in virtual reality. We propose using gaze-centered vision to predict the wrist rotations of a user, and we conduct a user study to investigate the impact of this predictive control. We demonstrate that this vision-based predictive system decreases compensatory shoulder movement as well as task completion time. We discuss the cases in which the virtual prosthesis with the predictive model did and did not yield a physical improvement across various arm movements, and the cognitive value of implementing such predictive control strategies in prosthetic controllers. We find that gaze-centered vision provides information about the user's intent during object reaching, and that the performance of prosthetic hands improves greatly when wrist prediction is implemented. Lastly, we address the limitations of this work, both for the study itself and for future physical implementations.

Deep object detection models trained on clean images may not generalize well to degraded images due to the well-known domain shift issue. This hinders their application in real-life scenarios such as video surveillance and autonomous driving. Though domain adaptation methods can adapt a detection model from a labeled source domain to an unlabeled target domain, they struggle with open and compound degradation types. In this paper, we attempt to address this problem in the context of object detection by proposing a robust object Detector via Adversarial Novel Style Exploration (DANSE). Technically, DANSE first disentangles images into a domain-irrelevant content representation and a domain-specific style representation under an adversarial learning framework. Then, it explores the style space to discover diverse novel degradation styles that are complementary to those of the target domain images, by leveraging a novelty regularizer and a diversity regularizer. The clean source domain images are transferred into these discovered styles, using a content-preserving regularizer to ensure realism. The transferred source domain images are combined with the target domain images and used to train a robust, degradation-agnostic object detection model via adversarial domain adaptation. Experiments on both synthetic and real benchmark scenarios confirm the superiority of DANSE over state-of-the-art methods.

Video Summarization (VS) has become one of the most effective solutions for quickly understanding a large volume of video data. Dictionary selection with self-representation and sparse regularization has demonstrated its promise for VS by formulating the VS problem as a sparse selection task on video frames. However, existing dictionary selection models are generally designed only for data reconstruction, which neglects the inherent structural information among video frames. In addition, the sparsity commonly imposed by the L2,1 norm is not strong enough, which causes redundancy among keyframes, i.e., similar keyframes are selected. To address these two issues, in this paper we propose a general framework called graph convolutional dictionary selection with L2,p (0 < p ≤ 1) norm (GCDS2,p) for both keyframe selection and skimming-based summarization. Firstly, we incorporate graph embedding into dictionary selection to generate a graph embedding dictionary, which takes the structural information depicted in videos into account. Secondly, we propose L2,p-norm-constrained row sparsity, in which p can be flexibly set for the two forms of video summarization: for keyframe selection, 0 < p < 1 can be utilized to select diverse and representative keyframes, and for skimming, p = 1 can be utilized to select key shots. In addition, an efficient iterative algorithm is devised to optimize the proposed model, and its convergence is theoretically proven. Experimental results for both keyframe selection and skimming-based summarization on four benchmark datasets demonstrate the effectiveness and superiority of the proposed method.
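To make the role of the L2,p regularizer concrete, the sketch below evaluates it on a toy coefficient matrix and selects keyframes from the strongest rows. The random matrix and the top-k selection rule stand in for the output of the paper's iterative solver, which is not reproduced here.

```python
# Minimal sketch of L2,p row-sparsity for dictionary selection. Given a
# self-representation coefficient matrix W (one row per candidate frame),
# the regularizer sum_i ||W_i||_2^p with 0 < p < 1 pushes whole rows to
# zero more aggressively than the L2,1 norm (p = 1); surviving rows mark
# the selected keyframes. W below is toy data, not a solver output.
import numpy as np

def l2p_norm(W: np.ndarray, p: float) -> float:
    """Sum over rows of (row L2 norm)^p -- the L2,p regularizer."""
    return float(np.sum(np.linalg.norm(W, axis=1) ** p))

rng = np.random.default_rng(1)
W = rng.normal(size=(100, 100))    # toy coefficient matrix (frames x frames)
W[rng.random(100) < 0.8] *= 0.01   # most rows near zero, as a sparse solver would leave them

print("L2,1 value:  ", l2p_norm(W, 1.0))
print("L2,0.5 value:", l2p_norm(W, 0.5))

row_strength = np.linalg.norm(W, axis=1)
keyframes = np.argsort(row_strength)[-5:]  # 5 strongest rows as keyframes (illustrative)
print("selected keyframes:", sorted(keyframes.tolist()))
```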
Common representations of light fields use four-dimensional data structures, in which a given pixel is closely related not only to its spatial neighbours within the same view, but also to its angular neighbours co-located in adjacent views. Such a structure presents increased redundancy between pixels when compared with regular single-view images. These redundancies can be exploited to obtain compressed representations of the light field, using prediction algorithms specifically tailored to estimate pixel values from both spatial and angular references. This paper proposes new encoding schemes that take advantage of the four-dimensional light field data structures to improve the coding performance of Minimum Rate Predictors. The proposed methods expand previous research on lossless coding beyond the current state-of-the-art. The experimental results, obtained using both traditional datasets and other, more challenging ones, show bit-rate savings of no less than 10% compared with existing methods for lossless light field compression.

Existing Quality Assessment (QA) algorithms consider identifying "black holes" to assess the perceptual quality of 3D-synthesized views. However, advancements in rendering and inpainting techniques have made black-hole artifacts nearly obsolete. Further, 3D-synthesized views frequently suffer from stretching artifacts caused by occlusion, which in turn degrade perceptual quality. Existing QA algorithms are inefficient at identifying these artifacts, as evidenced by their performance on the IETR dataset. We found, empirically, that there is a relationship between the number of blocks with stretching artifacts in a view and its overall perceptual quality. Building on this observation, we propose a Convolutional Neural Network (CNN) based algorithm that identifies the blocks with stretching artifacts and incorporates their count to predict the quality of 3D-synthesized views. To address the small sample size of the existing 3D-synthesized-views dataset, we collect images from other related datasets to enlarge the training set and improve generalization when training our proposed CNN-based algorithm.
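The block-counting idea can be sketched as follows. The trained CNN classifier is replaced here by a hypothetical heuristic (stretching smears texture along one direction, so horizontal gradient energy collapses); the block size, threshold, and the linear mapping from artifact count to a quality score are likewise illustrative assumptions.

```python
# Scaffolding sketch: tile the synthesized view into blocks, flag blocks
# containing stretching artifacts, and feed the count into the quality
# prediction. The heuristic below is a stand-in for the CNN classifier.
import numpy as np

def is_stretched(block: np.ndarray, thresh: float = 0.5) -> bool:
    """Stand-in for the CNN block classifier (hypothetical heuristic)."""
    gx = np.abs(np.diff(block, axis=1)).mean()  # horizontal gradient energy
    gy = np.abs(np.diff(block, axis=0)).mean()  # vertical gradient energy
    return gx < thresh * gy                     # texture smeared horizontally

def count_stretched_blocks(img: np.ndarray, bs: int = 32) -> int:
    h, w = img.shape
    return sum(
        is_stretched(img[y:y + bs, x:x + bs])
        for y in range(0, h - bs + 1, bs)
        for x in range(0, w - bs + 1, bs)
    )

rng = np.random.default_rng(2)
view = rng.random((256, 256))
view[:, 96:160] = np.repeat(view[:, 96:97], 64, axis=1)  # simulate a stretched band

n = count_stretched_blocks(view)
total = (256 // 32) ** 2
quality = 1.0 - n / total   # toy linear mapping from artifact count to quality
print(f"stretched blocks: {n}/{total}, toy quality score: {quality:.2f}")
```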