Resting-state functional magnetic resonance imaging (rs-fMRI) reflects the functional activity of brain regions through blood-oxygen-level-dependent (BOLD) signals. Many computer-aided diagnosis methods based on rs-fMRI have been developed for Autism Spectrum Disorder (ASD). Most of these are binary classification approaches that determine whether or not a subject is an ASD patient. However, the disorder comprises several sub-categories, which are complex and difficult for automatic classification methods to distinguish. Moreover, existing methods usually focus on functional connectivity (FC) features in gray matter regions, which account for only a small portion of the rs-fMRI data. Recently, the possibility of revealing connectivity information in the white matter regions of rs-fMRI has drawn considerable attention. To this end, we propose to use patch-based functional correlation tensor (PBFCT) features extracted from rs-fMRI in white matter, in addition to the traditional FC features from gray matter, to develop a novel multi-class ASD diagnosis method. Our method has two stages. In the first stage, multi-source domain adaptation (MSDA), the source subjects belonging to multiple clinical centers (hence called source domains) are all transformed into the same target feature space, so that each subject in the target domain can be linearly reconstructed from the transformed source subjects. In the second stage, multi-view sparse representation (MVSR), a multi-view classifier for multi-class ASD diagnosis is developed by jointly using both views of the FC and PBFCT features. Experimental results on the ABIDE dataset verify the effectiveness of our method, which is capable of accurately classifying each subject into its respective ASD sub-category.

Histopathological image analysis is a challenging task due to the diversity of histology features as well as the presence of large non-informative regions in whole-slide images. In this paper, we propose a multiple-instance learning (MIL) method for image-level classification as well as for annotating relevant regions in the image. In MIL, a common assumption is that negative bags contain only negative instances while positive bags contain one or more positive instances. This asymmetric assumption may be inappropriate for application scenarios where negative bags also contain representative negative instances. We introduce a novel symmetric MIL framework that associates each instance in a bag with an attribute, which can be negative, positive, or irrelevant. We extend the notion of relevance by introducing control over the number of relevant instances. We develop a probabilistic graphical model that incorporates this paradigm, together with a corresponding computationally efficient inference procedure for learning the model parameters and obtaining an instance-level attribute classifier. The effectiveness of the proposed method is evaluated on available histopathology datasets with promising results.
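
As a toy illustration of the difference between the asymmetric assumption and the symmetric attribute view described above, the following sketch scores instances and assigns each one of the three attributes. It is a minimal illustrative example, not the paper's probabilistic graphical model; the thresholds `t_pos`/`t_neg` and the `max_relevant` cap are hypothetical parameters introduced only to mirror the idea of controlling the number of relevant instances.

```python
import numpy as np

def asymmetric_bag_label(scores):
    """Standard MIL: a bag is positive iff at least one instance
    scores above zero (a single positive instance suffices)."""
    return int(np.max(scores) > 0)

def symmetric_attributes(scores, t_pos=0.5, t_neg=-0.5, max_relevant=3):
    """Toy symmetric labeling: each instance gets an attribute in
    {positive, negative, irrelevant}; only the `max_relevant` strongest
    instances (by |score|) may be marked relevant at all."""
    order = np.argsort(-np.abs(scores))          # strongest evidence first
    attrs = np.full(len(scores), "irrelevant", dtype=object)
    for i in order[:max_relevant]:
        if scores[i] >= t_pos:
            attrs[i] = "positive"
        elif scores[i] <= t_neg:
            attrs[i] = "negative"
    return attrs

scores = np.array([0.9, -0.8, 0.1, -0.05, 0.7])  # per-instance classifier scores
print(asymmetric_bag_label(scores))              # -> 1
print(symmetric_attributes(scores))              # both positive and negative
                                                 # instances marked relevant
```
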
Recently, there has been increasing interest in the convolution process (CP) for constructing multivariate Gaussian processes (MGP), which extend the Gaussian process (GP) to handle multiple outputs. The CP is based on the idea of sharing latent functions across several convolutions. Despite the elegance of the CP construction, it poses new challenges that have yet to be tackled. First, even with a moderate number of outputs, model building is prohibitively expensive due to the steep increase in computational demands and in the number of parameters to be estimated. Second, negative transfer of knowledge may occur when some outputs do not share commonalities. In this paper, we address these issues by proposing a regularized pairwise modeling approach for the MGP established using the CP. The key feature of our approach is to distribute the estimation of the full multivariate model across a group of individually built bivariate GPs. Interestingly, pairwise modeling turns out to possess unique characteristics that allow us to tackle the challenge of negative transfer by penalizing the latent function that facilitates information sharing in each bivariate model. Statistical guarantees are established, and the advantageous features of the method are demonstrated through numerical studies.

In many real-world scenarios, data from multiple modalities (sources) are collected during a development phase; such data are referred to as multiview data. While the additional information from multiple views often improves performance, collecting data from these additional views during the testing phase may not be desirable due to the high cost of measuring them or their unavailability. Therefore, in many applications, despite having a multiview training set, it is desirable to perform testing using data from only one view. In this paper, we present a multiview feature selection method that leverages the knowledge of all views and uses it to guide the feature selection process in an individual view. We realize this via a multiview feature weighting scheme such that the local margins of samples in each view are maximized and the similarities of samples to some reference points in different views are preserved. The proposed formulation can also be used for cross-view matching when the view-specific feature weights are pre-computed on an auxiliary data set. Promising results have been achieved on nine real-world data sets as well as three biometric recognition applications. On average, the proposed feature selection method reduces the classification error rate by 31% relative to the state of the art.

Active illumination is a prominent complement that makes 2D face recognition more robust, e.g., to spoofing attacks and low-light conditions. In the present work, we show that active illumination can be adopted to enhance state-of-the-art 2D face recognition approaches with 3D features while bypassing the complicated task of 3D reconstruction. The key idea is to project a high-spatial-frequency pattern onto the test face, which allows us to simultaneously recover real 3D information plus a standard 2D facial image. State-of-the-art 2D face recognition solutions can therefore be applied transparently, while complementary 3D facial features are extracted from the high-frequency component of the input image. Experimental results on the ND-2006 dataset show that the proposed ideas can significantly boost face recognition performance and dramatically improve robustness to spoofing attacks.
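
A minimal sketch of the underlying frequency-separation idea, on synthetic data: low-pass filtering the captured image approximates the standard 2D image, while the residual high-frequency component isolates the projected pattern from which 3D cues would be decoded. The Gaussian bandwidths and the synthetic stripe pattern are assumptions for illustration; the paper's actual pattern design and 3D feature extraction are more involved.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic stand-in for a captured face image: smooth "face" content
# plus a high-spatial-frequency projected pattern (fine vertical stripes).
face = gaussian_filter(rng.standard_normal((128, 128)), sigma=8)
xx = np.arange(128)
pattern = 0.2 * np.sin(2 * np.pi * xx / 4)[None, :]
captured = face + pattern

# The low-pass component approximates an ordinary 2D image usable by a
# standard 2D face recognizer; the high-frequency residual isolates the
# pattern, whose local deformations would encode depth.
low = gaussian_filter(captured, sigma=3)
high = captured - low

print(np.corrcoef(low.ravel(), face.ravel())[0, 1])   # close to 1
print(np.corrcoef(high.ravel(), pattern.repeat(128, 0).ravel())[0, 1])
```
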
Spatial resolution is one of the fundamental bottlenecks in time-resolved imaging. Since each pixel measures a scene-dependent time profile, there is a technological limit on the size of the pixel arrays that can be used simultaneously to perform measurements. To overcome this barrier, we propose a low-complexity, one-bit sensing scheme. On the data-capture front, the time-resolved measurements are mapped to a sequence of +1 and -1 values. This leads to an extremely simple implementation but at the same time introduces a new form of information loss. On the image-recovery front, our one-bit time-resolved imaging scheme is complemented with a non-iterative recovery algorithm that can handle both single and multiple light paths. Extensive computer simulations and physical experiments benchmarked against conventional Time-of-Flight imaging data corroborate our theoretical framework. Our low-complexity alternative to time-resolved imaging can thus potentially lead to a new imaging methodology.
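
The information loss and recoverability of one-bit capture can be conveyed with a small simulation: a pixel's time profile is reduced to its signs, yet the arrival time of a single light pulse can still be located by sign correlation. This is only an illustrative sketch under an assumed Gaussian pulse model and noise level; it is not the paper's non-iterative recovery algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512                                   # number of time bins
t = np.arange(n)

def pulse(delay, width=6.0):
    """Zero-mean template for a returning light pulse at `delay`."""
    p = np.exp(-0.5 * ((t - delay) / width) ** 2)
    return p - p.mean()

true_delay = 203.7
x = pulse(true_delay) + 0.1 * rng.standard_normal(n)

# One-bit capture: keep only the sign of each time-resolved sample.
y = np.sign(x)                            # sequence of +1 / -1

# Recovery by sign correlation against shifted templates: pick the
# delay maximizing agreement between y and the template's signs.
delays = np.arange(n)
scores = [np.dot(y, np.sign(pulse(d))) for d in delays]
print(delays[int(np.argmax(scores))])     # close to 204
```
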
Camera sensors rely on global or rolling shutter functions to expose an image. This fixed-function approach severely limits a sensor's ability to capture high-dynamic-range (HDR) scenes and resolve high-speed dynamics. Spatially varying pixel exposures have been introduced as a powerful computational photography approach to optically encode irradiance on a sensor and computationally recover additional information about a scene, but existing approaches rely on heuristic coding schemes and bulky spatial light modulators to optically implement these exposure functions. Here, we introduce neural sensors as a methodology to optimize per-pixel shutter functions jointly with a differentiable image processing method, such as a neural network, in an end-to-end fashion. Moreover, we demonstrate how to leverage emerging programmable and reconfigurable sensor-processors to implement the optimized exposure functions directly on the sensor. Our system takes specific limitations of the sensor into account to optimize physically feasible optical codes, and we demonstrate state-of-the-art performance for HDR and high-speed compressive imaging in simulation and experimentally with real scenes.

Lensless cameras, while extremely useful for imaging in constrained scenarios, struggle with scenes that have large depth variations. To address this, we propose imaging with a set of mask patterns displayed on a programmable mask and introduce a computational focusing operator that helps resolve the depth of scene points. As a result, the proposed imager can handle dense scenes with large depth variations, allowing for more practical applications of lensless cameras. We also present a fast reconstruction algorithm for scenes at multiple depths that reduces reconstruction time by two orders of magnitude. Finally, we build a prototype to show that the proposed method improves both the image quality and the depth resolution of lensless cameras.

Fuzzy objects composed of hair, fur, or feathers are impossible to scan even with the latest active or passive 3D scanners. We present a novel and practical neural rendering (NR) technique called neural opacity point cloud (NOPC) that allows high-quality rendering of such fuzzy objects from any viewpoint. NOPC employs a learning-based scheme to extract geometric and appearance features from 3D point clouds, including their opacity. It then maps the 3D features onto virtual viewpoints, where a new U-Net-based NR module handles noisy and incomplete geometry while maintaining translation equivariance. Comprehensive experiments on existing and new datasets show that NOPC can produce photorealistic renderings from multi-view setups such as a turntable system for capturing hair and furry toys.

Tensor Principal Component Pursuit (TPCP) is a powerful approach to Tensor Robust Principal Component Analysis (TRPCA), where the goal is to decompose a data tensor into a low-tubal-rank part plus a sparse residual. TPCP is known to be effective under certain tensor incoherence conditions, which can be restrictive in practice. In this paper, we propose Modified-TPCP, which incorporates prior subspace information into the analysis. With the aid of this prior information, the proposed method is able to recover the low-tubal-rank and sparse components under a significantly weaker incoherence assumption. We further design an efficient algorithm to implement Modified-TPCP based on the Alternating Direction Method of Multipliers (ADMM). The promising performance of the proposed method is supported by simulations and real-data applications.
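
For intuition about the decomposition TPCP targets, the simpler matrix analogue can be written in a few lines: robust PCA splits an observation into a low-rank part plus a sparse residual by alternating singular-value thresholding and element-wise soft thresholding within an ADMM iteration. The sketch below is this standard matrix routine, not Modified-TPCP itself; the tubal-rank machinery and prior-subspace weighting are omitted, and the parameter defaults follow common conventions.

```python
import numpy as np

def soft(x, tau):
    """Element-wise soft thresholding (proximal map of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca(M, lam=None, mu=None, iters=200):
    """Decompose M into low-rank L plus sparse S via an ADMM-style
    iteration for matrix robust PCA (min ||L||_* + lam*||S||_1,
    subject to L + S = M, with dual variable Y)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        # Singular-value thresholding step for the low-rank part.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft(sig, 1.0 / mu)) @ Vt
        # Soft-thresholding step for the sparse part.
        S = soft(M - L + Y / mu, lam / mu)
        # Dual ascent on the equality constraint residual.
        Y += mu * (M - L - S)
    return L, S

rng = np.random.default_rng(2)
L0 = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 60))   # rank 5
S0 = (rng.random((60, 60)) < 0.05) * rng.standard_normal((60, 60)) * 5
L, S = rpca(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # small recovery error
```
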
