Multimodal sensing can provide comprehensive and accurate biological information for diagnosis. This paper presents a fully integrated wireless multimodal sensing chip with voltammetric electrochemical sensing over a scan-rate range of 0.08–400 V/s, temperature monitoring, and biphasic electrical stimulation for monitoring wound-healing progress. The time-based readout circuitry achieves a 120× scalable resolution through dynamic threshold-voltage adjustment. A low-noise analog waveform generator is designed using current-reducer techniques to eliminate large passive components. The chip is fabricated in a 0.18 µm CMOS process. The design achieves an R² linearity of 0.995 over a wide current-detection range (2 pA–12 µA) while consuming 49 µW from a 1.2 V supply. The temperature-sensing circuit achieves a 43 mK resolution from 20 °C to 80 °C. The current stimulator provides an output current ranging from 8 µA to 1 mA into load impedances of up to 3 kΩ. A wake-up receiver with data correlators is used to control the operation modes. The sensing data are wirelessly transmitted to external readers. The proposed sensing IC is verified by measuring critical biomarkers, including C-reactive protein, uric acid, and temperature.

Identifying cell types is one of the main goals of single-cell RNA sequencing (scRNA-seq) analysis, and clustering is a common approach to this task. However, the massive amount of data and the high noise level pose challenges for single-cell clustering. To address this challenge, in this paper we introduce a novel method named single-cell clustering based on denoising autoencoder and graph convolution network (scCDG), which consists of two core models. The first model is a denoising autoencoder (DAE) used to fit the data distribution for data denoising.
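The denoising objective behind a DAE, reconstructing clean data from a corrupted input, can be illustrated with a minimal tied-weight linear sketch in NumPy. The toy data, layer size, noise level, and learning rate below are illustrative assumptions, not the scCDG configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expression" data: 200 cells x 50 genes with low-rank structure.
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 50))
X_noisy = X + rng.normal(scale=0.5, size=X.shape)  # corrupted input

d, k, lr = X.shape[1], 8, 1e-3
W = rng.normal(scale=0.1, size=(d, k))  # tied encoder/decoder weights

def loss_and_grad(W):
    H = X_noisy @ W            # encode the *noisy* data
    X_hat = H @ W.T            # decode with tied weights
    R = X_hat - X              # error against the *clean* data
    loss = np.mean(R ** 2)
    G = 2.0 * R / R.size       # dL/dX_hat
    grad = (X_noisy.T @ G + G.T @ X_noisy) @ W
    return loss, grad

losses = []
for _ in range(300):
    loss, grad = loss_and_grad(W)
    losses.append(loss)
    W -= lr * grad

print(losses[0], losses[-1])  # reconstruction error should drop
```

The actual scCDG DAE is a deep nonlinear network fit to scRNA-seq data; this tied-weight linear version only illustrates the training signal of reconstructing clean from corrupted input.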
The second model is a graph autoencoder using a graph convolution network (GCN), which projects the data into a compressed low-dimensional space while simultaneously preserving the topological structure and feature information of the scRNA-seq data. Extensive analysis on seven real scRNA-seq datasets demonstrates that scCDG outperforms state-of-the-art methods in several research sub-fields, including single-cell clustering, visualization of the transcriptome landscape, and trajectory inference.

Identification of transcription factor binding sites (TFBSs) is essential for revealing the rules of protein-DNA binding. Although several computational methods have been proposed to predict TFBSs using epigenomic and sequence features, most of them ignore the features common across cell types. It is still unclear to what extent these common features could help with this task. To this end, we propose a new method (named Attention-augmented Convolutional Neural Network, or ACNN) to predict TFBSs. ACNN uses attention-augmented convolutional layers to capture global and local contexts in DNA sequences, and employs convolutional layers to capture features of histone modification markers. In addition, ACNN adopts private and shared convolutional neural network (CNN) modules to learn specific and common features, respectively. To encourage the shared CNN module to learn the common features, adversarial training is applied in ACNN. The results on 253 ChIP-seq datasets show that ACNN outperforms other existing methods. The attention-augmented convolutional layers and the adversarial training mechanism in ACNN effectively improve the prediction performance. Moreover, in the case of limited labeled data, ACNN also performs better than a baseline method.
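The attention-augmented convolution idea, pairing a local convolution with global self-attention over the sequence, can be sketched in NumPy. The kernel width, channel counts, single attention head, and random untrained weights below are illustrative assumptions, not the ACNN architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def one_hot(seq):
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    X = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        X[i, idx[b]] = 1.0
    return X

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def aug_conv1d(X, conv_w, Wq, Wk, Wv):
    """Concatenate local conv features with global self-attention features."""
    L, d = X.shape
    k = conv_w.shape[0]              # kernel width
    pad = k // 2
    Xp = np.pad(X, ((pad, pad), (0, 0)))
    # Local branch: same-padded 1D convolution over the one-hot sequence.
    conv = np.stack([
        np.tensordot(Xp[i:i + k], conv_w, axes=([0, 1], [0, 1]))
        for i in range(L)
    ])                                # (L, c_out)
    # Global branch: single-head self-attention over all positions.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    attn = softmax(Q @ K.T / np.sqrt(Wq.shape[1])) @ V  # (L, d_v)
    return np.concatenate([conv, attn], axis=1)

X = one_hot("ACGTGACCTGA")
c_out, d_k, d_v = 8, 6, 4
conv_w = rng.normal(size=(5, 4, c_out))
Wq, Wk, Wv = (rng.normal(size=(4, d_k)), rng.normal(size=(4, d_k)),
              rng.normal(size=(4, d_v)))
Y = aug_conv1d(X, conv_w, Wq, Wk, Wv)
print(Y.shape)  # (11, 12): 8 conv channels + 4 attention channels
```

The convolution captures local motif-like patterns while the attention term lets every position attend to the whole sequence, which is the global/local combination the abstract describes.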
We further visualize the convolution kernels as motifs to illustrate the interpretability of ACNN.

Electrochemical impedance spectroscopy (EIS) is gaining immense popularity due to its ease of integration with microelectronics. With this in mind, various detection schemes have been developed to make impedance-based detection of nucleic acids more specific. In this context, the current work makes a strong case for specific DNA detection through EIS using a nanoparticle-labeling approach, with an added selectivity step through the use of dielectrophoresis (DEP), which enhances the detection sensitivity and specificity to match the real-time detection capability of quantitative polymerase chain reaction (qPCR) as compared to individually amplified DNA [1]. The detection limit of the proposed biochip is observed to be 3-4 PCR cycles for 582 bp bacterial DNA, and the complete detection procedure takes less than 10 min. The integrated DEP capture of labeled PCR products and their impedance-assisted detection are carried out in an in-house micro-fabricated biochip. Gold nanoparticles, which possess excellent optical, chemical, electronic, and biocompatibility properties and can generate lump-like DNA structures without modifying the DNA's basic impedance signature, are introduced to the amplified DNA through nanoparticle-labeled primers.

Magnetic nanoparticles (MNPs) have been widely studied for use in biomedical and industrial applications. The frequency dependence of the magnetization of magnetic nanoparticles is analyzed for different AC excitation fields. We employ a Fokker-Planck equation, which accurately describes AC magnetization dynamics, and analyze the difference in AC susceptibility between the Fokker-Planck equation and the Debye model. Based on these results, we propose a simple, empirical AC susceptibility model.
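The Debye model used as the baseline here has the closed form χ(ω) = χ₀ / (1 + jωτ). A short NumPy sketch, with an assumed static susceptibility and relaxation time rather than measured MNP parameters, shows the in-phase and out-of-phase components and the characteristic peak of χ″ at ωτ = 1:

```python
import numpy as np

chi0 = 1.0   # static susceptibility (assumed toy value)
tau = 1e-5   # effective relaxation time in seconds (assumed toy value)

omega = np.logspace(3, 7, 2001)          # angular-frequency sweep
chi = chi0 / (1.0 + 1j * omega * tau)    # Debye model: chi0 / (1 + j*w*tau)

chi_real = chi.real    # in-phase component chi'
chi_imag = -chi.imag   # out-of-phase component chi''

# chi'' = chi0 * w*tau / (1 + (w*tau)^2) peaks where w*tau = 1,
# i.e. at f = 1 / (2*pi*tau), with peak value chi0 / 2.
w_peak = omega[np.argmax(chi_imag)]
print(w_peak * tau, chi_imag.max())  # ~1.0 and ~0.5
```

The peak position of χ″ is what makes the model useful for hydrodynamic-size and temperature estimation: both shift the effective τ, and hence the peak frequency.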
Simulation and experimental results show that the proposed empirical model accurately describes the AC susceptibility, and the susceptibility constructed with the proposed empirical equation based on the Debye model agrees well with the measured results. We can therefore utilize the proposed empirical model in biomedical applications, such as the estimation of hydrodynamic size and temperature, which is expected to benefit biological assays and hyperthermia.

Caricature is an artistic style of depicting human faces that attracts considerable attention in the entertainment industry. So far, only a few 3D caricature generation methods exist, and all of them require some caricature information (e.g., a caricature sketch or a 2D caricature) as input. Such input, however, is difficult for non-professional users to provide. In this paper, we propose an end-to-end deep neural network model that generates a high-quality 3D caricature directly from a simple normal face photo. The most challenging issue in our system is that the source domain of face photos (characterized by 2D normal faces) is significantly different from the target domain of 3D caricatures (characterized by 3D exaggerated face shapes and textures). To address this challenge, we (1) build a large dataset of 6,100 3D caricature meshes and use it to establish a PCA model of the 3D caricature shape space, (2) reconstruct a 3D normal full head from the input face photo and use its PCA representation in the 3D caricature shape space to set up a correspondence between the input photo and the 3D caricature shape, and (3) propose a novel character loss and a novel caricature loss based on previous psychological studies of caricatures. Experiments, including a novel two-level user study, show that our system can generate high-quality 3D caricatures directly from normal face photos.

We present a novel two-stage approach for automated floorplan design in residential buildings with a given exterior wall boundary.
Our approach has the unique advantage of being human-centric: the generated floorplans are not only geometrically plausible but also topologically reasonable, enhancing resident interaction with the environment. From the input boundary, we first synthesize a human-activity map that reflects both the spatial configuration and the human-environment interaction in an architectural space. We propose to produce the human-activity map either automatically, by a pre-trained generative adversarial network (GAN) model, or semi-automatically, by synthesizing it with user manipulation of the furniture. Second, we feed the human-activity map into our deep framework, ActFloor-GAN, to guide a pixel-wise prediction of room types. We adopt a re-formulated cycle-consistency constraint in ActFloor-GAN to maximize the overall prediction performance, so that we can produce high-quality room layouts that are readily convertible to vectorized floorplans. Experimental results show several benefits of our approach. First, a quantitative analysis of ablated techniques shows the superior performance of leveraging the human-activity map in predicting piecewise room types. Second, a subjective evaluation by architects shows that our results have compelling quality as professionally-designed floorplans and are much better than those generated by existing methods in terms of room-layout topology. Last, our approach allows manipulation of the furniture placement, considers the human activities in the environment, and enables the incorporation of user design preferences.

Spatial redundancy commonly exists in the learned representations of convolutional neural networks (CNNs), leading to unnecessary computation on high-resolution features. In this paper, we propose a novel Spatially Adaptive feature Refinement (SAR) approach to reduce such superfluous computation.
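The core idea, a cheap low-resolution pass everywhere plus full-resolution refinement of only a few selected regions, can be caricatured in NumPy. The variance-based region scoring below is an illustrative stand-in for SAR's learned selection mechanism, and the shapes and refinement fraction are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def avg_pool2(x):
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def sar_like(x, frac=0.4):
    """Cheap low-res pass everywhere; full-res values kept for a few regions."""
    coarse = upsample2(avg_pool2(x))          # low-resolution branch
    # Score each 2x2 region by how much the coarse pass loses there,
    # and select only the top `frac` of regions for refinement.
    region_score = avg_pool2((x - coarse) ** 2)
    k = int(frac * region_score.size)
    thresh = np.sort(region_score, axis=None)[-k]
    mask = upsample2((region_score >= thresh).astype(float))
    # Fuse: original-resolution values where selected, coarse elsewhere.
    return mask * x + (1.0 - mask) * coarse, mask

x = rng.normal(size=(16, 16))       # stand-in for one feature-map channel
y, mask = sar_like(x)
print(mask.mean())  # fraction of pixels kept at full resolution
```

In the real method both branches are learned convolutions and the selection is trained end-to-end; the sketch only shows why computation drops when most regions take the coarse path.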
It performs efficient inference by adaptively fusing information from two branches: one conducts standard convolution on input features at a lower spatial resolution, and the other selectively refines a set of regions at the original resolution. The two branches complement each other in feature learning, and both require much less computation than standard convolution. SAR is a flexible method that can be conveniently plugged into existing CNNs to build models with reduced spatial redundancy. Experiments on CIFAR and ImageNet classification, COCO object detection, and PASCAL VOC semantic segmentation validate that the proposed SAR consistently improves network performance and efficiency. Notably, our results show that SAR refines less than 40% of the regions in the feature representations of a ResNet for 97% of the samples in the ImageNet validation set while achieving accuracy comparable to the original model, revealing the high computational redundancy in the spatial dimension of CNNs.

Scene text erasing, which replaces text regions with reasonable content in natural images, has drawn significant attention in the computer vision community in recent years. There are two potential subtasks in scene text erasing: text detection and image inpainting. Both subtasks require considerable data to achieve good performance; however, the lack of a large-scale real-world scene-text removal dataset prevents existing methods from realizing their full potential. To compensate for the lack of pairwise real-world data, we make considerable use of synthetic text after additional enhancement and train our model only on the dataset generated by the improved synthetic-text engine. Our proposed network contains a stroke mask prediction module and a background inpainting module that can extract the text stroke as a relatively small hole from the cropped text image, preserving more background content for better inpainting results.
This model can partially erase text instances in a scene image with a bounding box or work with an existing scene-text detector for automatic scene text erasing.
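The two-stage idea (predict a stroke mask, then inpaint only the stroke pixels) can be caricatured with a diffusion-style fill in NumPy. In the real system both the mask and the inpainting are produced by learned networks; here the mask is given and the fill rule, image, and stroke below are purely illustrative assumptions:

```python
import numpy as np

def erase_text(img, stroke_mask, iters=200):
    """Toy background inpainting: replace stroke pixels by repeatedly
    averaging their 4-neighbours (diffusion fill). Background pixels
    outside the mask are held fixed; only stroke pixels are updated."""
    out = img.astype(float).copy()
    out[stroke_mask] = out[~stroke_mask].mean()   # crude initialisation
    for _ in range(iters):
        p = np.pad(out, 1, mode="edge")
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4
        out[stroke_mask] = avg[stroke_mask]
    return out

# Smooth background gradient with a bright "text stroke" across it.
background = np.tile(np.linspace(0.2, 0.8, 12), (12, 1))
mask = np.zeros_like(background, dtype=bool)
mask[5:7, 2:10] = True                 # thin stroke region, like text
img = background.copy()
img[mask] = 1.0                        # the stroke to erase

clean = erase_text(img, mask)
print(np.abs(clean - background).max())  # stroke blends into the background
```

Treating the stroke as a small hole, rather than masking the whole text bounding box, is exactly why the module preserves more background: the fill only has to reconstruct the thin stroke pixels.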