Nolantemple0153


Traditional neural network compression (NNC) methods decrease model size and floating-point operations (FLOPs) by screening out unimportant weight parameters; however, the intrinsic sparsity characteristics have not been fully exploited. In this article, from the perspective of signal processing and analysis of network parameters, we propose a compressive sensing (CS)-based method, named NNCS, for performance improvement. NNCS is inspired by the discovery that the sparsity levels of weight parameters in the transform domain are greater than those in the original domain. First, to achieve sparse representations of parameters in the transform domain during training, we incorporate a constrained CS model into the loss function. Second, the proposed training process consists of two steps: the first step trains the raw weight parameters and induces and reconstructs their sparse representations, and the second step trains the transform coefficients to improve network performance. Finally, we transform the entire neural network into a new domain-based representation, yielding a sparser parameter distribution that facilitates inference acceleration. Experimental results demonstrate that NNCS significantly outperforms existing state-of-the-art methods in terms of parameter and FLOP reductions. With VGGNet on CIFAR-10, we remove 94.8% of the parameters and achieve a 76.8% reduction in FLOPs, with a 0.13% drop in Top-1 accuracy. With ResNet-50 on ImageNet, we remove 75.6% of the parameters and achieve a 78.9% reduction in FLOPs, with a 1.24% drop in Top-1 accuracy.

Supervised learning can be viewed as distilling relevant information from input data into feature representations. This process becomes difficult when supervision is noisy, as the distilled information might not be relevant.
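The transform-domain sparsity observation that motivates NNCS can be illustrated with a toy numpy sketch. Everything here is an illustrative stand-in, not the paper's setup: a structured weight matrix simply has far more near-zero entries after a 2-D DCT than before.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "weight matrix": smooth structure plus small noise, so energy
# concentrates in a few low-frequency transform coefficients.
n = 64
t = np.linspace(0, 1, n)
W = np.outer(np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)) \
    + 0.01 * rng.standard_normal((n, n))

def dct_matrix(n):
    # Orthonormal DCT-II basis, built directly from its definition.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

C = dct_matrix(n)
W_dct = C @ W @ C.T  # 2-D DCT of the weights

def near_zero_fraction(M, tol=1e-2):
    # Fraction of entries that a magnitude-pruning pass could drop.
    return float(np.mean(np.abs(M) < tol))

print(f"near-zero fraction, original domain:  {near_zero_fraction(W):.3f}")
print(f"near-zero fraction, transform domain: {near_zero_fraction(W_dct):.3f}")
```

Because the orthonormal transform packs the structured part of the signal into a few coefficients while leaving the noise floor spread out, thresholding in the transform domain zeroes far more entries at the same tolerance.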
In fact, recent research shows that networks can easily overfit all labels, including corrupted ones, and hence hardly generalize to clean datasets. In this article, we focus on the problem of learning with noisy labels and introduce a compression inductive bias into network architectures to alleviate this overfitting problem. More precisely, we revisit a classical regularization technique, Dropout, and its variant Nested Dropout. Dropout can serve as a compression constraint through its feature-dropping mechanism, while Nested Dropout further learns feature representations ordered by importance. Moreover, the trained models with compression regularization are further combined with co-teaching for a performance boost. Theoretically, we conduct a bias-variance decomposition of the objective function under compression regularization, for both a single model and co-teaching. This decomposition provides three insights: 1) it shows that overfitting is indeed an issue in learning with noisy labels; 2) through an information-bottleneck formulation, it explains why the proposed feature compression helps in combating label noise; and 3) it explains the performance boost brought by incorporating compression regularization into co-teaching. Experiments show that our simple approach can achieve comparable or even better performance than state-of-the-art methods on benchmarks with real-world label noise, including Clothing1M and ANIMAL-10N. Our implementation is available at https://yingyichen-cyy.github.io/CompressFeatNoisyLabels/.

Fuzzy neural networks (FNNs) offer the advantages of knowledge leveraging and adaptive learning and have been widely used in nonlinear system modeling. However, it is difficult for FNNs to obtain an appropriate structure when data are insufficient, which limits their generalization performance.
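The Nested Dropout mechanism revisited above can be sketched in a few lines. The geometric cutoff distribution is the standard choice for Nested Dropout, but the rate and dimensions here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def nested_dropout(features, p=0.15, rng=rng):
    """Nested (ordered) dropout: sample a cutoff index b from a geometric
    distribution and keep only the first b feature dimensions.  Earlier
    dimensions survive more often, so training pressures the network to
    pack the most important information into them."""
    d = features.shape[-1]
    b = min(int(rng.geometric(p)), d)  # cutoff index, 1-based
    mask = np.zeros(d)
    mask[:b] = 1.0
    return features * mask, b

x = rng.standard_normal(16)
y, b = nested_dropout(x)
assert np.all(y[b:] == 0) and np.all(y[:b] == x[:b])

# Survival count per dimension over many draws: dimension i survives
# exactly when the sampled cutoff exceeds i, so counts never increase
# with index -- an importance ordering over features.
keep = np.zeros(16)
for _ in range(2000):
    _, b = nested_dropout(x)
    keep[:b] += 1
print(keep / 2000)
```

Because a dimension survives only when the sampled cutoff reaches it, the survival frequency is monotonically non-increasing in the feature index, which is what induces the ordered representation.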
To solve this problem, a data-knowledge-driven self-organizing FNN (DK-SOFNN) with a structure compensation strategy and a parameter reinforcement mechanism is proposed in this article. First, a structure compensation strategy is proposed to mine structural information from empirical knowledge to learn the structure of DK-SOFNN; a complete model structure can then be acquired from sufficient structural information. Second, a parameter reinforcement mechanism is developed to determine the parameter evolution direction most suitable for the current model structure; a robust model can then be obtained through the interaction between the parameters and the dynamic structure. Finally, the proposed DK-SOFNN is theoretically analyzed for both the fixed-structure and dynamic-structure cases, yielding convergence conditions that can guide practical applications. The merits of DK-SOFNN are demonstrated on several benchmark problems and industrial applications.

Origami architecture (OA) is a fascinating papercraft that involves only a piece of paper with cuts and folds. Interesting geometric structures 'pop up' when the paper is opened. However, manually designing such a physically valid 2D paper pop-up plan is challenging, since the fold lines must jointly satisfy hard spatial constraints. Existing works on automatic OA-style paper pop-up design have all focused on generating a pop-up structure that approximates a given target 3D model. This paper presents the first OA-style paper pop-up design framework that takes 2D images instead of 3D models as input. Our work is inspired by the fact that artists often use 2D profiles to guide the design process, thus benefiting from the high availability of 2D image resources. Due to the lack of 3D geometry information, we perform a novel theoretical analysis to ensure the foldability and stability of the resultant design.
Based on a novel graph representation of the paper pop-up plan, we further propose a practical optimization algorithm via mixed-integer programming that jointly optimizes the topology and geometry of the 2D plan. We also allow the user to interactively explore the design space by specifying constraints on fold lines. Finally, we evaluate our framework on various images with interesting 2D shapes. Experiments and comparisons demonstrate both the efficacy and efficiency of our framework.

This paper presents a neuromorphic processing system with a spike-driven spiking neural network (SNN) processor design for always-on wearable electrocardiogram (ECG) classification. In the proposed system, the ECG signal is captured by level-crossing (LC) sampling, achieving native temporal coding with a single-bit data representation, which is fed directly into an SNN in an event-driven manner. A hardware-aware spatio-temporal backpropagation (STBP) scheme is used for training, adapting to the LC-based data representation and generating lightweight SNN models. This training scheme diminishes the firing rate of the network with little loss in classification accuracy, thus reducing the switching activity of the circuits for low-power operation. A specialized SNN processor is designed with a spike-driven processing flow and a hierarchical memory access scheme. Validated on field-programmable gate arrays (FPGAs) and evaluated in 40 nm CMOS technology for application-specific integrated circuit (ASIC) design, the SNN processor achieves 98.22% classification accuracy on the MIT-BIH database for 5-category classification, with an energy efficiency of 0.75 μJ/classification.

The human brain cortex is a rich source of inspiration for constructing efficient artificial cognitive systems. In this paper, we investigate incorporating multiple brain-inspired computing paradigms for a compact, fast, and high-accuracy neuromorphic hardware implementation.
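The level-crossing sampling front end described in the ECG system above admits a compact software sketch. The waveform and level spacing below are illustrative (a real front end quantizes in analog hardware): the signal is reduced to a stream of single-bit up/down events, and nothing is stored between crossings.

```python
import numpy as np

def level_crossing_sample(signal, delta=0.1):
    """1-bit level-crossing sampling sketch: emit (index, +1/-1) events
    whenever the signal moves a full level of size `delta` away from the
    level at which the last event fired."""
    events = []
    ref = signal[0]
    for i, s in enumerate(signal[1:], start=1):
        while s - ref >= delta:   # signal rose past the next level up
            ref += delta
            events.append((i, +1))
        while ref - s >= delta:   # signal fell past the next level down
            ref -= delta
            events.append((i, -1))
    return events

t = np.linspace(0, 1, 500)
ecg_like = 0.5 * np.sin(2 * np.pi * 3 * t)  # stand-in waveform, not real ECG
events = level_crossing_sample(ecg_like, delta=0.1)
print(f"{len(t)} samples -> {len(events)} single-bit events")
```

Replaying the ±delta events reconstructs the waveform to within one level, which is the sense in which the event stream is a native temporal code of the signal.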
We propose the TripleBrain hardware core, which tightly combines three common brain-inspired factors: spike-based processing and plasticity, the self-organizing map (SOM) mechanism, and the reinforcement learning scheme, to improve object recognition accuracy and processing throughput while keeping resource costs low. The proposed hardware core is fully event-driven to avoid unnecessary operations, and it supports various on-chip learning rules (including the proposed SOM-STDP & R-STDP rule and the R-SOM-STDP rule, regarded as two variants of our TripleBrain learning rule) with different accuracy-latency tradeoffs to satisfy user requirements. An FPGA prototype of the neuromorphic core was implemented and thoroughly tested. It realized high-speed learning (1349 frames/s) and inference (2698 frames/s), and it obtained comparably high recognition accuracies of 95.10%, 80.89%, 100%, 94.94%, 82.32%, 100%, and 97.93% on the MNIST, ETH-80, ORL-10, Yale-10, N-MNIST, Poker-DVS, and Posture-DVS datasets, respectively, while consuming only 4146 (7.59%) slices, 32 (3.56%) DSPs, and 131 (24.04%) Block RAMs on a Xilinx Zynq-7045 FPGA chip. Our neuromorphic core is very attractive for real-time, resource-limited edge intelligent systems.

Temporal action localization is currently an active research topic in computer vision and machine learning due to its use in smart surveillance. It is a challenging problem, since the categories of the actions must be classified in untrimmed videos and the start and end of each action must be accurately located. Although many temporal action localization methods have been proposed, they require substantial computational resources for training and inference. To address these issues, we propose a novel temporal-aware relation and attention network (abbreviated as TRA) for the temporal action localization task. TRA has an anchor-free, end-to-end architecture that fully uses temporal-aware information.
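The temporal self-attention at the heart of TRA can be pictured as plain scaled dot-product attention applied over the time axis. This numpy sketch omits the learned query/key/value projections a real module would have; shapes and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def temporal_self_attention(x):
    """Scaled dot-product self-attention over the time axis: every
    temporal position attends to every other, so frames inside an action
    can reinforce each other.  x has shape (T, d)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                  # (T, T) pairwise affinities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax over time
    return attn @ x, attn

x = rng.standard_normal((8, 16))  # 8 time steps, 16-dim features per step
y, attn = temporal_self_attention(x)
print(y.shape, attn.shape)
```

Each output time step is a convex combination of all time steps, weighted by feature similarity, which is how such a module can up-weight features within an action relative to background frames.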
Specifically, a temporal self-attention module is first designed to determine the relationships between different temporal positions, giving more weight to features within actions. Then, a multiple temporal aggregation module is constructed to aggregate temporal-domain information. Finally, a graph relation module is designed to obtain aggregated graph features, which are used to refine the boundaries and classification results. Most importantly, these three modules are jointly explored in a unified framework, and temporal awareness is fully exploited throughout. Extensive experiments demonstrate that the proposed method outperforms all state-of-the-art methods on the THUMOS14 dataset, with an average mAP reaching 67.6%, and obtains a comparable result on the ActivityNet1.3 dataset, with an average mAP reaching 34.4%. Compared with A2Net (TIP20), PCG-TAL (TIP21), and AFSD (CVPR21), TRA achieves improvements of 11.7%, 4.4%, and 1.8%, respectively, on the THUMOS14 dataset.

One of the biological features of cancer cells is aerobic glycolysis, the harvesting of energy through extensive glucose fermentation, the so-called Warburg effect. Melanoma is one of the most aggressive human cancers, with poor prognosis and high mortality due to its high metastatic ability. During the metastatic process, metastatic tumor cells must survive under detachment stress; however, whether detachment stress affects the tumor phenotype is worth investigating. We previously established a cell model of human melanoma cells under detachment stress, which mimics circulating melanoma, and demonstrated that detachment stress alters melanoma cell activities, malignancy, and drug sensitivity. In this study, we found that adherent melanoma cells were more sensitive to glucose depletion. Gene expression profiling revealed altered expression of transporters associated with glucose metabolism.
In addition, detachment stress reduced lactate secretion, owing to reduced MCT4 and GLUT1 expression, altered glycolytic and respiratory capacities, and increased superoxide production.

Article authors: Nolantemple0153 (Jain Damborg)