INTRODUCTION Drug cue reactivity (DCR) is widely used in experimental settings for both assessment and intervention, yet no validated database of pictorial cues is available for methamphetamine and opioids. METHODS 360 images in three groups (methamphetamine, opioid, and neutral/control), matched for content (objects, hands, faces, and actions), were selected in an initial development phase. Twenty-eight participants with a history of both methamphetamine and opioid use (37.71 ± 8.11 years old, 12 female) and over six months of abstinence rated the images for craving, valence, arousal, typicality, and relatedness. RESULTS All drug images were differentiated from neutral images. Drug-related images received higher arousal and lower valence ratings than neutral images (craving (0-100): neutral 11.5 ± 21.9, opioid 87.7 ± 18.5, methamphetamine 88 ± 18; arousal (1-9): neutral 2.4 ± 1.9, opioid 4.6 ± 2.7, methamphetamine 4.6 ± 2.6; valence (1-9): neutral 4.8 ± 1.3, opioid 4.4 ± 1.9, methamphetamine 4.4 ± 1.8). There was no difference between methamphetamine and opioid images in craving, arousal, or valence. There was a significant positive relationship between the time participants spent on drug-related images and the craving they reported for each image: every 10 points of craving were associated with an increase in response time of 383 ms. Three equivalent image sets for fMRI tasks (methamphetamine and opioids) were automatically selected from the database (tasks are available on GitHub). CONCLUSION The methamphetamine and opioid cue database (MOCD) provides a resource of validated images and tasks for future DCR studies. Additionally, researchers can select several sets of unique but equivalent images based on their psychological/physical characteristics for multiple assessments or interventions. 
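The reported association (383 ms of extra response time per 10 craving points) implies a slope of about 38.3 ms per craving point. A minimal sketch of that linear relationship follows; the function name and the intercept-free form are illustrative assumptions, not taken from the study's model.

```python
# Hypothetical illustration of the reported craving/response-time association.
# Reported slope: +383 ms per 10 craving points => 38.3 ms per point.
SLOPE_MS_PER_POINT = 383 / 10  # 38.3 ms per craving point

def extra_response_time_ms(craving: float) -> float:
    """Predicted increase in response time for a given craving rating (0-100).

    Intercept omitted: only the reported slope is known from the abstract.
    """
    return SLOPE_MS_PER_POINT * craving

# e.g. an image rated 50/100 for craving predicts roughly 1915 ms of extra time
print(round(extra_response_time_ms(50)))
```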
As a brain-inspired computational model of artificial neural networks, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which makes them suitable tools for processing complex temporal or spatiotemporal information. However, because of their discontinuous and implicitly nonlinear mechanisms, formulating efficient supervised learning algorithms for spiking neural networks is difficult and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, state-of-the-art supervised learning algorithms from recent years are reviewed from the perspectives of their applicability to spiking neural network architectures and their inherent learning mechanisms. A performance comparison of spike train learning for some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and present a new taxonomy of supervised learning algorithms based on these five criteria. Finally, some future research directions in this field are outlined.

As a well-known multivariate analysis technique, regression methods, such as ridge regression, are widely used for image representation and dimensionality reduction. However, the metric of ridge regression and its variants is typically the Frobenius norm (F-norm), which is sensitive to outliers and noise in the data. 
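The F-norm (least-squares) objective behind ridge regression, min_W ||XW − Y||_F² + λ||W||_F², has the closed-form solution W = (XᵀX + λI)⁻¹XᵀY; squaring every residual is what makes a single large outlier dominate the fit. A minimal numpy sketch of the standard closed form (variable names and the demo data are illustrative, not from the paper):

```python
import numpy as np

def ridge_regression(X, Y, lam=1.0):
    """Closed-form ridge solution W = (X^T X + lam*I)^{-1} X^T Y.

    The Frobenius-norm loss squares each residual, so one large outlier
    can dominate the objective -- the sensitivity the text refers to.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Small demo: recover a known linear map from clean data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
W_true = np.array([[1.0], [-2.0], [0.5]])
Y = X @ W_true
W = ridge_regression(X, Y, lam=1e-6)
print(np.round(W, 3))  # close to W_true for a tiny lam
```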
At the same time, the performance of ridge regression and its extensions is limited by the number of classes in the data. To address these problems, we propose a novel regression learning method, named low-rank discriminative regression learning (LDRL), for image representation. LDRL assumes that the input data are corrupted, so an L1-norm sparse constraint on the noise matrix can be used to recover the clean data for regression, which improves the robustness of the algorithm. Because LDRL learns a projection matrix that is not limited by the number of classes, it is suitable for data sets with either a small or a large number of classes. The performance of the proposed LDRL is evaluated on six public image databases. The experimental results show that LDRL obtains better performance than existing regression methods.

The synchronization problem for complex networks with time-varying delays of unknown bound is investigated in this paper. From the impulsive control point of view, a novel delayed impulsive differential inequality is proposed in which the bounds of the time-varying delays in both the continuous and discrete dynamics are unknown. Based on this inequality, a class of delayed impulsive controllers is designed to achieve synchronization of complex networks, and the restriction between impulse intervals and time-varying delays is dropped. A numerical example is presented to illustrate the effectiveness of the obtained results.

In this paper, we propose a novel hyper-Laplacian regularized multiview subspace clustering method with a low-rank tensor constraint, referred to as HLR-MSCLRT. In the HLR-MSCLRT model, the subspace representation matrices of the different views are stacked as a tensor so that high-order correlations among the data can be captured. To reduce redundant information in the learned subspace representations, a low-rank constraint is imposed on the constructed tensor. 
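A common way to impose such a low-rank constraint is singular value thresholding (SVT), the proximal operator of the nuclear norm. The sketch below is a generic matrix illustration of that idea, under the assumption of an unnormalized nuclear-norm penalty; it is not the paper's exact solver, which operates on the stacked tensor rather than a single matrix.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * (nuclear norm of M).

    Shrinks each singular value by tau (clipping at zero), driving small
    singular values to exactly zero and hence lowering the rank of M.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Demo: a rank-2 matrix plus small noise is pushed back toward rank 2.
rng = np.random.default_rng(1)
L = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank 2
noisy = L + 0.01 * rng.standard_normal((20, 20))
denoised = svt(noisy, tau=0.5)
print(np.linalg.matrix_rank(denoised, tol=1e-6))  # rank drops back to 2
```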
Since real-world data often reside in multiple nonlinear subspaces, the HLR-MSCLRT model utilizes hyper-Laplacian graph regularization to preserve the local geometric structure embedded in the high-dimensional ambient space. An efficient algorithm is also presented to solve the optimization problem of the HLR-MSCLRT model. Experimental results on several data sets show that the proposed HLR-MSCLRT model outperforms many state-of-the-art multiview clustering approaches.
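Graph-Laplacian regularization of the kind HLR-MSCLRT builds on adds a smoothness term tr(Z L Zᵀ) with L = D − W, penalizing representations that differ across strongly connected samples. The sketch below uses an ordinary pairwise graph Laplacian with a Gaussian affinity for illustration; the paper's hyper-Laplacian is a hypergraph generalization, and both the affinity choice and the toy data here are assumptions, not the paper's construction.

```python
import numpy as np

def gaussian_affinity(X, sigma=1.0):
    """Pairwise Gaussian affinity W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    W = np.exp(-np.maximum(d2, 0.0) / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    return W

def graph_laplacian(W):
    """Unnormalized Laplacian L = D - W; L is PSD and annihilates constants."""
    return np.diag(W.sum(axis=1)) - W

def laplacian_penalty(Z, L):
    """Smoothness term tr(Z L Z^T): small when the columns of Z agree on
    samples that are strongly connected in the graph."""
    return np.trace(Z @ L @ Z.T)

# Two tight clusters; a representation constant within each cluster
# incurs (almost) no penalty, a representation varying within clusters does.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
L = graph_laplacian(gaussian_affinity(X))
Z_smooth = np.array([[1.0, 1.0, -1.0, -1.0]])
Z_rough = np.array([[1.0, -1.0, 1.0, -1.0]])
print(laplacian_penalty(Z_smooth, L) < laplacian_penalty(Z_rough, L))  # True
```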