Nyholmdavid6095


Much of the field of Machine Learning exhibits a prominent set of failure modes, including vulnerability to adversarial examples, poor out-of-distribution (OoD) detection, miscalibration, and willingness to memorize random labelings of datasets. We characterize these as failures of robust generalization, which extends the traditional measure of generalization as accuracy or related metrics on a held-out set. We hypothesize that these failures to robustly generalize are due to the learning systems retaining too much information about the training data. To test this hypothesis, we propose the Minimum Necessary Information (MNI) criterion for evaluating the quality of a model. In order to train models that perform well with respect to the MNI criterion, we present a new objective function, the Conditional Entropy Bottleneck (CEB), which is closely related to the Information Bottleneck (IB). We experimentally test our hypothesis by comparing the performance of CEB models with deterministic models and Variational Information Bottleneck (VIB) models on a variety of different datasets and robustness challenges. We find strong empirical evidence supporting our hypothesis that MNI models improve on these problems of robust generalization.
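To make the objective concrete, the following is a minimal sketch of a variational CEB loss in Python/PyTorch, assuming Gaussian forward and backward encoders. The function and variable names (`ceb_loss`, `enc_fwd`, `enc_bwd`, `gamma`) are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ceb_loss(z, enc_fwd, enc_bwd, logits, labels, gamma=1.0):
    """Variational Conditional Entropy Bottleneck loss (illustrative sketch).

    z       : latent sample drawn from the forward encoder e(z|x)
    enc_fwd : Normal distribution representing e(z|x)
    enc_bwd : Normal distribution representing the backward encoder b(z|y)
    logits  : outputs of the classifier c(y|z)
    gamma   : weight on the residual-information term I(X;Z|Y)
    """
    # Residual information: upper-bounds I(X;Z|Y) = I(X;Z) - I(Y;Z)
    residual = (enc_fwd.log_prob(z) - enc_bwd.log_prob(z)).sum(dim=-1)
    # Prediction term: a variational lower bound on I(Y;Z)
    pred = F.cross_entropy(logits, labels, reduction="none")
    return (gamma * residual + pred).mean()

# Example usage with toy tensors (batch of 8, 16-dimensional latent, 10 classes)
mu_e, mu_b = torch.randn(8, 16), torch.randn(8, 16)
e = torch.distributions.Normal(mu_e, torch.ones(8, 16))
b = torch.distributions.Normal(mu_b, torch.ones(8, 16))
z = e.rsample()
logits, labels = torch.randn(8, 10), torch.randint(0, 10, (8,))
loss = ceb_loss(z, e, b, logits, labels, gamma=0.5)
```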
The study of cosmic rays remains one of the most challenging research fields in physics. Among the many questions still open in this area, determining the type of primary particle for each event remains one of the most important. Cosmic ray observatories have been trying to answer this question for at least six decades, but have not yet succeeded. The main obstacle is the impossibility of directly detecting high-energy primary events, making it necessary to use Monte Carlo models and simulations to characterize the generated particle cascades. This work presents results obtained using a simulated dataset produced by the Monte Carlo code CORSIKA, which simulates the interaction of high-energy particles with the atmosphere, resulting in a cascade of secondary particles extending over a few kilometers (in diameter) at ground level. Using these simulated data, a set of machine learning classifiers has been designed and trained, and their computational cost and effectiveness compared when classifying the type of primary under ideal measuring conditions. Additionally, a feature selection algorithm has made it possible to identify the relevance of the considered features. The results confirm the importance, for this problem, of separating the electromagnetic and muonic components of the measured signal data. The results obtained are quite encouraging and open new lines of work for future, more restrictive simulations.

The connection between endoreversible models of Finite-Time Thermodynamics and the corresponding real running irreversible processes is investigated by introducing two concepts which complement each other: Simulation and Reconstruction. In that context, the importance of particular machine diagrams for Simulation and of (reconstruction) parameter diagrams for Reconstruction is emphasized. Additionally, the treatment of internal irreversibilities through the use of contact quantities, such as the contact temperature, is introduced into the Finite-Time Thermodynamics description of thermal processes.

Recent advances in theoretical and experimental quantum computing raise the problem of verifying the outcome of these quantum computations. Recent verification protocols using blind quantum computing are a fruitful way to address this problem. Unfortunately, all known schemes have relatively high overhead. Here we present a novel construction for the resource state of verifiable blind quantum computation. This approach achieves a better verifiability of 0.866 in the case of classical output. In addition, the number of required qubits is 2N+4cN, where N and c are the number of vertices and the maximal degree in the original computation graph, respectively. In other words, our overhead is linear in the size of the computation. Finally, we utilize the method of repetition and fault-tolerant codes to optimise the verifiability.

To address the difficulty of extracting fault features from the nonlinear and non-stationary vibration signals of wind turbine rolling bearings, which leads to low diagnosis and recognition rates, a feature extraction method based on multi-island genetic algorithm (MIGA)-improved variational mode decomposition (VMD) and multiple features is proposed. The decomposition performance of the VMD method is limited by the number of decomposition modes and the selection of the penalty factor; this paper uses MIGA to optimize these parameters. The improved VMD method is used to decompose the vibration signal into a number of intrinsic mode functions (IMFs), and the group of components containing the most information is selected using the Holder coefficient. From these components, features based on Renyi entropy, singular values, and Hjorth parameters are extracted as the final feature vector, which is input to a classifier to perform fault diagnosis of the rolling bearing. The experimental results show that the proposed method extracts the fault characteristics of rolling bearings more effectively. The fault diagnosis model based on this method can accurately identify bearing signals of 16 different fault types, severities, and damage points.
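As an illustration of the kind of multi-feature vector described above, the sketch below computes Hjorth parameters, a histogram-based Renyi entropy, and leading singular values of a trajectory matrix for one decomposed component using NumPy. The specific probability estimate, matrix construction, and normalizations are assumptions for illustration; the paper's exact feature definitions may differ.

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x
    mobility = np.sqrt(var_dx / var_x)
    complexity = np.sqrt(var_ddx / var_dx) / mobility
    return activity, mobility, complexity

def renyi_entropy(x, alpha=2.0, bins=64):
    """Renyi entropy of the amplitude distribution (histogram-based estimate)."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def singular_value_features(x, rows=10, k=3):
    """Leading singular values of a Hankel (trajectory) matrix built from x."""
    cols = len(x) - rows + 1
    hankel = np.stack([x[i:i + cols] for i in range(rows)])
    return np.linalg.svd(hankel, compute_uv=False)[:k]

def feature_vector(imf):
    """Concatenate the three feature groups for one IMF component."""
    return np.concatenate([hjorth_parameters(imf),
                           [renyi_entropy(imf)],
                           singular_value_features(imf)])

# Example on a synthetic component
rng = np.random.default_rng(0)
imf = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)
print(feature_vector(imf))
```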
The application of machine learning methods to particle physics often does not provide enough understanding of the underlying physics. An interpretable model which provides a way to improve our knowledge of the mechanism governing a physical system directly from the data can be very useful. In this paper, we introduce a simple artificial physical generator based on the quantum chromodynamics (QCD) fragmentation process. The data simulated from the generator are then passed to a neural network model which we base only on partial knowledge of the generator. We aim to see whether the interpretation of the generated data can provide the probability distributions of the basic processes of such a physical system. In this way, some of the information we deliberately omitted from the network model is recovered. We believe this approach can be beneficial in the analysis of real QCD processes.

Quantifying uncertainty is an active topic in uncertain information processing within the framework of evidence theory, but there is limited research on belief entropy under the open world assumption. In this paper, an uncertainty measurement method based on Deng entropy, named Open Deng entropy (ODE), is proposed. Under the open world assumption, the frame of discernment (FOD) may be incomplete, and ODE can reasonably and effectively quantify uncertain, incomplete information. Building on Deng entropy, ODE uses the mass value of the empty set, the cardinality of the FOD, and the natural constant e to construct a new uncertainty factor for modeling the uncertainty in the FOD. A numerical example shows that, under the closed world assumption, ODE degenerates to Deng entropy. An ODE-based information fusion method for sensor data fusion in uncertain environments is also proposed. By applying it to a sensor data fusion experiment, the rationality and effectiveness of ODE and its application in uncertain information fusion are verified.

In this study, the problem of dynamic channel access in distributed underwater acoustic sensor networks (UASNs) is considered. First, we formulate the dynamic channel access problem in UASNs as a multi-agent Markov decision process, wherein each underwater sensor is considered an agent whose objective is to maximize the total network throughput without coordinating with or exchanging messages among other underwater sensors. We then propose a distributed deep Q-learning-based algorithm that enables each underwater sensor to learn not only the behaviors (i.e., actions) of other sensors but also the physical features (e.g., channel error probability) of its available acoustic channels, in order to maximize the network throughput. We conduct extensive numerical evaluations and verify that the performance of the proposed algorithm is similar to, or even better than, that of baseline algorithms, even when implemented in a distributed manner.

Forecasting stock prices plays an important role in setting a trading strategy or determining the appropriate timing for buying or selling a stock. Technical analysis has been successfully employed for financial forecasting by many researchers. Existing qualitative methods developed on fuzzy reasoning techniques cannot describe the data comprehensively, which has greatly limited the objectivity of fuzzy time series in forecasting uncertain data. Extended fuzzy sets (e.g., the fuzzy probabilistic set) study the fuzziness of the membership grade of a concept. The cloud model, based on probability measure space, automatically produces random membership grades of a concept through a cloud generator. In this paper, a cloud model-based approach is proposed for accurate stock forecasting based on Japanese candlesticks. By incorporating probability statistics and fuzzy set theories, the cloud model can aid the required transformation between qualitative concepts and quantitative data. The degree of certainty associated with candlestick patterns can be calculated through repeated assessments using the normal cloud model. A hybrid weighting method comprising fuzzy time series and Heikin-Ashi candlesticks is employed to determine the weights of the indicators in the multi-criteria decision-making process. Fuzzy membership functions are constructed by the cloud model to deal effectively with the uncertainty and vagueness of historical stock data, with the aim of predicting the next open, high, low, and close prices of the stock. The experimental results demonstrate the feasibility and high forecasting accuracy of the proposed model.
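For context, a normal cloud generator is commonly characterized by three numbers: expectation Ex, entropy En, and hyper-entropy He. The following Python sketch implements such a forward generator to show the general technique; the concept values and the candlestick weighting scheme used in the paper are not reproduced here, and the example numbers are purely hypothetical.

```python
import numpy as np

def normal_cloud(ex, en, he, n_drops=1000, seed=0):
    """Forward normal cloud generator.

    ex : expectation of the qualitative concept
    en : entropy (spread of the concept)
    he : hyper-entropy (uncertainty of the entropy itself)
    Returns cloud drops x_i and their membership degrees mu_i.
    """
    rng = np.random.default_rng(seed)
    # Each drop uses its own randomized entropy En' ~ N(En, He^2)
    en_prime = rng.normal(en, he, n_drops)
    x = rng.normal(ex, np.abs(en_prime))
    mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))
    return x, mu

# Example: a qualitative concept such as "long lower shadow" for a candlestick
drops, membership = normal_cloud(ex=0.6, en=0.1, he=0.02)
print(drops[:5], membership[:5])
```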
Markov processes, such as random walk models, have been successfully used by cognitive and neural scientists to model human choice behavior and decision time for over 50 years. Recently, quantum walk models have been introduced as an alternative way to model the dynamics of human choice and confidence across time. Empirical evidence points to the need for both types of processes, and open system models provide a way to incorporate them both into a single process. However, some of the constraints required by open system models present challenges for achieving this goal. The purpose of this article is to address these challenges and formulate open system models that have good potential to make important advancements in cognitive science.

Credit scoring is an important tool used by financial institutions to correctly identify defaulters and non-defaulters. Support Vector Machines (SVM) and Random Forest (RF) are Artificial Intelligence techniques that have been attracting interest due to their flexibility in accounting for various data patterns. Both are black-box models that are sensitive to hyperparameter settings. Feature selection can be performed on SVM to enable explanation with the reduced features, whereas the feature importance computed by RF can be used for model explanation. The combined benefits of accuracy and interpretability allow for significant improvement in the area of credit risk and credit scoring. This paper proposes the use of Harmony Search (HS) to form a hybrid HS-SVM that performs feature selection and hyperparameter tuning simultaneously, and a hybrid HS-RF that tunes the hyperparameters. A Modified HS (MHS) is also proposed with the main objective of achieving results comparable to the standard HS in a shorter computational time. MHS consists of four main modifications to the standard HS: (i) elitism selection during memory consideration instead of random selection, (ii) dynamic exploration and exploitation operators in place of the original static operators, (iii) a self-adjusting bandwidth operator, and (iv) the inclusion of additional termination criteria to reach faster convergence.
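To make the hybrid idea concrete, here is a heavily simplified sketch of standard Harmony Search tuning two SVM hyperparameters (C and gamma) on a synthetic dataset with scikit-learn. The feature selection step, the MHS modifications, and the actual credit data are omitted, and the parameter values (HMS, HMCR, PAR, bandwidth) are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Search bounds for log10(C) and log10(gamma)
LOW, HIGH = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
HMS, HMCR, PAR, BW, ITERS = 8, 0.9, 0.3, 0.1, 30  # harmony memory size, rates

def fitness(h):
    c, g = 10 ** h
    return cross_val_score(SVC(C=c, gamma=g), X, y, cv=3).mean()

# Initialize harmony memory with random solutions
memory = rng.uniform(LOW, HIGH, size=(HMS, 2))
scores = np.array([fitness(h) for h in memory])

for _ in range(ITERS):
    new = np.empty(2)
    for d in range(2):
        if rng.random() < HMCR:                      # memory consideration
            new[d] = memory[rng.integers(HMS), d]
            if rng.random() < PAR:                   # pitch adjustment
                new[d] += BW * (2 * rng.random() - 1)
        else:                                        # random selection
            new[d] = rng.uniform(LOW[d], HIGH[d])
    new = np.clip(new, LOW, HIGH)
    s = fitness(new)
    worst = scores.argmin()
    if s > scores[worst]:                            # replace the worst harmony
        memory[worst], scores[worst] = new, s

best = memory[scores.argmax()]
print("best log10(C), log10(gamma):", best, "CV accuracy:", scores.max())
```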
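Returning to the evidence-theory abstract earlier on this page: Deng entropy of a basic probability assignment m is commonly defined as the sum over focal elements A of -m(A) log2( m(A) / (2^|A| - 1) ). The short sketch below computes it for a small frame of discernment; the open-world extension (ODE) involving the empty-set mass and the constant e is described only qualitatively in the abstract, so it is not reproduced here.

```python
import math

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment.

    bpa maps focal elements (frozensets over the frame of discernment)
    to mass values that sum to 1.
    """
    total = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            total -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return total

# Example: frame {a, b, c} with mass on a singleton and on the full set
m = {frozenset({"a"}): 0.6, frozenset({"a", "b", "c"}): 0.4}
print(deng_entropy(m))
```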

Article authors: Nyholmdavid6095 (Degn Hassan)