Velasquezhuber2335

From Iurium Wiki

Revision as of 17:22, 12 August 2024, created by Velasquezhuber2335 (talk | contribs) (new page created)

Moreover, if only unambiguous results were considered, the use of a neural network gives much better results than the other fusion methods. If we allow ambiguity, some fusion methods are slightly better, but this follows from the fact that only a few decisions can be generated for the test object.

This paper presents a new approach for denoising Partial Discharge (PD) signals using a hybrid algorithm combining an adaptive decomposition technique with entropy measures and Group-Sparse Total Variation (GSTV). Initially, the Empirical Mode Decomposition (EMD) technique is applied to decompose the noisy sensor data into Intrinsic Mode Functions (IMFs), and Mutual Information (MI) analysis between the IMFs is carried out to set the mode length K. Then, the Variational Mode Decomposition (VMD) technique decomposes the noisy sensor data into K Band-Limited IMFs (BLIMFs). The BLIMFs are separated into noise, noise-dominant, and signal-dominant BLIMFs by calculating the MI between them. Eventually, the noise BLIMFs are discarded from further processing, the noise-dominant BLIMFs are denoised using GSTV, and the signal BLIMFs are added to reconstruct the output signal. The regularization parameter λ for GSTV is selected automatically based on the Dispersion Entropy values of the noise-dominant BLIMFs.
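The MI screening that sorts decomposition modes into "keep" and "discard" groups can be sketched with a simple histogram-based estimator. This is a minimal illustration with stand-in modes, not the paper's EMD/VMD pipeline:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    # Histogram-based MI estimate (in nats) between two 1-D signals.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

# Stand-in modes: one signal-dominant, one pure-noise component.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4096)
signal_mode = np.sin(2 * np.pi * 5 * t)
noise_mode = 0.3 * rng.normal(size=t.size)
observed = signal_mode + noise_mode

# The signal-dominant mode shares far more information with the observed
# signal than the noise mode, so it would be kept for reconstruction.
mi_keep = mutual_information(signal_mode, observed)
mi_drop = mutual_information(noise_mode, observed)
```

Modes whose MI with the observation falls below a threshold would then be routed to the discard or GSTV-denoise branches.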
The effectiveness of the proposed denoising method is evaluated in terms of performance metrics such as Signal-to-Noise Ratio, Root Mean Square Error, and Correlation Coefficient, which are compared to EMD variants; the results demonstrate that the proposed approach is able to effectively denoise the synthetic Blocks, Bumps, Doppler, Heavy Sine, and PD pulse signals, as well as real PD signals.

The invitation to contribute to this anthology of articles on the fractional calculus (FC) encouraged submissions in which the authors look behind the mathematics and examine what must be true about the phenomenon to justify the replacement of an integer-order derivative with a non-integer-order (fractional) derivative (FD) before discussing ways to solve the new equations [...].

Active Inference (AIF) is a framework that can be used both to describe information processing in naturally intelligent systems, such as the human brain, and to design synthetic intelligent systems (agents). In this paper we show that Expected Free Energy (EFE) minimisation, a core feature of the framework, does not lead to purposeful explorative behaviour in linear Gaussian dynamical systems. We provide a simple proof that, due to the specific construction used for the EFE, the terms responsible for the exploratory (epistemic) drive become constant in the case of linear Gaussian systems. This renders AIF equivalent to KL control. From a theoretical point of view this is an interesting result, since it is generally assumed that EFE minimisation will always introduce an exploratory drive in AIF agents. While the full EFE objective does not lead to exploration in linear Gaussian dynamical systems, the principles of its construction can still be used to design objectives that include an epistemic drive. We provide an in-depth analysis of the mechanics behind the epistemic drive of AIF agents and show how to design objectives for linear Gaussian dynamical systems that do include an epistemic drive.
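The reason the epistemic term goes constant can be seen from the standard Kalman filter: in a linear Gaussian state-space model the posterior covariance recursion never involves the controls or the observed data, so no policy can change the expected information gain. A minimal numpy illustration on a toy system of our own choosing (not the paper's agent):

```python
import numpy as np

# Covariance recursion for x' = A x + B u + w, y = C x + v.
# Note that neither the control u nor any observation y appears:
# the posterior uncertainty is a fixed function of (A, C, Q, R).
def kalman_cov_step(P, A, C, Q, R):
    Pp = A @ P @ A.T + Q                 # predict
    S = C @ Pp @ C.T + R                 # innovation covariance
    K = Pp @ C.T @ np.linalg.inv(S)      # Kalman gain
    return Pp - K @ C @ Pp               # posterior covariance

A = np.array([[1.0, 0.1], [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])

P = np.eye(2)
for _ in range(20):
    P = kalman_cov_step(P, A, C, Q, R)
# P (hence the posterior entropy, hence the epistemic value) is the same
# for every control sequence, which is why EFE minimisation reduces to
# KL control for this model class.
```

The same recursion run under any other control policy would produce an identical covariance trajectory, which is the crux of the constancy argument.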
Concretely, we show that focusing solely on epistemics and dispensing with goal-directed terms leads to a form of maximum entropy exploration that is heavily dependent on the type of control signals driving the system. Additive controls do not permit such exploration. From a practical point of view this is an important result, since linear Gaussian dynamical systems with additive controls are an extensively used model class, encompassing for instance Linear Quadratic Gaussian controllers. On the other hand, linear Gaussian dynamical systems driven by multiplicative controls, such as switching transition matrices, do permit an exploratory drive.

A model for a pumped thermal energy storage system is presented. It is based on a Brayton cycle working successively as a heat pump and a heat engine. All the main irreversibility sources expected in real plants are considered: external losses arising from the heat transfer between the working fluid and the thermal reservoirs, internal losses coming from pressure decays, and losses in the turbomachinery. The temperatures considered for the numerical analysis are adequate for solid thermal reservoirs, such as a packed bed. Special emphasis is placed on the combinations of parameters and variables that lead to physically acceptable configurations. Maximum values of the efficiencies, including the round-trip efficiency, are obtained and analyzed, and optimal design intervals are provided. Round-trip efficiencies of around 0.4, or even larger, are predicted. The analysis indicates that the physical region where the coupled system can operate strongly depends on the irreversibility parameters. As a result, the maximum values of power output, efficiency, round-trip efficiency, and pumped heat might lie outside the physical region; in that case, the upper values are considered.
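The round-trip figure can be understood with back-of-the-envelope bookkeeping. This is our own simplification, not the paper's full irreversible Brayton model: charging stores Q_hot = COP_hp · W_in in the hot reservoir, discharging recovers W_out = η_he · Q_hot, so the round-trip efficiency factorises.

```python
# Round-trip efficiency of a pumped thermal energy storage system as the
# product of the charging heat-pump COP and the discharging heat-engine
# efficiency (toy energy bookkeeping, losses folded into the two factors):
#   W_out / W_in = (eta_he * Q_hot) / (Q_hot / cop_hp) = cop_hp * eta_he
def round_trip_efficiency(cop_hp, eta_he):
    return cop_hp * eta_he

# Illustrative (assumed) numbers in the ballpark the abstract reports:
rt = round_trip_efficiency(cop_hp=1.33, eta_he=0.30)  # ~0.4
```

Because both factors degrade with turbomachinery and heat-transfer losses, the product is quite sensitive to the irreversibility parameters, consistent with the sensitivity analysis described below.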
The sensitivity analysis of these maxima shows that changes in the expander/turbine and compressor efficiencies have the largest effect with respect to a selected design point. In the case of the expander, these drops are mostly due to a decrease in the area of the physical operation region.

Deep probabilistic time series forecasting models have become an integral part of machine learning. While several powerful generative models have been proposed, we provide evidence that their associated inference models are oftentimes too limited and cause the generative model to predict mode-averaged dynamics. Mode-averaging is problematic since many real-world sequences are highly multi-modal, and their averaged dynamics are unphysical (e.g., predicted taxi trajectories might run through buildings on the street map). To better capture multi-modality, we develop variational dynamic mixtures (VDM), a new variational family for inferring sequential latent variables. The VDM approximate posterior at each time step is a mixture density network whose parameters come from propagating multiple samples through a recurrent architecture. This results in an expressive multi-modal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multi-modal datasets from different domains.

Vuilleumier refrigerators are a special type of heat-driven cooling machine. Essentially, they operate by using heat from a hot bath to pump heat from a cold bath to an environment at intermediate temperature. In addition, some external energy in the form of electricity can be used as an auxiliary driving mechanism. Such refrigerators are, for example, advantageous in situations where waste heat is available and cooling power is needed. Here, the question of how the performance of Vuilleumier refrigerators can be improved is addressed, with a particular focus on the piston motion and thus the thermodynamic cycle of the refrigerator.
In order to obtain a quantitative estimate of the possible cooling power gain, a special class of piston movements (the AS motion class explained below) is used, which has already been applied successfully in the context of Stirling engines. We find improvements in the cooling power of more than 15%.

Automatically selecting a set of representative views of a 3D virtual cultural relic is crucial for constructing wisdom museums. There is no consensus regarding the definition of a good view in computer graphics; the same is true of multiple views. View-based methods play an important role in the field of 3D shape retrieval and classification. However, it is still difficult to select views that not only conform to subjective human preferences but also have a good feature description. In this study, we define two novel measures based on information entropy, named depth variation entropy and depth distribution entropy. These measures quantify the depth swings and the spread of depth values in each view. First, a canonical-pose 3D cultural relic was generated using principal component analysis. A set of depth maps was then captured by orthographic cameras placed at the dense vertices of a geodesic unit sphere obtained by subdividing the regular unit octahedron. Afterwards, the two measures were calculated separately on the depth maps obtained from the vertices, and the results on each one-eighth sphere form a group. The views with maximum depth variation entropy and depth distribution entropy were selected, and further scattered viewpoints were then chosen. Finally, the threshold word histogram derived from the vector quantization of salient local descriptors on the selected depth maps represents the 3D cultural relic. The viewpoints obtained by the proposed method coincide regardless of the pose of the 3D model. This eliminates the steps of manually adjusting the model's pose and provides acceptable display views for people.
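The flavour of an entropy-based view score can be conveyed with a histogram entropy over a depth map. This is our reading of the idea only; the paper's exact definitions of the two measures may differ:

```python
import numpy as np

def depth_distribution_entropy(depth_map, bins=64):
    # Shannon entropy (bits) of the view's depth-value histogram:
    # views whose depths are spread over many levels score higher.
    hist, _ = np.histogram(depth_map, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
flat_view = np.full((128, 128), 0.5)             # constant depth: entropy 0
varied_view = rng.uniform(0.0, 1.0, (128, 128))  # depths spread over all levels
h_flat = depth_distribution_entropy(flat_view)
h_varied = depth_distribution_entropy(varied_view)
```

A view-selection loop would evaluate such scores on the depth maps captured from the sphere vertices and keep the maximisers within each one-eighth-sphere group.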
In addition, it was verified on several datasets that the proposed method, which uses the Bag-of-Words mechanism and a deep convolutional neural network, also performs well in retrieval and classification when dealing with only four views.

Reservoir computing is a machine learning method that solves tasks using the response of a dynamical system to a certain input. As the training scheme only involves optimising the weights applied to the responses of the dynamical system, this method is particularly suited for hardware implementation. Furthermore, the inherent memory of dynamical systems that are suitable for use as reservoirs means that this method has the potential to perform well on time series prediction tasks, as well as other tasks with time dependence. However, reservoir computing still requires extensive task-dependent parameter optimisation in order to achieve good performance. We demonstrate that by including a time-delayed version of the input for various time series prediction tasks, good performance can be achieved with an unoptimised reservoir. Furthermore, we show that by including the appropriate time-delayed input, one unaltered reservoir can perform well on six different time series prediction tasks at very low computational expense. Our approach is of particular relevance to hardware-implemented reservoirs, as one does not necessarily have access to the pertinent optimisation parameters in physical systems, but the inclusion of an additional input is generally possible.

Estimation of the probability density function from statistical power moments presents a challenging nonlinear numerical problem posed by unbalanced nonlinearities, numerical instability, and a lack of convergence, especially for larger numbers of moments. Despite many numerical improvements over the past two decades, the classical moment problem of maximum entropy (MaxEnt) is still a very demanding numerical and statistical task.
Among others, it has been shown how Fup basis functions with compact support can significantly improve the convergence properties of the aforementioned nonlinear algorithm, but there are still many obstacles to an efficient pdf solution in different applied examples. Therefore, besides the aforementioned classical nonlinear Algorithm 1, in this paper we present a linear approximation of the MaxEnt moment problem as Algorithm 2, using exponential Fup basis functions. Algorithm 2 solves the linear problem, satisfying only the proposed moments, using an optimal exponential tension parameter that maximizes the Shannon entropy.
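The nonlinear moment problem can be sketched in a few lines in the spirit of Algorithm 1, though with plain monomial features rather than Fup basis functions: fit p(x) ∝ exp(-Σ_k λ_k x^k) on a grid so that its leading moments match the prescribed ones, iterating on the dual variables.

```python
import numpy as np

def maxent_from_moments(moments, x, n_iter=500, lr=0.5):
    # Gradient iteration on the dual of the MaxEnt problem: increase
    # lam_k when the model's k-th moment overshoots the target.
    dx = x[1] - x[0]
    K = len(moments)
    powers = np.vstack([x**k for k in range(1, K + 1)])
    lam = np.zeros(K)
    for _ in range(n_iter):
        logp = -(lam @ powers)
        logp -= logp.max()                     # guard against overflow
        p = np.exp(logp)
        p /= p.sum() * dx                      # normalise on the grid
        model = (powers * p).sum(axis=1) * dx  # current moments
        lam += lr * (model - np.asarray(moments))
    return p

# Prescribing E[x] = 0 and E[x^2] = 1 should recover a standard normal,
# the MaxEnt density for those two moments.
x = np.linspace(-6.0, 6.0, 2001)
p = maxent_from_moments([0.0, 1.0], x)
dx = x[1] - x[0]
mean = (x * p).sum() * dx
var = (x**2 * p).sum() * dx
```

With many moments this plain iteration degrades quickly, which illustrates the instability the abstract describes and the motivation for better-conditioned basis functions.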

Article authors: Velasquezhuber2335 (Krause Torp)