Troelsenlawrence6726

From Iurium Wiki

Conventional mathematical methods are based on characteristic lengths, but in many respects urban form has no characteristic length. Urban area is a scale-dependent measure, which indicates the scale-free distribution of urban patterns. Thus, urban description based on characteristic lengths should be replaced by urban characterization based on scaling. Fractal geometry is a powerful tool for the scaling analysis of cities, and fractal parameters can be defined through entropy and correlation functions. However, the question of how to understand city fractals remains open. By means of logical deduction and ideas from fractal theory, this paper discusses fractals and fractal dimensions of the urban landscape. The main points of this work are as follows. First, urban form should be treated as pre-fractal rather than truly fractal, and the fractal properties of cities are valid only within certain scaling ranges. Second, the topological dimension of city fractals based on urban area is 0, so the minimum fractal dimension of fractal cities is equal to or greater than 0. Third, the fractal dimension of urban form is used as a substitute for urban area, and city fractals are best defined in a two-dimensional embedding space, so the maximum fractal dimension of urban form is 2. The conclusion is that urban form can be explored as fractal within certain ranges of scales, and fractal geometry can be applied to the spatial analysis of the scale-free aspects of urban morphology.

This paper develops a non-equilibrium thermodynamic approach to life, with particular regard to the role of the membrane. The Onsager phenomenological coefficients are introduced in order to characterize the thermophysical properties of cell systems. The fundamental role of the cell membrane electric potential is highlighted in relation to ion and heat fluxes, pointing out the strict relation between heat exchange and the membrane electric potential. Seebeck-like and Peltier-like effects emerge, which simplify the description of the heat and ion fluxes. Life is described as a continuous transition from the Peltier-like effect to the Seebeck-like one, and vice versa.

Industrial nitrogen liquefaction cycles are based on the Collins topology but incorporate variations: several pressure levels, liquefaction at medium pressure, and compressor-expander sets are common. The cycle must be designed to minimise specific power consumption rather than to maximise liquid yield. For these reasons, the conclusions of general studies cannot be extrapolated directly. This article calculates the optimal share of total compressed flow to be expanded in an industrial Collins-based cycle for nitrogen liquefaction. Simulations in Unisim Design R451, using the Peng-Robinson equation of state for nitrogen, resulted in an optimum of 88% expanded flow, which is greater than the 75-80% typical of conventional Collins cycles with helium or other substances. The optimum specific compression work was 430.7 kWh per ton of liquid nitrogen. For some operating conditions, the relation between liquid yield and specific power consumption was counterintuitive: a larger yield entailed larger consumption. Exergy analysis showed a 40.3% exergy efficiency for the optimised process, and the distribution of exergy destruction and the exergy flow across the cycle are provided. Approximately 40% of the 59.7% exergy destruction takes place in the cooling after compression. This exergy could be used for secondary applications such as industrial heating, energy storage, or lower-temperature applications such as heat conditioning.
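As a concrete illustration of the scaling analysis in the first abstract, the sketch below estimates a fractal dimension by box counting on a binary built-up/empty raster. Box counting is a generic estimator, not the entropy- and correlation-based definitions the paper works with, and the random input grid is placeholder data; a real study would restrict the log-log fit to the valid scaling range, as the abstract stresses.

import numpy as np

def box_counting_dimension(grid, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate D from the slope of log N(s) versus log s."""
    counts = []
    for s in sizes:
        # Trim so the raster tiles exactly into s-by-s boxes.
        h = (grid.shape[0] // s) * s
        w = (grid.shape[1] // s) * s
        boxes = grid[:h, :w].reshape(h // s, s, w // s, s)
        # Count boxes containing at least one built-up cell.
        counts.append(int((boxes.sum(axis=(1, 3)) > 0).sum()))
    # Linear fit in log-log space: log N(s) = -D log s + c.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Placeholder input: a random 128x128 occupancy grid (a real analysis
# would use a map of built-up land instead).
rng = np.random.default_rng(0)
grid = (rng.random((128, 128)) < 0.3).astype(int)
print(f"estimated box-counting dimension: {box_counting_dimension(grid):.2f}")

By the abstract's argument, estimates for real urban rasters should fall between the topological lower bound 0 and the embedding-space upper bound 2.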
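For the membrane paper, the coupled heat and ion fluxes it invokes have a standard linear-phenomenological form. The following is a sketch in generic textbook notation, not necessarily the paper's: J_q and J_i are the heat and ion fluxes, the L coefficients are the Onsager phenomenological coefficients, and the tilde marks the electrochemical potential.

\begin{aligned}
J_q &= L_{qq}\,\nabla\!\left(\frac{1}{T}\right) - L_{qi}\,\frac{\nabla\tilde{\mu}}{T},\\
J_i &= L_{iq}\,\nabla\!\left(\frac{1}{T}\right) - L_{ii}\,\frac{\nabla\tilde{\mu}}{T},
\qquad L_{qi} = L_{iq}.
\end{aligned}

At zero ion flux, a temperature difference sustains a membrane potential difference (Seebeck-like, $\Delta\phi \approx -S\,\Delta T$); at uniform temperature, a driven ion flux carries heat (Peltier-like, $J_q = \Pi J_i$); reciprocity links the two coefficients through the Kelvin relation $\Pi = T S$, which is the sense in which the two effects are faces of the same coupling.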
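For the nitrogen cycle, a back-of-envelope check relates the reported optimum of 430.7 kWh/ton to an efficiency figure. The ideal (reversible) liquefaction work for nitrogen from ambient conditions, about 768 kJ/kg in standard cryogenics references, is an outside assumption here, and the paper's 40.3% exergy efficiency follows its own boundary conditions, so the naive ratio computed below is only indicative.

# Naive check on the reported specific compression work; the ideal work
# value is a textbook figure, not taken from the paper.
w_ideal_kj_per_kg = 768.1                                # reversible work, N2 from ~300 K / 1 atm
w_ideal_kwh_per_ton = w_ideal_kj_per_kg * 1000 / 3600    # kJ/kg -> kWh/ton
w_actual_kwh_per_ton = 430.7                             # reported optimum
print(f"ideal work  ~ {w_ideal_kwh_per_ton:.0f} kWh/ton")
print(f"naive ratio ~ {w_ideal_kwh_per_ton / w_actual_kwh_per_ton:.1%}")

The naive ratio comes out near 50%, above the paper's 40.3%; the gap is expected, since a full exergy analysis accounts for the actual dead state, product pressure, and equipment losses rather than this single ideal-work figure.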
Probabilistic amplitude shaping, implemented through a distribution matcher (DM), is an effective approach to enhancing the performance and flexibility of bandwidth-efficient coded modulations. Different DM structures have been proposed in the literature; typically, both their performance and their complexity increase with the block length. In this work, we present a hierarchical DM (Hi-DM) approach based on the combination of several DMs, possibly of different types, which provides the good performance of long DMs with the low complexity of several short DMs. The DMs are organized in layers: each upper-layer DM encodes information onto a sequence of lower-layer DMs, which are used as "virtual symbols". First, we describe the Hi-DM structure, its properties, and the encoding and decoding procedures. Then, we present three particular Hi-DM configurations, providing some practical design guidelines and investigating their performance in terms of rate loss and energy loss. Finally, we compare the system performance obtained with the proposed Hi-DM structures and with their single-layer counterparts: a 0.19 dB SNR gain is obtained by a two-layer Hi-DM based on constant-composition DMs (CCDM) compared to a single-layer CCDM with the same complexity; a 0.12 dB gain and a significant complexity reduction are obtained by a Hi-DM based on minimum-energy lookup tables compared to a single-layer DM based on enumerative sphere shaping with the same memory requirements.

To maximize energy efficiency in heterogeneous networks (HetNets), a turbo Q-learning (TQL) scheme combining a multistage decision process with tabular Q-learning is proposed to optimize the resource configuration. To handle the large action space, the energy-efficiency optimization problem is formulated as a multistage decision process: following the resource-allocation structure of the optimization objectives, the initial problem is divided into several subproblems that are solved by tabular Q-learning, so that the traditionally exponential growth of the action space is reduced to linear growth. By iterating the solutions of the subproblems, the initial problem is solved, and a simple stability analysis of the algorithm is given. As for the large state space, we use a deep neural network (DNN) to classify states, where the optimization policy of the novel Q-learning scheme is used to label samples. In this way, the dimensionality of both the action and state spaces is addressed. Simulation results show that our approach converges, improves the convergence speed by 60% while maintaining almost the same energy efficiency, and adapts to system adjustments.
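The rate loss that the Hi-DM abstract trades against complexity can be computed for a plain constant-composition DM from the standard formula: rate loss equals H(P) minus (1/n) times log2 of the multinomial coefficient of the composition. The sketch below uses an arbitrary example composition, not one from the paper, and shows the loss shrinking as the block length grows, which is exactly the tension the hierarchical construction addresses.

from math import lgamma, log, log2

def ccdm_rate_loss(counts):
    """H(P) - (1/n) log2(n! / prod(c_i!)) for a CCDM composition."""
    n = sum(counts)
    # lgamma(x + 1) is ln(x!); divide by ln 2 to convert to log2.
    log2_multinomial = (lgamma(n + 1) - sum(lgamma(c + 1) for c in counts)) / log(2)
    entropy = -sum((c / n) * log2(c / n) for c in counts if c > 0)
    return entropy - log2_multinomial / n

# Same target distribution at two block lengths: the loss shrinks with n.
for scale in (1, 8):
    counts = [8 * scale, 4 * scale, 2 * scale, 2 * scale]
    print(f"n = {sum(counts):3d} symbols: rate loss = {ccdm_rate_loss(counts):.4f} bits/symbol")

For this composition the loss is about 0.35 bits/symbol at n = 16 versus roughly 0.08 at n = 128, which is why long DMs perform well and why Hi-DM tries to get that performance from short, cheap components.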
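The action-space decomposition in the last abstract, one small Q-table per decision stage instead of one table over the exponentially large joint action space, can be sketched in toy form. Everything below (the reward shape, state transitions, and sizes) is invented for illustration and is not the paper's HetNet model; only the tabular Q-learning update itself is standard.

import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
N_STAGES, N_ACTIONS = 3, 4   # joint space 4**3 = 64 entries; staged: 3 * 4 = 12

q = [defaultdict(float) for _ in range(N_STAGES)]   # one small table per stage

def choose(stage, state):
    # Epsilon-greedy over this stage's action component only.
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: q[stage][(state, a)])

def toy_reward(actions):
    # Stand-in for energy efficiency; peaks when every component equals 2.
    return -sum((a - 2) ** 2 for a in actions)

for _ in range(5000):
    state, actions = 0, []
    for stage in range(N_STAGES):
        a = choose(stage, state)
        actions.append(a)
        next_state = state + 1
        last = stage == N_STAGES - 1
        r = toy_reward(actions) if last else 0.0
        best_next = 0.0 if last else max(q[stage + 1][(next_state, b)]
                                         for b in range(N_ACTIONS))
        # Standard tabular Q-learning update.
        q[stage][(state, a)] += ALPHA * (r + GAMMA * best_next - q[stage][(state, a)])
        state = next_state

# The state visited at stage s is simply s in this toy chain.
print("greedy configuration:",
      [max(range(N_ACTIONS), key=lambda a: q[s][(s, a)]) for s in range(N_STAGES)])

Each stage stores Q-values only for its own action component, so table size grows linearly with the number of stages rather than exponentially with the joint action space, which is the decomposition the abstract describes.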

Article authors: Troelsenlawrence6726 (Lopez Celik)