Greenwooddeal0760

From Iurium Wiki

A 24 GHz highly linear upconversion mixer, based on a duplex transconductance path (DTP), is proposed for automotive short-range radar sensor applications using a 65-nm CMOS process. A mixer with an enhanced transconductance stage consisting of a DTP is presented to improve linearity. The main transconductance path (MTP) of the DTP includes a common-source (CS) amplifier, while the secondary transconductance path (STP) is implemented as an improved cross-quad transconductor (ICQT). Two inductors with a bypass capacitor are connected at the common nodes of the transconductance and switching stages of the mixer; together they act as a resonator and help improve the gain and isolation of the designed mixer. According to the measured results at 24 GHz, the proposed mixer achieves an output 1-dB compression point (OP1dB) of 3.9 dBm and an input 1-dB compression point (IP1dB) of 0.9 dBm. Moreover, a maximum conversion gain (CG) of 2.49 dB and a noise figure (NF) of 3.9 dB are achieved. With a 1.2 V supply voltage, the power dissipation of the mixer is 3.24 mW, and the chip occupies an area of 0.42 mm².

With the advent of wearable communication devices, microstrip antennas have found multiple applications due to their ultra-low-profile properties. It is therefore essential to analyze the frequency shift and impedance mismatch that arise when the antenna is bent. For the case of a rectangular patch antenna E-plane bent on a cylindrical surface, (1) this paper introduces the effective dielectric constant into the cavity model, which can accurately predict the resonance frequency of the antenna, and (2) according to the equivalent circuit model of the antenna resonance mode, the lumped-element parameters are calculated from the above effective dielectric constant, so that the impedance characteristics and the port-matching S-parameters can be quickly constructed.
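For reference, the standard (flat-patch) cavity-model relations into which the effective dielectric constant enters are sketched below; the paper's contribution is a bending-corrected effective dielectric constant, which is not reproduced here.

```latex
% Standard microstrip patch relations (flat, unbent case):
\epsilon_{\mathrm{eff}} = \frac{\epsilon_r + 1}{2}
  + \frac{\epsilon_r - 1}{2}\left(1 + 12\,\frac{h}{W}\right)^{-1/2},
\qquad
f_r = \frac{c}{2 L_{\mathrm{eff}} \sqrt{\epsilon_{\mathrm{eff}}}},
\qquad
L_{\mathrm{eff}} = L + 2\,\Delta L
```

Here $h$ is the substrate thickness, $W$ and $L$ the patch width and length, and $\Delta L$ the fringing-field length extension.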
From the perspective of circuit frequency characteristics, this model explains the change in the transmission performance of the bent antenna. The experimental results show that the maximum difference between the measured and theoretically calculated frequencies is less than 1%. These results verify the validity and applicability of the theory in the analysis of ultra-low-profile patch antennas and wearable electronic communication devices, and provide a theoretical basis for fast impedance matching of patch antennas under different working conditions.

Current strategies for phenotyping above-ground biomass in field breeding nurseries demand significant investment in both time and labor. Unmanned aerial vehicles (UAVs) can be used to derive vegetation indices (VIs) with high throughput and could provide an efficient way to predict forage yield with high accuracy. The main objective of this study is to investigate the potential of UAV-based multispectral data and machine learning approaches in the estimation of oat biomass. A UAV equipped with a multispectral sensor was flown over three experimental oat fields in Volga, South Shore, and Beresford, South Dakota, USA, throughout the pre- and post-heading growth phases of oats in 2019. A variety of VIs derived from the UAV-based multispectral imagery were employed to build oat biomass estimation models using four machine-learning algorithms: partial least squares (PLS), support vector machine (SVM), artificial neural network (ANN), and random forest (RF). The results showed that several VIs derived from the multispectral data, along with textural features such as crop surface model (CSM)-derived height and volumetric indicators, should be considered in future studies when estimating biophysical parameters like biomass.

In the field of video action classification, existing network frameworks often use only video frames as input.
When the object involved in the action does not appear in a prominent position in the video frame, the network cannot classify it accurately. We introduce a new neural network structure that uses sound to assist in processing such tasks. The original sound wave is converted into a sound texture as the input of the network. Furthermore, in order to use the rich modal information (images and sound) in the video, we designed and used a two-stream framework. In this work, we assume that sound data can be used to solve action recognition tasks. To demonstrate this, we designed a neural network based on sound texture to perform video action classification. We then fuse this network with a deep neural network that uses continuous video frames, constructing a two-stream network called A-IN. Finally, on the Kinetics dataset, we compare the proposed A-IN with an image-only network. The experimental results show that the recognition accuracy of the two-stream neural network model that uses sound features is increased by 7.6% compared with the network using video frames alone. This shows that rational use of the rich information in the video can improve the classification effect.

Wearable technologies allow the measurement of unhindered activities of daily living (ADL) among patients who have had a stroke, in their natural settings. However, methods to extract meaningful information from large multi-day datasets are limited. This study investigated new visualization-driven time-series extraction methods for distinguishing activities of stroke and healthy adults. Fourteen stroke and fourteen healthy adults wore a sensor at the L5/S1 position for three consecutive days; the sensor passively collected accelerometer data in each participant's naturalistic environment. Visualization of the data facilitated selecting information-rich time series, which resulted in a classification accuracy of 97.3% using recurrent neural networks (RNNs).
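The RNN classification step can be illustrated with a minimal sketch. This is not the paper's trained model: the Elman-style forward pass below uses random placeholder weights and a hypothetical `classify_window` helper, only to show how a window of tri-axial acceleration samples maps to a class probability.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 16  # hidden-state size (assumed, for illustration)

Wxh = rng.normal(0, 0.1, (HIDDEN, 3))       # input (x, y, z accel) -> hidden
Whh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))  # hidden -> hidden (recurrence)
Why = rng.normal(0, 0.1, (1, HIDDEN))       # hidden -> output logit

def classify_window(accel):
    """accel: (T, 3) acceleration time series -> stroke-vs-healthy probability."""
    h = np.zeros(HIDDEN)
    for x in accel:                          # unroll the RNN over time steps
        h = np.tanh(Wxh @ x + Whh @ h)
    logit = float(Why @ h)
    return 1.0 / (1.0 + np.exp(-logit))     # sigmoid -> class probability

window = rng.normal(0, 1, (100, 3))          # a synthetic 100-sample ADL window
p = classify_window(window)
assert 0.0 < p < 1.0
```

In practice the weights would be learned from the labeled, information-rich time series selected via visualization.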
Individuals with stroke showed a negative correlation between their body mass index (BMI) and the fraction of higher accelerations produced during ADL. We also found that individuals with stroke produced lower activity amplitudes than their healthy counterparts in all three activity bands (low, medium, and high). Our findings show that visualization-driven time series can accurately classify movements of stroke and healthy groups using a deep recurrent neural network. This novel visualization-based time-series extraction from naturalistic data provides a physical basis for analyzing passive ADL monitoring data from real-world environments. The extraction method, based on unit-sphere projections of acceleration, can be used by a range of analysis algorithms to remotely track stroke survivors' progress in their rehabilitation programs and their ADL abilities.

Surface-enhanced Raman spectroscopy (SERS) is often used for heavy metal ion detection. However, large variations in signal strength, spectral profile, and nonlinearity of measurements often produce inconsistent results, raising concerns about reproducibility. Consequently, manual classification of SERS spectra requires carefully controlled experimentation, which further hinders large-scale adoption. Recent advances in machine learning offer promising opportunities to address these issues; however, well-documented procedures for model development and evaluation, as well as benchmark datasets, are missing. Towards this end, we provide a SERS spectral benchmark dataset of lead(II) nitrate (Pb(NO3)2) for a heavy metal ion detection task and evaluate the classification performance of several machine learning models. We also perform a comparative study to find the best combination of preprocessing methods and machine learning models. The proposed model can successfully identify the Pb(NO3)2 molecule from SERS measurements of independent test experiments.
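A preprocessing-versus-model comparison of this kind can be sketched as follows. Everything here is illustrative: the spectra are synthetic, and a toy nearest-centroid classifier stands in for the paper's actual models; only the pattern (preprocess, then evaluate each combination) reflects the described study.

```python
import numpy as np

rng = np.random.default_rng(1)
wn = np.linspace(200, 1800, 400)            # synthetic wavenumber axis (cm^-1)

def spectrum(peak, n):
    """n synthetic spectra: sloped baseline + Gaussian peak + noise."""
    slope = rng.uniform(0.5, 1.5, (n, 1))
    base = 0.002 * wn[None, :] * slope
    sig = np.exp(-((wn - peak) ** 2) / (2 * 15 ** 2))
    return base + sig + rng.normal(0, 0.05, (n, len(wn)))

X = np.vstack([spectrum(1050, 40), spectrum(1350, 40)])  # two analyte classes
y = np.array([0] * 40 + [1] * 40)

def detrend(x):                              # remove linear baseline per spectrum
    t = np.arange(len(x))
    a, b = np.polyfit(t, x, 1)
    return x - (a * t + b)

def preprocess(X):
    Xd = np.apply_along_axis(detrend, 1, X)
    return Xd / np.linalg.norm(Xd, axis=1, keepdims=True)  # L2-normalize

def nearest_centroid_acc(X, y):
    """Toy evaluation: fit class centroids on even rows, test on odd rows."""
    tr, te = np.arange(0, len(y), 2), np.arange(1, len(y), 2)
    c0 = X[tr][y[tr] == 0].mean(0)
    c1 = X[tr][y[tr] == 1].mean(0)
    pred = (np.linalg.norm(X[te] - c1, axis=1)
            < np.linalg.norm(X[te] - c0, axis=1)).astype(int)
    return (pred == y[te]).mean()

acc_raw = nearest_centroid_acc(X, y)          # no preprocessing
acc_pre = nearest_centroid_acc(preprocess(X), y)  # baseline removal + normalization
```

Sweeping such (preprocessing, model) pairs and comparing held-out accuracy is the comparative-study pattern the abstract describes.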
In particular, the proposed model shows an 84.6% balanced accuracy for the cross-batch testing task.

We present the design, fabrication, and test of a multipurpose application-specific integrated circuit (ASIC) in AMS 0.35 µm complementary metal-oxide-semiconductor (CMOS) technology. This circuit is embedded in a scleral contact lens and combined with photodiodes, enabling gaze-direction detection when the lens is illuminated and wirelessly powered by an eyewear frame. The gaze direction is determined by means of a centroid computation over the measured photocurrents. The ASIC is simultaneously used to detect specific eye-blinking sequences, for instance to validate target designations. Experimental measurements and validation are performed on a scleral contact lens prototype integrating four infrared photodiodes, mounted on a mock-up eyeball and combined with an artificial eyelid. The eye-tracker has an accuracy of 0.2°, i.e., 2.5 times better than current mobile video-based eye-trackers, and is robust with respect to process variations, operating time, and supply voltage. Variations of the computed gaze direction transmitted to the eyewear when the eyelid moves are detected and can be interpreted as commands, based on blink duration or on alternating blinks of the two eyes.

The problem of optimizing the topography of metal structures for surface-enhanced Raman scattering (SERS) sensing is considered. We developed a model that randomly distributes hemispheroidal particles over a given area of a glass substrate and estimates the SERS capabilities of the obtained structures. We applied power spectral density (PSD) analysis both to the modeled structures and to atomic force microscope images of metal island films and metal dendrites widely used in SERS.
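The PSD analysis step can be sketched in one dimension. This is a generic FFT-based PSD estimate applied to a synthetic height profile of randomly placed bumps (the sampling step, bump radius, and count below are assumed values, not the paper's parameters).

```python
import numpy as np

rng = np.random.default_rng(2)
N, dx = 4096, 2e-9                      # samples and spatial step (2 nm), assumed
x = np.arange(N) * dx

# Synthetic surface: hemispherical caps scattered on a flat substrate,
# loosely mimicking randomly distributed hemispheroidal particles.
h = np.zeros(N)
for cx in rng.uniform(0, x[-1], 200):   # 200 randomly placed bumps
    r = 25e-9                           # bump radius (assumed)
    d2 = (x - cx) ** 2
    m = d2 < r ** 2
    h[m] += np.sqrt(r ** 2 - d2[m])     # cap height profile

# One-sided PSD estimate of the mean-removed height profile.
H = np.fft.rfft(h - h.mean())
psd = (np.abs(H) ** 2) * dx / N
freq = np.fft.rfftfreq(N, dx)           # spatial frequency (1/m)
peak_freq = freq[1:][np.argmax(psd[1:])]  # dominant lateral spatial frequency
```

Comparing such PSD curves across candidate topographies is what allows picking the structure expected to give the strongest enhancement.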
Comparing the measured and calculated SERS signals from structures with differing characteristics against the results of the PSD analysis of these structures has shown that this approach allows simple identification and selection of a structure topography capable of providing the maximal enhancement of the Raman signal within a given set of structures of the same type placed on the substrate.

This paper proposes an audio data augmentation method based on deep learning in order to improve the performance of dereverberation. Conventionally, audio data are augmented using a room impulse response that is generated artificially by methods such as the image method. The proposed method instead estimates a reverberation-environment model with a deep neural network trained using clean and recorded audio data as inputs and outputs, respectively. A large, realistic augmented database is then constructed using the trained reverberation model, and the dereverberation model is trained on this augmented database. The performance of the augmentation model was verified by the log spectral distance and mean square error between the augmented data and the recorded data. In addition, dereverberation experiments showed that the proposed method improves performance compared with the conventional method.

Biometric signals can be acquired with different sensors and recognized in secure identity-management systems. However, they are vulnerable to various attacks that compromise security in many applications, such as the industrial IoT. In a real-world scenario, the target template stored in the database of a biometric system can be leaked and then used to reconstruct a fake image to fool the biometric system. Many reconstruction attacks have therefore been proposed, yet unsatisfactory naturalness, poor visual quality, or incompleteness remain major limitations. Thus, two reinforced palmprint reconstruction attacks are proposed.
Any palmprint image, which can be easily obtained, is used as the initial image, and the region of interest is iteratively modified with deep reinforcement strategies to reduce the matching distance. In the first attack, Modification Constraint within Neighborhood (MCwN) limits the extent of modification and suppresses reckless modifications. In the second attack, Batch Member Selection (BMS) selects the significant pixels (SPs) to compose the batch; these are simultaneously modified to a lesser extent to reduce the matching number and the visual-quality degradation.
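The core iterative-modification idea can be sketched with a toy greedy attack. This is not the paper's MCwN or BMS method: the "matcher" below is a stand-in Euclidean distance on downsampled features, the leaked template is random, and acceptance is simple hill climbing rather than deep reinforcement learning.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical leaked 8x8 template from the biometric database.
target_template = rng.uniform(0, 1, (8, 8))

def matching_distance(img):
    """Stand-in matcher: 32x32 image -> 8x8 block means, Euclidean distance."""
    feat = img.reshape(8, 4, 8, 4).mean(axis=(1, 3))
    return float(np.linalg.norm(feat - target_template))

img = rng.uniform(0, 1, (32, 32))       # arbitrary starting "palmprint" image
d0 = matching_distance(img)

for _ in range(2000):
    i, j = rng.integers(0, 32, 2)       # pick a pixel to perturb
    step = rng.choice([-0.05, 0.05])    # small modification, limiting degradation
    trial = img.copy()
    trial[i, j] = np.clip(trial[i, j] + step, 0, 1)
    if matching_distance(trial) < matching_distance(img):
        img = trial                     # keep only edits that reduce the distance

final = matching_distance(img)          # final < d0: image drifts toward template
```

The reinforced attacks in the text replace this blind greedy search with learned modification policies, plus the MCwN and BMS constraints on where and how much to modify.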

Article authors: Greenwooddeal0760 (Browne Damm)