3762; the coefficient of determination R² reached 0.9932; the number of individual evaluation values with a relative error of less than 10% and 5% accounted for 100% and 87.50% of the total number of evaluations, respectively.

Laser cutting is a non-contact process, unlike traditional turning and milling. To improve the machining accuracy of laser cutting, a thermal error prediction and dynamic compensation strategy is proposed. Drawing on the time-varying characteristics of digital twin technology, a hybrid model combining thermal elastic-plastic finite element (TEP-FEM) analysis and the T-XGBoost algorithm is established. The temperature field and thermal deformation under 12 common working conditions are simulated and analyzed with TEP-FEM, and the real-time machining data obtained from these simulations are used by the intelligent algorithm. Building on the XGBoost algorithm, with the simulation data set serving as the training data set, a time-series-based segmentation algorithm (T-XGBoost) is proposed. The algorithm reduces the maximum deformation at the slit by more than 45%; at the same time, by reducing the average volume strain under most working conditions, it achieves an improvement rate of up to 63%, and its machining results are clearly better than those of XGBoost. The strategy addresses otherwise uncontrollable thermal deformation during cutting and provides a theoretical basis for implementing intelligent operation strategies such as predictive machining and quality monitoring.
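
As an illustration of how simulation records could drive such a surrogate model, the sketch below trains a plain XGBoost regressor on TEP-FEM outputs to predict slit deformation. It is not the paper's time-segmented T-XGBoost variant, and the file name, feature names, and hyperparameters are placeholders.

```python
# Hedged sketch: fit an XGBoost regressor on TEP-FEM simulation records to
# predict thermal deformation at the slit. The CSV name, column names, and
# hyperparameters are illustrative assumptions, not the paper's actual setup.
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split

df = pd.read_csv("tep_fem_simulations.csv")  # one row per simulated time step
features = ["laser_power_W", "cutting_speed_mm_s", "sheet_thickness_mm", "elapsed_time_s"]
target = "slit_deformation_um"

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=0)

model = xgb.XGBRegressor(n_estimators=400, max_depth=6, learning_rate=0.05)
model.fit(X_train, y_train)

# R^2 on held-out simulated conditions, analogous to the reported fit quality.
print("R^2:", model.score(X_test, y_test))
```

The paper's T-XGBoost presumably adds a time-series segmentation step on top of a baseline like this; that step is omitted here.
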
The establishment of a laser link between satellites, i.e., the acquisition phase, is a key technology for space-based gravitational detection missions, and it becomes extremely complicated once the long inter-satellite distance, the inherent limits of sensor accuracy, the narrow laser beam divergence, and the complex space environment are taken into account. In this paper, we investigate the laser acquisition problem for a new type of satellite equipped with two two-degree-of-freedom telescopes. A predefined-time control law for the acquisition phase is proposed. Finally, a numerical simulation is conducted to demonstrate the effectiveness of the proposed controller. The results show that the new strategy is more efficient and that its control performance meets the requirements of the gravitational detection mission.

Human action recognition and detection from unmanned aerial vehicles (UAVs), or drones, has emerged as a popular technical challenge in recent years, since it is relevant to many use-case scenarios, from environmental monitoring to search and rescue. It faces a number of difficulties, mainly related to image acquisition, image content, and processing constraints. Because a drone's flying conditions constrain image acquisition, human subjects may appear in images at variable scales, in varying orientations, and under occlusion, which makes action recognition more difficult. We explore low-resource methods for machine learning (ML)-based action recognition using a previously collected real-world dataset (the "Okutama-Action" dataset). This dataset contains situations representative of action recognition, yet is controlled for image acquisition parameters such as camera angle and flight altitude. We investigate a combination of object recognition and classifier techniques to support single-image action identification. Our architecture integrates YoloV5 with a gradient boosting classifier; the rationale is to pair a scalable and efficient object recognition system with a classifier able to incorporate samples of variable difficulty. In an ablation study, we test different YoloV5 architectures and evaluate the performance of our method on the Okutama-Action dataset. Our approach outperforms previous architectures applied to the Okutama dataset, which differed in their object identification and classification pipelines; we hypothesize that this is a consequence of both YoloV5's performance and the overall fit of our pipeline to the specificities of the Okutama dataset in terms of the bias-variance tradeoff.
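
A minimal sketch of such a two-stage pipeline is shown below, assuming the public ultralytics/yolov5 hub model for person detection and a scikit-learn gradient boosting classifier over toy per-crop features; the feature extraction and the training-data wiring are illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch: YOLOv5 person detection followed by a gradient boosting
# classifier on per-crop features. The color-histogram features are a toy
# stand-in; the real pipeline's features and training setup are not given here.
import numpy as np
import torch
from sklearn.ensemble import GradientBoostingClassifier

detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def person_crops(image):
    """Yield (crop, box) pairs for 'person' detections in an HxWx3 uint8 image."""
    results = detector(image)
    for *box, conf, cls in results.xyxy[0].tolist():
        if int(cls) == 0:  # class 0 is 'person' in the COCO label set
            x1, y1, x2, y2 = map(int, box)
            yield image[y1:y2, x1:x2], (x1, y1, x2, y2)

def crop_features(crop, bins=16):
    """Toy feature vector: per-channel intensity histograms of the crop."""
    return np.concatenate([
        np.histogram(crop[..., c], bins=bins, range=(0, 255), density=True)[0]
        for c in range(3)])

# Training crops and action labels would come from the Okutama-Action annotations.
clf = GradientBoostingClassifier(n_estimators=200)
# clf.fit(np.stack([crop_features(c) for c in train_crops]), train_labels)

def predict_actions(image):
    """Return (predicted_action, box) pairs for every detected person."""
    return [(clf.predict([crop_features(crop)])[0], box)
            for crop, box in person_crops(image)]
```
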
Cloud storage has become a keystone for organizations to manage the large volumes of data produced by sensors at the edge as well as the information produced by deep learning and machine learning applications. Nevertheless, the latency of geographically distributed systems deployed on the edge, the fog, or the cloud leads to delays that end-users observe in the form of high response times. In this paper, we present an efficient scheme for the management and storage of Internet of Things (IoT) data in edge-fog-cloud environments. In our proposal, entities called data containers are logically coupled with nano/microservices deployed on the edge, the fog, or the cloud. The data containers implement a hierarchical cache file system, including storage levels such as in-memory, file system, and cloud services, to transparently manage the input/output data operations produced by nano/microservices (e.g., a sensor hub collecting data from sensors at the edge or machine learning applications processing data at the edge). Data containers are interconnected through a secure and efficient content delivery network, which transparently and automatically performs the continuous delivery of data through the edge-fog-cloud. A prototype of the proposed scheme was implemented and evaluated in a case study based on the management of electrocardiogram sensor data. The results reveal the suitability and efficiency of the proposed scheme.

The demand for accurate rainfall rate maps keeps growing. This paper proposes a novel algorithm to estimate the rainfall rate map from attenuation measurements coming from both broadcast satellite links (BSLs) and commercial microwave links (CMLs). The approach is based on an iterative procedure that extends the well-known GMZ algorithm to fuse the attenuation data coming from different links in a three-dimensional scenario, while also accounting for the virga phenomenon through a vertical rain attenuation model. We experimentally prove the convergence of the procedure, showing how the estimation error decreases at every iteration. The numerical results show that adding BSLs to a pre-existing CML network boosts the accuracy of the estimated rainfall map, improving the correlation metrics by up to 50%. Moreover, the algorithm is shown to be robust to errors in the virga parametrization, proving that good estimation performance can be obtained without precise, real-time estimation of the virga parameters.

The expansion of the seaweed aquaculture sector, together with the rapid deterioration of these products, escalates the importance of implementing rapid, real-time techniques for their quality assessment. Seaweed samples originating from Scotland and Ireland were stored under various temperature conditions for specific time intervals. Microbiological analysis was performed throughout storage to assess total viable counts (TVC), while FT-IR spectroscopy, multispectral imaging (MSI) and electronic nose (e-nose) analyses were conducted in parallel. Machine learning models (partial least squares regression, PLS-R) were developed to assess correlations between the sensor and microbiological data. Microbial counts ranged from 1.8 to 9.5 log CFU/g, while the microbial growth rate was affected by origin, harvest year and storage temperature. The models developed using FT-IR data showed good prediction performance on the external test dataset. The model developed by combining data from both origins also achieved satisfactory prediction performance, exhibiting enhanced robustness by being origin-unaware when predicting the microbiological population. The model developed with the MSI data showed relatively good prediction performance on the external test dataset despite high RMSE values, whereas the models using e-nose data from both MI and SAMS showed poor prediction performance.

This work presents a Non-Ionizing Radiation (NIR) measurement campaign and proposes a measurement method specific to trajectography radars. This kind of radar has a high-gain, narrow-beam antenna and emits a high-power signal. Power density measurements of a C-band trajectography radar are carried out using bench equipment and a directional receiving antenna instead of the commonly used isotropic probe. The measured power density levels are assessed for compliance by comparison with the occupational and general public exposure limits of both the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Brazilian National Telecommunications Agency (Anatel). The occupational limit is respected everywhere, evidencing the safe operation of the studied radar. However, the general public limit is exceeded at a point next to the radar's antenna, showing that preventive measures are needed.

Nowadays, increasing air-pollution levels are a public health concern affecting all living beings, with the most polluting gases concentrated in urban environments. For this reason, this research presents portable Internet of Things (IoT) environmental monitoring devices that can be installed in vehicles and that send message queuing telemetry transport (MQTT) messages to a server, with a time-series database hosted in edge computing. The visualization stage is performed in cloud computing to determine the city's air-pollution concentration using three labels: low, normal, and high. To determine the environmental conditions in Ibarra, Ecuador, a data analysis scheme is used with outlier detection and supervised classification stages. In terms of relevant results, the performance of the IoT nodes used to infer air quality exceeded 90%. In addition, memory consumption was 14 KB of flash and 3 KB of RAM, reducing the power consumption and bandwidth needed compared with traditional air-pollution measuring stations.
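
As a hedged illustration of how such a node might label and publish a reading, the snippet below uses the paho-mqtt one-shot publish helper; the broker address, topic, choice of gas, and threshold values are made-up placeholders rather than the system's actual configuration.

```python
# Hedged sketch: classify a pollutant reading into the three labels used above
# (low / normal / high) and publish it over MQTT. The broker, topic, gas choice
# and thresholds are illustrative assumptions only.
import json
import paho.mqtt.publish as publish

def label_concentration(ppm, low=600.0, high=1000.0):
    """Map a gas concentration (ppm) to a coarse air-quality label."""
    if ppm < low:
        return "low"
    if ppm < high:
        return "normal"
    return "high"

def publish_reading(node_id, ppm, broker="edge-gateway.local"):
    payload = json.dumps({"node": node_id, "ppm": ppm,
                          "label": label_concentration(ppm)})
    # One-shot publish; a deployed node would keep a persistent connection
    # and feed the edge time-series database through the same broker.
    publish.single(f"air/{node_id}/reading", payload, hostname=broker)

publish_reading("vehicle-01", 842.0)
```
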
Recently, IQRF has emerged as a promising technology for the Internet of Things (IoT), owing to its ability to support short- and medium-range low-power communications. However, real-world deployment of IQRF-based wireless sensor networks (WSNs) requires accurate path loss modelling to estimate network coverage and other performance measures. The existing literature does not yet provide extensive research on propagation modelling for IQRF network deployment in urban environments. Therefore, this study proposes an empirical path loss model for the deployment of IQRF networks in a peer-to-peer configured system in which the IQRF sensor nodes operate in the 868 MHz band. For this purpose, extensive outdoor measurement campaigns are conducted in an urban environment for Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) links. Furthermore, to evaluate the prediction accuracy of well-known empirical path loss models for urban environments, the measurements are compared with the predicted path loss values. The results show that the COST-231 Walfisch-Ikegami model has higher prediction accuracy and can be used for IQRF network planning in LoS links, while the COST-231 Hata model is more accurate for NLoS links. The effects of antennas on the performance of IQRF transceivers (TRs) for LoS and NLoS links are also scrutinized: IQRF TRs with a Straight-Line Dipole Antenna (SLDA) are found to offer more stable results than IQRF TRs with a Meander Line Antenna (MLA). It is therefore believed that the findings presented in this article offer useful insights for researchers interested in developing IoT-based smart city applications.
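
For readers who want to reproduce such a model-versus-measurement comparison, the sketch below evaluates the standard COST-231 Hata expression against placeholder NLoS measurements. The link geometry, antenna heights, and measured values are invented for illustration, and COST-231 Hata is nominally specified for 1500-2000 MHz, so applying it at 868 MHz simply mirrors the comparison described above.

```python
# Illustrative sketch: COST-231 Hata prediction and an RMSE check against
# measured path loss. The antenna heights and the measurement list below are
# made-up placeholders, not values from the study.
import math

def cost231_hata(d_km, f_mhz=868.0, h_base=10.0, h_mobile=1.5, metro=False):
    """COST-231 Hata median path loss in dB (nominally defined for 1500-2000 MHz)."""
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile
            - (1.56 * math.log10(f_mhz) - 0.8))
    c_m = 3.0 if metro else 0.0  # 0 dB for medium city/suburban, 3 dB for metropolitan
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_base) - a_hm
            + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km) + c_m)

def rmse(predicted, measured):
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / len(measured))

# Placeholder NLoS samples: (distance in km, measured path loss in dB)
samples = [(0.05, 95.0), (0.10, 104.0), (0.20, 113.0)]
predictions = [cost231_hata(d) for d, _ in samples]
print("RMSE (dB):", rmse(predictions, [pl for _, pl in samples]))
```
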