To demonstrate the value of this study, a case analysis is conducted to illustrate the human-computer interaction and the pallet pooling operations. Overall, this study standardises decentralised pallet management in a closed-loop mechanism, making a constructive contribution to sustainable development in the logistics industry.

Human activity recognition has been an extensively researched topic over the last decade. Recent methods employ supervised and unsupervised deep learning techniques in which spatial and temporal dependencies are modeled. This paper proposes a novel approach for human activity recognition using skeleton data. The method combines supervised and unsupervised learning algorithms in order to provide qualitative results and real-time performance. The proposed method involves a two-stage framework: the first stage applies an unsupervised clustering technique to group activities based on their similarity, while the second stage classifies the data assigned to each group using graph convolutional networks. Different clustering techniques and data augmentation strategies are explored to improve the training process. The results were compared against state-of-the-art methods, and the proposed model achieved 90.22% Top-1 accuracy on the NTU-RGB+D dataset (an improvement of approximately 9% over the baseline graph convolutional method). Moreover, inference time and the total number of parameters stay within the same order of magnitude. Extending the initial set of activities with additional classes is fast and robust, since the entire architecture does not need to be retrained; only the cluster to which the new activity is assigned must be retrained.

The orientation of a magneto-inertial measurement unit can be estimated using a sensor fusion algorithm (SFA). However, orientation accuracy is greatly affected by the choice of the SFA parameter values, which represents one of the most critical steps.
A commonly adopted approach is to fine-tune parameter values to minimize the difference between the estimated and true orientation. However, this can only be implemented in a laboratory setting, since it requires a concurrent gold-standard technology. To overcome this limitation, a Rigid-Constraint Method (RCM) was proposed to estimate suboptimal parameter values without relying on any orientation reference. The effectiveness of the RCM was successfully tested on a single-parameter SFA, with an average error increase of 1.5 deg with respect to the optimum. In this work, the applicability of the RCM was evaluated on 10 popular SFAs with multiple parameters under different experimental scenarios. The average residual between the optimal and suboptimal errors amounted to 0.6 deg, with a maximum of 3.7 deg. These encouraging results suggest the possibility of properly tuning a generic SFA in different scenarios without using any reference. The synchronized dataset, which also includes the optical data, and the SFA codes are available online.

Both Respiratory Flow (RF) and Respiratory Motion (RM) are visible in thermal recordings of infants. Monitoring these two signals usually requires landmark detection for the selection of a region of interest. Other approaches combine the respiratory signals coming from both RF and RM, obtaining a Mixed Respiratory (MR) signal. The detection and classification of apneas, which are particularly common in preterm infants with low birth weight, would benefit from monitoring both the RF and RM, or MR, signals. Therefore, in this work we propose an automatic RF pixel detector that is not based on facial/body landmarks. The method exploits the property that RF pixels in thermal videos lie in areas with a smooth circular gradient. We defined five features, combined with a bank of Gabor filters, that together allow selection of the RF pixels.
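The five features themselves are described in the original work; as a minimal NumPy-only sketch of the Gabor-filter-bank component, with all kernel parameters purely illustrative, one could compute a per-pixel maximum response over several orientations:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lambd=8.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel (illustrative parameter values)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lambd)
    return envelope * carrier

def filter_bank_response(frame, n_orientations=8):
    """Per-pixel maximum response over a bank of oriented Gabor filters,
    applied via FFT-based (circular) convolution for brevity."""
    F = np.fft.rfft2(frame)
    responses = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        K = np.fft.rfft2(kern, s=frame.shape)     # zero-padded kernel spectrum
        responses.append(np.abs(np.fft.irfft2(F * K, s=frame.shape)))
    return np.max(responses, axis=0)              # strongest orientation per pixel
```

Thresholding such a response map is one simple way to obtain candidate RF pixels; the actual detector combines it with the five handcrafted features.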
The algorithm was tested on thermal recordings of 9 infants, amounting to a total of 132 min acquired in a neonatal ward. On average, the percentage of correctly identified RF pixels was 84%. Obstructive Apneas (OAs) were simulated as a proof of concept to demonstrate the advantage of monitoring the RF signal over the MR signal. Sensitivity in the simulated OA detection improved for the RF signal, reaching 73% against the 23% of the MR signal. Overall, the method yielded promising results, although the positioning and number of cameras could be further optimized for RF visibility.

In order to meet the assistance requirements of extravehicular activity (EVA) for astronauts, such as moving outside the International Space Station (ISS) or performing on-orbit tasks as a single astronaut, this paper proposes an astronaut robotic limbs system (AstroLimbs) for extravehicular activity assistance. The system has two robotic limbs that can be fixed on the astronaut's backpack. Each limb is composed of several basic module units with identical structure and function, which makes the system modular and reconfigurable. The robotic limbs can work as extra arms of the astronaut, assisting them outside the space station cabin. In this paper, the robotic limbs are designed and developed. A reinforcement learning method is introduced to achieve autonomous motion planning, making the robot intelligent enough to assist the astronaut in unstructured environments. In the meantime, the movement of the robot is also planned so that it moves smoothly. A structured EVA scene of the ISS is modeled in a simulation environment, and the simulations verify the effectiveness of the proposed method.

Engineering education benefits from the application of modern technology, allowing students to learn essential Science, Technology, Engineering, and Mathematics (STEM) concepts through hands-on experiences.
Robotic kits have been used as an innovative tool in some educational fields and have been readily accepted and adopted. However, most of the time, using such kits requires an understanding of basic concepts that is not always appropriate for the student's level. A critical concept in engineering is the Cartesian Coordinate System (CCS), an essential tool across engineering disciplines, from graphing functions to data analysis in robotics and control applications and beyond. This paper presents the design and implementation of a novel Two-Dimensional Cartesian Coordinate System Educational Toolkit (2D-CACSET) to teach two-dimensional representations as a first step towards constructing spatial thinking. This innovative educational toolkit is based on real-time location systems using Ultra-Wide Band technology. It comprises a workbench; four Anchors pinpointing the X+, X-, Y+ and Y- axes; seven Tags representing points in the plane; one Listener connected to a PC that collects the positions of the Tags; and a Graphical User Interface displaying these positions. The Educational Mechatronics Conceptual Framework (EMCF) enables constructing knowledge at concrete, graphic, and abstract levels, so students acquire knowledge they can apply further down their career path. For this paper, three instructional designs were developed using the 2D-CACSET and the EMCF to learn about coordinate axes, quadrants, and points in the CCS.

Development boards, Single-Board Computers (SBCs) and Single-Board Microcontrollers (SBMs) integrating sensors and communication technologies have become a very popular and interesting solution in the last decade. They are of interest for their simplicity, versatility, adaptability, and ease of use and prototyping, which allow them to serve as a starting point for projects and as a reference for all kinds of designs.
In this sense, there are innumerable applications integrating sensors and communication technologies where they are increasingly used, including robotics, domotics, testing and measurement, Do-It-Yourself (DIY) projects, Internet of Things (IoT) devices in the home or workplace, and education, both in science and technology teaching and in the academic world for STEAM (Science, Technology, Engineering, Arts and Mathematics) skills. Interest in single-board architectures and their applications has led electronics manufacturers to develop low-cost single-board platform solutions. In this paper, we present an analysis of the most important topics related to single-board architectures integrating sensors. We analyze the most popular platforms based on characteristics such as cost, processing capacity, integrated processing technology and open-source licensing, as well as power consumption (mA@V), reliability (%), programming flexibility, support availability and electronic utilities. For the evaluation, an experimental framework was designed and implemented with six sensors (temperature, humidity, CO2/TVOC, pressure, ambient light and CO) and different data storage and monitoring options: locally on a μSD (Micro Secure Digital) card, on a Cloud Server, on a Web Server, or in a Mobile Application.

A major challenge with current wearable electronics and e-textiles, including sensors, is power supply. As an alternative to batteries, energy can be harvested from various sources using garments or other textile products as a substrate. Four different energy-harvesting mechanisms relevant to smart textiles are described in this review. Photovoltaic energy-harvesting technologies relevant to textile applications include high-efficiency flexible inorganic films, printable organic films, dye-sensitized solar cells, and photovoltaic fibers and filaments.
In terms of piezoelectric systems, this article covers polymers, composites/nanocomposites, and piezoelectric nanogenerators. The latest developments in textile triboelectric energy harvesting comprise films/coatings, fibers/textiles, and triboelectric nanogenerators. Finally, thermoelectric energy harvesting applied to textiles can rely on inorganic and organic thermoelectric modules. The article ends with perspectives on the current challenges and possible strategies for further progress.

Altitude estimation is one of the fundamental tasks of unmanned aerial vehicle (UAV) automatic navigation, which aims to accurately and robustly estimate the relative altitude between the UAV and specific areas. However, most methods rely on auxiliary signal reception or expensive equipment, which are not always available or applicable owing to signal interference, cost, or power-consumption limitations in real application scenarios. In addition, fixed-wing UAVs have more complex kinematic models than vertical take-off and landing UAVs. Therefore, an altitude estimation method that can be robustly applied to fixed-wing UAVs in GPS-denied environments must be considered. In this paper, we present a method for high-precision altitude estimation that combines vision information from a monocular camera and pose information from the inertial measurement unit (IMU) through a novel end-to-end deep neural network architecture. Our method has numerous advantages over existing approaches. First, we utilize visual-inertial information and physics-based reasoning to build an ideal altitude model that provides general applicability and data efficiency for neural network learning. A further advantage is a novel feature fusion module that simplifies the tedious manual calibration and synchronization of the camera and IMU, which standard visual or visual-inertial methods require to obtain the data association for altitude estimation modeling.
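The exact fusion architecture is not detailed in this summary; as a minimal NumPy sketch of one common late-fusion pattern (project each modality into a shared space, combine, then regress a scalar altitude), with the dimensions, weights, and function names purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 128-d visual embedding and a 6-d IMU reading
# (3-axis accelerometer + 3-axis gyroscope); the paper's actual
# dimensions and learned weights are not specified here.
VIS_DIM, IMU_DIM, FUSED_DIM = 128, 6, 64

W_vis = rng.normal(scale=0.1, size=(FUSED_DIM, VIS_DIM))
W_imu = rng.normal(scale=0.1, size=(FUSED_DIM, IMU_DIM))
w_out = rng.normal(scale=0.1, size=FUSED_DIM)

def fuse_and_estimate(visual_feat, imu_feat):
    """Project each modality, sum in the shared space, apply a
    nonlinearity, then regress a scalar altitude. Random weights:
    this only illustrates the data flow, not a trained model."""
    fused = np.tanh(W_vis @ visual_feat + W_imu @ imu_feat)
    return float(w_out @ fused)   # estimated relative altitude (arbitrary units)
```

In a trained network the projection matrices would be learned end-to-end, which is what lets such a module absorb small calibration and synchronization offsets between the two sensor streams.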
Finally, the proposed method was evaluated and validated using real flight data obtained during the landing phase of a fixed-wing UAV. The results show that the average estimation error of our method is less than 3% of the actual altitude, which greatly improves altitude estimation accuracy compared with other visual and visual-inertial methods.
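A figure such as "error below 3% of the actual altitude" suggests a relative-error metric; a minimal sketch, assuming a mean absolute relative error definition (the paper's exact metric may differ):

```python
import numpy as np

def mean_relative_error(est, truth):
    """Mean of |estimate - truth| / |truth| over a trajectory.
    Assumed definition; the paper may average differently."""
    est = np.asarray(est, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.mean(np.abs(est - truth) / np.abs(truth)))

# Two sample altitudes, each estimated within 2% of ground truth
print(mean_relative_error([98.0, 51.0], [100.0, 50.0]))  # → 0.02
```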