Bigumowen1604

Furthermore, the results also reveal that the proposed approach offers a more tractable and higher-quality (or competitive) solution than existing attention-based models, a stochastic heuristic approach, and a standard mixed-integer programming solver under the given experimental conditions. Finally, the various experimental evaluations show that the proposed data-generation approach for training the model is highly effective.

Session-based recommendation tries to make use of anonymous session data to deliver high-quality recommendations when user profiles and the complete historical behavioral data of a target user are unavailable. Previous works consider each session individually and try to capture user interests within that session. Despite their encouraging results, these models can only perceive intra-session items and cannot draw upon the massive amount of historical relational information. To solve this problem, we propose a novel method named global graph guided session-based recommendation (G^3SR). G^3SR decomposes the session-based recommendation workflow into two steps. First, a global graph is built over all session data, from which global item representations are learned in an unsupervised manner. Then, these representations are refined on session graphs by graph networks, and a readout function generates a representation for each session. Extensive experiments on two real-world benchmark datasets show remarkable and consistent improvements of the G^3SR method over state-of-the-art methods, especially for cold items.

Chemical species tomography (CST) has been widely used for in situ imaging of critical parameters, e.g., species concentration and temperature, in reactive flows. However, even with state-of-the-art computational algorithms, the method is limited by the inherently ill-posed, rank-deficient tomographic data inversion and by high computational cost. These issues hinder its application to real-time flow diagnosis. To address them, we present a novel convolutional neural network, CSTNet, for high-fidelity, rapid, and simultaneous imaging of species concentration and temperature using CST. CSTNet introduces a shared feature extractor that incorporates the CST measurements and sensor layout into the learning network. In addition, a dual-branch decoder with internal crosstalk, which automatically learns the naturally correlated distributions of species concentration and temperature, is proposed for image reconstruction. The proposed CSTNet is validated both on simulated datasets and on data measured from real flames in experiments using an industry-oriented sensor. It outperforms previous approaches in reconstruction accuracy and in robustness to measurement noise. To the best of our knowledge, this is the first time a deep learning-based method for CST has been experimentally validated for simultaneous imaging of multiple critical parameters in reactive flows using a low-complexity optical sensor with a severely limited number of laser beams.

The human ankle joint interacts with the environment during ambulation to provide mobility and maintain stability, and this interaction changes with the different gait patterns of day-to-day life. In this study, we investigated this interaction and extracted kinematic information to classify the human walking mode as upstairs, downstairs, treadmill, overground, or stationary in real time using a single-DoF IMU axis. The proposed algorithm's uniqueness is twofold: it encompasses components of the ankle's biomechanics and subject-specificity through the extraction of inherent walking attributes and user calibration. A performance analysis with forty healthy participants (mean age 26.8 ± 5.6 years) yielded accuracies of 89.57% and 87.55% for the left and right sensors, respectively. The study also presents heuristics that combine the predictions from the sensors at both feet into a single, conclusive decision with better performance measures. The algorithm's simplicity and reliability in healthy participants, together with a case study observing inherent multimodal walking features in elderly participants similar to those of young adults, demonstrate its potential as a high-level automatic switching framework in robotic gait interventions for multimodal walking.
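
As a toy illustration of the two-step G^3SR workflow above, the following sketch builds a global item graph from all sessions, derives unsupervised item embeddings from it, and pools a session's items with a soft-attention readout. The embedding method (truncated SVD of the normalized adjacency) and the last-item-as-query readout are illustrative assumptions, not the paper's exact design.

<syntaxhighlight lang="python">
# Minimal sketch of the two-step G^3SR idea (illustrative, not the paper's
# exact model): (1) learn unsupervised item embeddings from a global graph
# built over ALL sessions; (2) pool a session's item embeddings ("readout")
# into a session representation used to score candidate items.
import numpy as np

sessions = [[0, 1, 2], [1, 2, 3], [2, 3, 4, 1]]   # toy anonymous sessions
n_items = 5

# Step 1: global item-transition graph over all sessions.
A = np.zeros((n_items, n_items))
for s in sessions:
    for u, v in zip(s, s[1:]):
        A[u, v] += 1.0
        A[v, u] += 1.0                            # treated as undirected here

# Unsupervised global item representations via truncated SVD of the
# degree-normalized adjacency (a stand-in for the paper's embedding step).
deg = A.sum(axis=1, keepdims=True) + 1e-9
U, S, _ = np.linalg.svd(A / deg)
k = 3
item_emb = U[:, :k] * S[:k]                       # (n_items, k)

# Step 2: per-session readout. Soft attention over the session's items,
# with the last clicked item acting as the query (a common choice).
def session_readout(session):
    E = item_emb[session]                         # (session_len, k)
    q = E[-1]                                     # last clicked item
    w = np.exp(E @ q)
    w /= w.sum()                                  # attention weights
    return w @ E                                  # session representation

scores = item_emb @ session_readout([0, 1, 2])    # score all items
print(np.argsort(-scores))                        # ranked recommendations
</syntaxhighlight>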
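
The CSTNet layout described above, a shared feature extractor feeding a dual-branch decoder with internal crosstalk, can be sketched as below. The layer sizes, the measurement-to-grid lifting, and the crosstalk wiring are our assumptions for illustration, not the paper's exact architecture.

<syntaxhighlight lang="python">
# Hedged sketch of the CSTNet idea: a shared extractor over the CST beam
# measurements, then two decoder branches (concentration / temperature)
# that each also see the other branch's features (internal crosstalk).
import torch
import torch.nn as nn

class CSTNetSketch(nn.Module):
    def __init__(self, n_beams=32, grid=16):
        super().__init__()
        self.grid = grid
        # Shared extractor: lifts 1-D beam measurements to a coarse 2-D map.
        self.shared = nn.Sequential(nn.Linear(n_beams, grid * grid), nn.ReLU())
        mk = lambda: nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.conc_head, self.temp_head = mk(), mk()
        # Crosstalk: each output layer consumes both branches' features.
        self.conc_out = nn.Conv2d(16, 1, 3, padding=1)
        self.temp_out = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, y):                          # y: (batch, n_beams)
        z = self.shared(y).view(-1, 1, self.grid, self.grid)
        fc, ft = self.conc_head(z), self.temp_head(z)
        conc = self.conc_out(torch.cat([fc, ft], dim=1))   # concentration map
        temp = self.temp_out(torch.cat([ft, fc], dim=1))   # temperature map
        return conc, temp

conc, temp = CSTNetSketch()(torch.randn(4, 32))
print(conc.shape, temp.shape)                      # (4, 1, 16, 16) each
</syntaxhighlight>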
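
For the two-sensor fusion heuristic mentioned in the gait-classification abstract, one plausible rule (an assumption, not the paper's exact scheme) is to keep agreeing predictions and otherwise trust the more confident sensor:

<syntaxhighlight lang="python">
# Illustrative fusion heuristic for the per-foot walking-mode predictions:
# agreement -> keep; disagreement -> take the higher-confidence sensor.
MODES = ("upstairs", "downstairs", "treadmill", "overground", "stationary")

def fuse(left_pred, left_conf, right_pred, right_conf):
    """left_pred/right_pred are modes; *_conf are confidences in [0, 1]."""
    assert left_pred in MODES and right_pred in MODES
    if left_pred == right_pred:
        return left_pred
    return left_pred if left_conf >= right_conf else right_pred

print(fuse("treadmill", 0.91, "overground", 0.62))   # -> "treadmill"
</syntaxhighlight>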

Due to its high robustness to artifacts, the steady-state visual evoked potential (SSVEP) has been widely applied to construct high-speed brain-computer interfaces (BCIs). Thus far, many spatial filtering methods have been proposed to enhance the target-identification performance of SSVEP-based BCIs, and task-related component analysis (TRCA) is among the most effective. In this paper, we further extend TRCA and propose a new method called Latency Aligning TRCA (LA-TRCA), which aligns the visual latencies of the channels to obtain accurate phase information from task-related signals. According to SSVEP wave propagation theory, the SSVEP spreads from the posterior occipital areas over the cortex with a fixed phase velocity. By estimating this phase velocity from the phase shifts between channels, the visual latency of each channel can be determined for inter-channel alignment. TRCA is then applied to the aligned data epochs for target recognition. For validation, the classification performance of the proposed LA-TRCA was compared with that of other TRCA-based extensions on two different SSVEP datasets. The experimental results show that LA-TRCA outperforms the other TRCA-based extensions, demonstrating the effectiveness of the proposed approach for enhancing SSVEP detection performance.

Electroencephalogram (EEG) electrodes are critical devices for brain-computer interfaces and neurofeedback. In this paper, a pre-gelled (PreG) electrode was developed for EEG signal acquisition with a short installation time and good comfort. A hydrogel probe is placed on the Ag/AgCl electrode in advance, before the EEG headband is worn, replacing the time-consuming gel injection performed after the headband is in place. The impedance characteristics of the PreG electrode and a conventional wet electrode were compared, and both were evaluated in a brain-computer interface (BCI) application experiment. The average impedance of the PreG electrode can be decreased to 43 kΩ or even lower, which is higher than that of the wet electrode at 8 kΩ. However, there is no significant difference in classification accuracy or information transfer rate (ITR) between the PreG electrode and the wet electrode in a 40-target BCI system based on the SSVEP. This study validated the efficiency of the proposed PreG electrode in SSVEP-based BCI; the PreG electrode is thus an excellent substitute for wet electrodes in practical applications, offering convenience and good comfort.
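
The latency-alignment step of LA-TRCA can be sketched as follows: estimate each channel's SSVEP phase at the stimulus frequency, convert the inter-channel phase shifts into sample delays, and shift the channels into alignment before applying TRCA. The lock-in-style phase estimate, the channel-0 reference, and the rounding to whole samples are simplifying assumptions of this sketch.

<syntaxhighlight lang="python">
# Sketch of inter-channel latency alignment from SSVEP phase shifts.
import numpy as np

def align_channels(X, f_stim, fs):
    """X: (n_channels, n_samples) SSVEP epoch; returns latency-aligned copy."""
    n_ch, n_s = X.shape
    t = np.arange(n_s) / fs
    ref = np.exp(-2j * np.pi * f_stim * t)
    phase = np.angle(X @ ref)                     # per-channel phase at f_stim
    lag = np.unwrap(phase[0] - phase)             # phase lag vs. channel 0
    delay = np.rint(lag / (2 * np.pi * f_stim) * fs).astype(int)  # in samples
    # Shift each channel forward by its estimated latency.
    return np.stack([np.roll(X[c], -delay[c]) for c in range(n_ch)])

# Demo: three channels carrying the same 12 Hz SSVEP with 0/20/40 ms latency.
fs, f = 250, 12.0
t = np.arange(250) / fs
X = np.stack([np.sin(2 * np.pi * f * (t - d)) for d in (0.00, 0.02, 0.04)])
aligned = align_channels(X, f, fs)
print(np.abs(aligned[:, :200] - X[0][:200]).max())  # ~0 away from wrap edge
</syntaxhighlight>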
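
The ITR reported for the 40-target system is, in most SSVEP-BCI work, the standard Wolpaw information transfer rate; a quick implementation follows. The accuracy and trial duration in the example are made-up numbers, not values from the paper.

<syntaxhighlight lang="python">
# Wolpaw ITR: bits per trial, scaled to bits per minute.
from math import log2

def itr_bits_per_min(n_targets, p, trial_s):
    """n_targets: number of classes; p: accuracy; trial_s: seconds/trial."""
    if p <= 0 or p >= 1:
        bits = log2(n_targets) if p == 1 else 0.0
    else:
        bits = (log2(n_targets) + p * log2(p)
                + (1 - p) * log2((1 - p) / (n_targets - 1)))
    return bits * 60 / trial_s

print(f"{itr_bits_per_min(40, 0.90, trial_s=1.0):.1f} bits/min")  # ~259.5
</syntaxhighlight>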

Evaluation of position sense post-stroke is essential for rehabilitation. Position sense may be the output of a process that requires position information, external torque, and the sense of effort. Even for healthy individuals, it is unclear whether external torque affects position sense, so evaluating position sense under different external torques in clinical settings is strongly needed. However, simple devices for measuring position sense under different external torques in clinical settings are lacking, and the technologically advanced devices reported to evaluate elbow position sense under different torques are clinically infeasible because of their complexity and the need for technical experts to analyze the data. To address this unmet need, we developed a simple, lightweight elbow position sense measurement device that allows clinicians to objectively measure elbow position sense under different external torques, in the form of position-matching error, without any technical difficulties. The feasibility of the device, including intra-session intra-rater reliability and test-retest reliability over two consecutive days, was verified in tests with 25 healthy subjects. Thanks to its ease of use, high reliability, and ease of data analysis, the device is expected to help evaluate position sense post-stroke comprehensively.

Extracting concise 3D curve skeletons with existing methods remains a serious challenge, as these methods require tedious parameter adjustment to suppress the influence of shape-boundary perturbations and avoid spurious branches. In this paper, we address this challenge by better capturing prominent features and using them for skeleton extraction, motivated by the observation that a shape is mainly represented by its prominent features. Our method takes the medial mesh of the shape as input, which preserves the shape topology well. We develop a series of novel measures for simplifying and contracting the medial mesh so as to capture prominent features and represent them concisely; in this way, the influence of shape-boundary perturbations on skeleton extraction is suppressed and the quantity of data needed for skeleton extraction is significantly reduced. As a result, we can robustly and concisely extract the curve skeleton from prominent features, avoiding parameter tuning and saving computation, as our experimental results show.

Inspired by the recent PointHop classification method, an unsupervised 3D point cloud registration method called R-PointHop is proposed in this work. R-PointHop first determines a local reference frame (LRF) for every point using its nearest neighbors and derives local attributes. Next, it obtains local-to-global hierarchical features through point downsampling, neighborhood expansion, attribute construction, and dimensionality-reduction steps. Point correspondences are then built in the hierarchical feature space using the nearest-neighbor rule. Finally, a subset of salient points with good correspondences is selected to estimate the 3D transformation. The use of the LRF makes the hierarchical point features invariant to rotation and translation, which makes R-PointHop more robust at building point correspondences, even when the rotation angles are large. Experiments on the 3DMatch, ModelNet40, and Stanford Bunny datasets demonstrate the effectiveness of R-PointHop for 3D point cloud registration. Its model size and training time are an order of magnitude smaller than those of deep learning methods, and its registration errors are smaller, making it a green and accurate solution. Our code is available on GitHub (https://github.com/pranavkdm/R-PointHop).
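
The last step of the R-PointHop pipeline, estimating a rigid 3D transformation from the selected correspondences, is classically solved with the SVD-based orthogonal Procrustes (Kabsch) method sketched below; that R-PointHop uses exactly this estimator is our assumption.

<syntaxhighlight lang="python">
# Least-squares rigid transform from point correspondences (Kabsch method).
import numpy as np

def rigid_transform(P, Q):
    """Find R, t such that R @ p + t ~= q for corresponding rows of P, Q."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Toy check: recover a known rotation + translation from 50 correspondences.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
ang = np.pi / 3
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -1.0, 2.0])
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [0.5, -1.0, 2.0]))  # True True
</syntaxhighlight>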

At present, and increasingly so in the future, much of the captured visual content will never be seen by humans; instead, it will be used for automated machine-vision analytics and may only occasionally require human viewing. Examples of such applications include traffic monitoring, visual surveillance, autonomous navigation, and industrial machine vision. To address these requirements, we develop an end-to-end learned image codec whose latent space is designed to support scalability from simpler to more complicated tasks. The simplest task is assigned to a subset of the latent space (the base layer), while more complicated tasks use additional subsets of the latent space, i.e., both the base and the enhancement layer(s). For the experiments, we build a 2-layer and a 3-layer model, each offering input reconstruction for human vision plus machine-vision task(s), and compare them with relevant benchmarks. The experiments show that our scalable codecs offer 37%-80% bitrate savings on machine-vision tasks compared to the best alternatives, while remaining comparable to state-of-the-art image codecs in terms of input reconstruction.
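
A minimal sketch of the latent-space scalability described above: the latent tensor is split along channels into a base layer, decoded alone for the machine-vision task, and an enhancement layer added only when full input reconstruction is needed. The split sizes and the 1x1-conv task heads are illustrative assumptions, not the paper's architecture.

<syntaxhighlight lang="python">
# Base/enhancement partition of a learned codec's latent space (sketch).
import torch
import torch.nn as nn

latent_channels, base_ch, enh_ch = 192, 64, 128

class ScalableHeads(nn.Module):
    def __init__(self):
        super().__init__()
        self.task_head = nn.Conv2d(base_ch, 10, 1)            # machine task
        self.recon_head = nn.Conv2d(base_ch + enh_ch, 3, 1)   # human viewing

    def forward(self, y, need_reconstruction=False):
        y_base, y_enh = torch.split(y, [base_ch, enh_ch], dim=1)
        out = {"task": self.task_head(y_base)}    # only the base layer decoded
        if need_reconstruction:                   # enhancement used on demand
            out["image"] = self.recon_head(torch.cat([y_base, y_enh], dim=1))
        return out

y = torch.randn(1, latent_channels, 16, 16)       # stand-in encoder output
print(ScalableHeads()(y, need_reconstruction=True)["image"].shape)
</syntaxhighlight>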
