Bidstruphodges2671

From Iurium Wiki

Revision as of 19:24, 20 September 2024, created by Bidstruphodges2671 (talk | contribs) (New page created with the text "Stroke is the leading cause of severe disability in adults resulting in mobility, balance, and coordination deficits. Robotic exoskeletons (REs) for stroke…")

Stroke is the leading cause of severe disability in adults, resulting in mobility, balance, and coordination deficits. Robotic exoskeletons (REs) for stroke rehabilitation can provide the user with consistent, high-dose repetition of movement, as well as balance and stability. The goal of this intervention study was to evaluate the ability of an RE to provide high-dose gait therapy and the resulting effect on functional recovery for individuals with acute stroke. The investigation included a total of 44 participants: twenty-two received RE gait training during inpatient rehabilitation (RE+SOC group), and a matched sample of 22 individuals admitted to the same inpatient rehabilitation facility received conventional standard-of-care treatment (SOC group). The effect of RE training was quantified using the total distance walked during inpatient rehabilitation and the Functional Independence Measure (FIM). The total distance walked during inpatient rehabilitation differed significantly between the SOC and RE+SOC groups: the RE+SOC group walked twice the distance of the SOC group over the same duration (time spent in inpatient rehabilitation) of training. In addition, the average change in motor FIM differed significantly between the groups, being higher in the RE+SOC group than in the SOC group. The results suggest that the RE provided an increased dose of gait training without increasing the duration of training during acute stroke rehabilitation. The RE+SOC group showed a greater increase in motor FIM score (change from admission to discharge) than the SOC group; since both groups were matched on admission motor FIM scores, this suggests that the increased dosing may have improved motor function.

The elderly population has grown rapidly in recent years, bringing a huge demand for devices that serve the elderly, especially those with mobility impairment.
Existing assistive walkers designed for elderly users are primitive, with limited user interactivity and intelligence. We propose a novel smart robotic walker that serves as a convenient-to-use indoor walking aid for the elderly. The walker supports multiple modes of interaction through voice, gait, or haptic touch, and allows intelligent control via learning-based methods to achieve mobility safety. Our design enables a flexible, proactive, and reliable walker for the following reasons: (1) we take a hybrid approach, combining a conventional mobile robotic platform with an existing rollator design to achieve a novel robotic system that fulfills the expected functionalities; (2) our walker tracks users in front of it by detecting lower-limb gait, while providing close-proximity walking safety support; (3) our walker can detect human intentions and predict emergency events, e.g., falling, by monitoring force pressure on a specially designed soft-robotic interface on the handle; (4) our walker performs reinforcement learning-based sound source localization to locate and navigate to the user based on their voice signals. Experimental results demonstrate the sturdy mechanical structure, the reliability of the multiple novel interactions, and the efficiency of the intelligent control algorithms implemented. A demonstration video is available at https://sites.google.com/view/smart-walker-hku.

Quantifying rat behavior through video surveillance is crucial for medicine, neuroscience, and other fields. In this paper, we focus on the challenging problem of estimating landmark points, such as the rat's eyes and joints, using image processing alone, and quantify the motion behavior of the rat. Firstly, we placed the rat on a special running machine and used a high-frame-rate camera to capture its motion. Secondly, we designed the cascade convolution network (CCN) and the cascade hourglass network (CHN), two structures for extracting features from the images.
Three coordinate calculation methods, fully connected regression (FCR), heatmap maximum position (HMP), and heatmap integral regression (HIR), were used to locate the coordinates of the landmark points. Thirdly, using a strict normalized evaluation criterion, we analyzed the accuracy of the different structures and coordinate calculation methods for rat landmark point estimation at various feature map sizes. The results demonstrated that the CCN structure with the HIR method achieved the highest estimation accuracy of 75%, which is sufficient to accurately track and quantify rat joint motion.

Understanding why deep neural networks and machine learning algorithms act as they do is a difficult endeavor. Neuroscientists face similar problems. One way biologists address this issue is by closely observing behavior while recording neurons or manipulating brain circuits; this has been called neuroethology. In a similar way, neurorobotics can be used to explain how neural network activity leads to behavior. In real-world settings, neurorobots have been shown to perform behaviors analogous to those of animals. Moreover, a neuroroboticist has total control over the network, and by analyzing different neural groups or studying the effect of network perturbations (e.g., simulated lesions), they may be able to explain how the robot's behavior arises from artificial brain activity. In this paper, we review neurorobot experiments, focusing on how the robot's behavior leads to a qualitative and quantitative explanation of neural activity, and vice versa, that is, how neural activity leads to behavior.
We suggest that using neurorobots as a form of computational neuroethology can be a powerful methodology for understanding neuroscience, as well as artificial intelligence and machine learning.

Traditionally, the Perception-Action cycle is the first stage of building an autonomous robotic system and a practical way to implement a low-latency reactive system within a low Size, Weight and Power (SWaP) package. However, in complex scenarios this method can lack contextual understanding of the scene, such as object-recognition-based tracking or system attention. Object detection, identification, and tracking, along with semantic segmentation and attention, are all modern computer vision tasks in which Convolutional Neural Networks (CNNs) have shown significant success, although such networks often have large computational overhead and power requirements, which are not ideal for smaller robotics tasks. Furthermore, cloud computing and massively parallel processing, as in Graphics Processing Units (GPUs), fall outside the specification of many tasks due to their respective latency and SWaP constraints. In response to this, Spiking Convolutional Neural Networks (SCNNs) aim to provide the feature-extraction benefits of CNNs while maintaining low latency and power overhead thanks to their asynchronous, spiking, event-based processing.
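As an illustration of the heatmap-based coordinate methods mentioned in the rat landmark estimation abstract above, the following minimal NumPy sketch contrasts heatmap maximum position (HMP) with heatmap integral regression (HIR, also known as soft-argmax). The `beta` sharpness parameter and the synthetic heatmap are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_argmax(heatmap, beta=10.0):
    """Heatmap integral regression (HIR / soft-argmax): treat the
    sharpened heatmap as a probability map and return the expected
    (x, y) position, which can fall between pixels."""
    h, w = heatmap.shape
    p = np.exp(beta * (heatmap - heatmap.max()))  # numerically stable softmax
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((xs * p).sum()), float((ys * p).sum())

def heatmap_max_position(heatmap):
    """HMP: coordinate of the single strongest response (integer pixel)."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return float(x), float(y)

# Synthetic Gaussian heatmap centred between pixels at (x=12.3, y=7.6).
hm = np.fromfunction(
    lambda y, x: np.exp(-((x - 12.3) ** 2 + (y - 7.6) ** 2) / (2 * 2.0 ** 2)),
    (32, 32),
)
print(heatmap_max_position(hm))  # (12.0, 8.0) -- snapped to the pixel grid
print(soft_argmax(hm))           # approximately (12.3, 7.6)
```

Because HIR takes a probability-weighted average over all pixel positions, it can recover sub-pixel landmark coordinates and is differentiable end-to-end, whereas HMP snaps to the integer pixel grid.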

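The asynchronous, event-based processing that SCNNs rely on can be illustrated with a toy leaky integrate-and-fire (LIF) neuron. This is the generic textbook model with illustrative parameters, not code from any of the works above.

```python
def lif_spikes(input_current, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron: the membrane potential
    leaks toward zero, accumulates input each step, and emits a
    binary spike (then resets) when it crosses the threshold."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i        # leaky integration of input
        if v >= threshold:
            spikes.append(1)    # emit spike
            v = 0.0             # reset membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input only spikes once enough charge
# has accumulated; with no input, the neuron stays silent.
print(lif_spikes([0.3] * 8))   # [0, 0, 0, 1, 0, 0, 0, 1]
```

Sparse binary spikes like these, rather than dense floating-point activations, are what allow spiking hardware to skip computation when nothing in the input changes, which is the source of the low latency and power overhead claimed for SCNNs.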
Article authors: Bidstruphodges2671 (Rindom Hermann)