Tracking the eye of a blind patient can enhance the usability of an artificial vision system. In systems where the sensing element, i.e. the scene camera that captures the visual information, is mounted on the patient's head, the user must rely on head scanning to steer the line of sight of the implant to the region of interest. Integrating an eye tracker into the prosthesis will enable scanning with eye movements: the eye position will set the region of interest within the wide field of view of the scene camera. An essential requirement of an eye tracker is calibration, and off-the-shelf calibration methods that require looking at known points in space obviously cannot be used with blind users. Here we tested the feasibility of calibrating the eye tracker from pupil position and the location of the percept reported by the implant recipient using a handheld marker. Pupil positions were extracted by custom image processing in a field-programmable gate array built into a glasses-mounted eye tracker. In the calibration process, electrodes were stimulated directly and the subject reported the location of the percept with a handheld marker. Linear regression was used to extract the transfer function from pupil position to gaze direction in the coordinates of the scene camera. Using the eye tracker with the proposed calibration method, patients demonstrated improved precision on a localization task with a corresponding reduction of head movements.

Vestibular perception helps maintain heading direction and supports successful spatial navigation. In this study, we present a novel apparatus capable of delivering both rotational and translational movements, namely the RT-Chair. The system comprises two motors and is controlled by the user via MATLAB. To validate the measurability of vestibular perception with the RT-Chair, we ran a threshold measurement experiment with healthy participants.
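The pupil-to-gaze calibration described in the eye-tracker abstract above amounts to a per-axis linear regression from pupil coordinates to scene-camera coordinates. A minimal sketch, assuming an affine model and synthetic pupil/marker pairs (the real data, gains, and offsets below are hypothetical):

```python
import numpy as np

# Hypothetical data for illustration: pupil centres (pixels in the eye
# camera) paired with the percept locations the recipient indicated with
# a handheld marker, in scene-camera pixel coordinates. Here the marker
# points are synthesised from a known affine map so the fit is verifiable.
pupil = np.array([[12.0, 5.0], [30.0, 6.0], [13.0, 22.0],
                  [31.0, 23.0], [21.0, 14.0]])
M_true = np.array([[10.0, 0.5], [-0.3, 12.0]])   # assumed gain matrix
b_true = np.array([40.0, 25.0])                  # assumed offset
marker = pupil @ M_true + b_true

# Affine model gaze = [px, py, 1] @ A, one least-squares fit per axis,
# mirroring the linear regression described in the abstract.
X = np.hstack([pupil, np.ones((len(pupil), 1))])
A, *_ = np.linalg.lstsq(X, marker, rcond=None)

def pupil_to_gaze(p):
    """Map a pupil position to a gaze point in scene-camera coordinates."""
    return np.array([p[0], p[1], 1.0]) @ A
```

With noiseless affine data the fit recovers the true mapping exactly; with real, noisy calibration points the same least-squares step yields the best-fit transfer function.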
Our results show thresholds comparable to previous literature, confirming the validity of the system for measuring vestibular perception.

Reverberation reduces speech quality and therefore inconveniences listeners, especially those using assistive hearing devices. A key step in enhancing the quality of reverberant speech is speech quality assessment, which has mostly relied on subjective judgements. Subjective evaluations vary with listeners' perception and emotional and mental states. To obtain an objective assessment of speech quality in reverberation, this work carried out an event-related potential (ERP) study using a passive oddball paradigm. Listeners were presented with anechoic speech as the standard stimulus, mixed with reverberant speech under different levels of reverberation as deviant stimuli. The ERP responses reveal how listeners' subconscious processing interacts with different levels of reverberation in the perceived speech. Results showed that the peak amplitude of the P300 in the ERP responses followed the variation of reverberation time in the reverberant speech, providing evidence that the P300 could serve as a neural surrogate of reverberation time in objective speech quality assessment.

Designing prosthetic hands for children is challenging due to the limited space for electronics and the need to reduce cost to accommodate the constant growth of the child's hand. In this paper, we propose an anthropomorphic hand prosthesis for children using a monolithic design and 3D printing of soft/compliant materials. The monolithic soft robotic structure provides the lightweight and compact design required in paediatric hand prostheses. 3D printing also allows customised products to be manufactured at low volumes in a cost-effective way, which is of particular interest for paediatric prosthetic hands.
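Returning to the ERP study above: the P300 measure it relies on is the peak of the averaged deviant response within a post-stimulus window. A minimal sketch, not the authors' pipeline; the sampling rate, 250-500 ms search window, and synthetic epochs are assumptions:

```python
import numpy as np

# Illustrative P300 peak estimation from epoched EEG for one deviant
# condition. Synthetic data: 40 epochs of a Gaussian "P300" centred at
# 350 ms plus Gaussian noise.
fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.8, 1.0 / fs)         # epoch time axis (s)

rng = np.random.default_rng(0)
p300 = 6.0 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))
epochs = p300 + rng.normal(0.0, 2.0, size=(40, t.size))

erp = epochs.mean(axis=0)                  # average ERP across epochs
win = (t >= 0.25) & (t <= 0.5)             # typical P300 search window
peak_amp = erp[win].max()                  # P300 peak amplitude
peak_lat = t[win][np.argmax(erp[win])]     # P300 peak latency (s)
```

In the study, this peak amplitude, computed per reverberation condition, is what tracked the reverberation time of the deviant stimuli.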
The proposed hand/arm design has a total weight of 230 g including the battery and the actuation and control systems, and a size similar to the biological hand of 5- to 7-year-old children. The hand provides two grasp types, pinch/tripod and power (cylindrical and spherical), and is controlled using two surface electromyography electrodes. The capability of the proposed hand prosthesis is demonstrated by grasping objects of different shapes and sizes.

The electromyography-based pattern recognition (EMG-PR) framework has been investigated for almost three decades towards developing an intuitive myoelectric prosthesis. To exploit knowledge of the neurophysiological processes underlying natural movements, the concept of muscle synergy has been applied to prosthesis control and has recently shown great potential. For a muscle-synergy-based myoelectric system, however, variation in muscle contraction force is a confounding factor. This study evaluates the robustness of muscle synergies under varying force levels for forearm movements. Six channels of forearm surface EMG were recorded from six healthy subjects as they performed four movements (hand open, hand close, wrist flexion, and wrist extension) at low, moderate, and high force, respectively. Muscle synergies were extracted from the EMG using the alternating nonnegativity-constrained least squares and active set (NNLS) algorithm. Three analytic strategies were adopted to examine whether muscle synergies were conserved across force levels. Our results consistently showed that fixed, robust muscle synergies exist across force levels. This outcome provides valuable insights for the implementation of muscle-synergy-based assistive technology for the upper extremity.

Electromyogram (EMG) pattern recognition has been used with traditional machine learning and deep learning architectures as a control strategy for upper-limb prostheses.
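The synergy extraction in the muscle-synergy study above factorises a nonnegative EMG envelope matrix into synergy vectors and their activations. The study used an alternating NNLS active-set solver; the sketch below substitutes the simpler multiplicative-update NMF rule (a deliberate simplification, fitting the same model) and checks it on synthetic data built from two known synergies:

```python
import numpy as np

def extract_synergies(emg, n_syn, n_iter=500, seed=0):
    """Factorise a nonnegative EMG envelope matrix (channels x samples)
    into synergies W (channels x n_syn) and activations H (n_syn x samples)
    using multiplicative-update NMF. Illustration only: the study used an
    alternating NNLS active-set solver for the same factorisation."""
    rng = np.random.default_rng(seed)
    n_ch, n_samp = emg.shape
    W = rng.random((n_ch, n_syn)) + 1e-6
    H = rng.random((n_syn, n_samp)) + 1e-6
    for _ in range(n_iter):
        # Standard Lee-Seung updates; the 1e-12 terms avoid division by zero.
        H *= (W.T @ emg) / (W.T @ W @ H + 1e-12)
        W *= (emg @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy check: 6 channels (as in the study) generated by 2 known synergies.
rng = np.random.default_rng(1)
W_true = rng.random((6, 2))
H_true = rng.random((2, 200))
emg = W_true @ H_true
W, H = extract_synergies(emg, n_syn=2)
# Variance accounted for by the reconstruction.
vaf = 1.0 - np.linalg.norm(emg - W @ H) ** 2 / np.linalg.norm(emg) ** 2
```

Comparing the W matrices fitted at low, moderate, and high force (e.g. by cosine similarity of matched columns) is one way to test the conservation of synergies the study reports.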
However, most of these architectures, including convolutional neural networks, capture only spatial correlations, whereas muscle contractions have strong temporal dependencies. Our primary aim in this paper is to investigate the effectiveness of recurrent deep learning networks in EMG classification, as they can learn long-term and non-linear dynamics of time series. We used a Long Short-Term Memory (LSTM)-based neural network to perform multiclass classification of six grip gestures at three force levels (low, medium, and high) generated by nine amputees. Four different feature sets were extracted from the raw signals and fed to the LSTM. Moreover, to investigate the generalization of the proposed method, three training approaches were tested: 1) training the network with features extracted at one force level and testing it at the same force level, 2) training the network at one force level and testing it at the two remaining force levels, and 3) training the network with all force levels and testing it at a single force level.
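The abstract does not name its four feature sets, but windowed time-domain features are the usual front end for EMG classifiers. A hedged sketch of one common set (MAV, waveform length, zero crossings, slope-sign changes; the threshold and synthetic window are assumptions), producing the kind of feature vectors a per-window sequence of which would be fed to an LSTM:

```python
import numpy as np

def td_features(win, thr=0.01):
    """Four classic time-domain EMG features for one analysis window:
    mean absolute value (MAV), waveform length (WL), zero crossings (ZC),
    and slope-sign changes (SSC). `thr` suppresses noise-level crossings."""
    mav = np.mean(np.abs(win))
    wl = np.sum(np.abs(np.diff(win)))
    zc = np.sum((win[:-1] * win[1:] < 0) &
                (np.abs(win[:-1] - win[1:]) > thr))
    d = np.diff(win)
    ssc = np.sum((d[:-1] * d[1:] < 0) &
                 ((np.abs(d[:-1]) > thr) | (np.abs(d[1:]) > thr)))
    return np.array([mav, wl, zc, ssc], dtype=float)

# Example: one 200-sample window of synthetic EMG-like noise. In the
# pipeline described above, a sliding window over each channel would
# yield a sequence of such vectors as the LSTM input.
rng = np.random.default_rng(0)
window = rng.normal(0.0, 0.1, 200)
feats = td_features(window)
```

Stacking these vectors across the six channels and across time gives the (timesteps × features) sequences that recurrent networks consume, which is what lets the LSTM exploit the temporal dependencies that purely spatial architectures miss.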