Deep learning multimodal fNIRS and EEG signals for bimanual grip force decoding

Authors

Abstract

Objective. Non-invasive brain-machine interfaces (BMIs) offer an alternative, safe and accessible way to interact with the environment. To enable meaningful and stable physical interactions, BMIs need to decode forces. Although previously addressed in the unimanual case, controlling forces from both hands would let BMI users perform a greater range of interactions. Here we investigate the decoding of hand-specific forces. Approach. We maximise the available cortical information by using both electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), and we develop a deep-learning architecture with attention and residual layers (cnnatt) to improve their fusion. Our task required participants to generate hand-specific force profiles, on which we trained and tested our deep-learning and linear decoders. Main results. The use of EEG and fNIRS improved the decoding of bimanual forces, and the deep-learning models outperformed the linear model. In both cases, the greatest gain in performance was due to the detection of force generation. In particular, detection was better for the right, dominant hand, and cnnatt was better at fusing EEG and fNIRS. Consequently, the study revealed that the forces of each hand were differently encoded at the cortical level. Cnnatt also revealed traces of cortical activity being modulated by force level that were not found with the linear models. Significance. Our results can be applied to avoid hand cross-talk during hand-specific force decoding and to increase the robustness of BMI-controlled robotic devices. The fusion of multimodal signals and network interpretability are also valuable for motor rehabilitation assessment.
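The page provides only the abstract, so the exact cnnatt design is not specified here. As a rough illustration of the kind of architecture the abstract describes (per-modality convolutional encoders with residual layers, fused by an attention layer before a force-decoding head), a minimal PyTorch sketch follows. All module names, channel counts and layer sizes are hypothetical assumptions, not the paper's specification.

```python
# Minimal illustrative sketch, NOT the paper's actual cnnatt architecture:
# one residual convolutional encoder per modality, fused by an attention
# layer before a force-decoding head. All shapes and names are assumed.
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=5, padding=2),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.conv(x))  # residual (skip) connection

class AttentionFusion(nn.Module):
    """Weights each modality's feature vector before summing them."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats):                     # feats: (batch, n_modalities, dim)
        w = torch.softmax(self.score(feats), 1)   # attention weight per modality
        return (w * feats).sum(dim=1)             # fused representation

class EegFnirsForceDecoder(nn.Module):
    def __init__(self, eeg_ch=32, fnirs_ch=16, dim=64, n_outputs=2):
        super().__init__()
        self.eeg_enc = nn.Sequential(
            nn.Conv1d(eeg_ch, dim, kernel_size=7, padding=3),
            ResidualConvBlock(dim),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fnirs_enc = nn.Sequential(
            nn.Conv1d(fnirs_ch, dim, kernel_size=7, padding=3),
            ResidualConvBlock(dim),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fusion = AttentionFusion(dim)
        self.head = nn.Linear(dim, n_outputs)     # e.g. left/right-hand force

    def forward(self, eeg, fnirs):                # each: (batch, channels, time)
        e = self.eeg_enc(eeg).squeeze(-1)
        f = self.fnirs_enc(fnirs).squeeze(-1)
        fused = self.fusion(torch.stack([e, f], dim=1))
        return self.head(fused)

# Usage: the two streams may have different sampling rates (time lengths),
# since each modality has its own encoder.
out = EegFnirsForceDecoder()(torch.randn(4, 32, 500), torch.randn(4, 16, 50))
```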


Similar articles

Grip force coordination during bimanual tasks in unilateral cerebral palsy.

AIM The aim of the study was to investigate coordination of fingertip forces during an asymmetrical bimanual task in children with unilateral cerebral palsy (CP). METHOD Twelve participants (six males, six females; mean age 14y 4mo, SD 3.3y; range 9-20y) with unilateral CP (eight right-sided, four left-sided) and 15 age-matched typically developing participants (five males, 10 females; mean ...

Full text

Deep learning with convolutional neural networks for EEG decoding and visualization

Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvN...

Full text

Title: A Computing Environment for Multimodal Integration of EEG and fNIRS

Authors: Randall L. Barbour (SUNY Downstate Medical Center, Brooklyn, NY); Arne Ewald (NIRx Medizintechnik GmbH, Berlin, [email protected]); Harry L. Graber (SUNY Downstate Medical Center); J. David Nichols (Source Signal Imaging, Inc., San Diego, CA); Mark E. Pflieger (Source Signal Imaging); Alex Ossadtchi (Source Signal Imaging); Christoph H. Schmitz (NIRx Medizintechnik GmbH); Yong Xu (SUN...

Full text

Multimodal Deep Learning Library

A neural network is a directed graph consisting of multiple layers of neurons, also referred to as units. In general there are no connections between units of the same layer; connections exist only between adjacent layers. The first layer is the input and is referred to as the visible layer v. Above the visible layer there are multiple hidden layers {h1, h2, ..., hn}. And the output o...

Full text
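The entry above describes a standard layered topology: a visible (input) layer v followed by hidden layers h1..hn, with connections only between adjacent layers. A minimal sketch of that structure follows, with arbitrary illustrative sizes rather than anything taken from the library itself.

```python
# Minimal sketch of the layered topology described above: a visible
# (input) layer followed by hidden layers h1..hn, where units connect
# only to adjacent layers. All sizes are arbitrary illustrations.
import torch.nn as nn

def stacked_network(visible_dim, hidden_dims, out_dim):
    dims = [visible_dim] + list(hidden_dims)
    layers = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(d_in, d_out), nn.Sigmoid()]  # adjacent layers only
    layers.append(nn.Linear(dims[-1], out_dim))           # output layer o
    return nn.Sequential(*layers)

# e.g. a visible layer of 784 units, three hidden layers, 10 outputs
net = stacked_network(visible_dim=784, hidden_dims=[512, 256, 128], out_dim=10)
```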

Multimodal Deep Learning

Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstra...

Full text


Journal

Journal title: Journal of Neural Engineering

Year: 2021

ISSN: 1741-2560, 1741-2552

DOI: https://doi.org/10.1088/1741-2552/ac1ab3