MAGI - Mind Augmented Gesture Interaction

The aim of the MAGI project is to study always-available gesture recognition using physiological sensors, in particular sEMG (the electrical activity of muscles) and EEG (the electrical activity of the brain). To make the interaction "always-available", this work rests on three research axes: 1) recognition of subtle (i.e., non-tiring, private, etc.) gestures; 2) gesture segmentation: recognizing when a gesture starts and ends, and deciding whether a gesture is a command directed at the machine or a non-command gesture, such as gesticulation, that the machine should ignore; 3) activity recognition: understanding the user's current activity so that the interface can adapt to their needs.
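To make the segmentation axis concrete, the sketch below shows one of the simplest possible baselines: amplitude-threshold onset/offset detection on a rectified sEMG trace. This is only an illustration, not the project's method; the function name `segment_gestures` and the threshold values are hypothetical, and real systems would use machine learning rather than a fixed threshold to separate command gestures from gesticulation.

```python
def segment_gestures(signal, threshold=0.5, min_len=3):
    """Return (start, end) index pairs where the rectified signal
    stays above `threshold` for at least `min_len` samples.
    Illustrative baseline only; values are arbitrary."""
    segments = []
    start = None
    for i, x in enumerate(signal):
        active = abs(x) > threshold
        if active and start is None:
            start = i  # candidate gesture onset
        elif not active and start is not None:
            if i - start >= min_len:
                segments.append((start, i))  # confirmed gesture
            start = None  # too short: treat as noise
    if start is not None and len(signal) - start >= min_len:
        segments.append((start, len(signal)))
    return segments

# Toy rectified sEMG-like trace: rest, one muscle burst, rest.
trace = [0.1, 0.0, 0.9, 1.2, 1.1, 0.8, 0.1, 0.05]
print(segment_gestures(trace))  # [(2, 6)]
```

A threshold like this only answers "when does activity start and end"; distinguishing command from non-command activity, the harder part of the axis, requires a trained classifier on top of such segments.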


gesture recognition, context awareness, multimodal interaction, human-computer interaction, human-environment interaction, psycho-physiological sensors, electroencephalography, electromyography, machine learning techniques.


Demonstrators following the main axes of the project were developed within the framework of related projects: Virtual Move, GERBIL, MUDACO, Emotiv & Aphrodite. More information on the demonstrators and the related projects can be found on the project website, in the "Demo" and "Related Projects" sections.

Website of the project

Project Information

Ongoing project, PhD thesis, duration 3 years (2009 - 2012).