A multiparadigm approach to integrate gestures and sound in the modeling framework

Vasco Amaral, Antonio Cicchetti, Romuald Deshayes

Research output: Contribution to journal › Article › peer-review


One of the essential means of supporting Human-Machine Interaction is a (software) language, used to input commands and receive the corresponding outputs in a well-defined manner. In the past, language creation and customization were accessible to software developers only. Today, however, as software applications become more ubiquitous, these features tend to be accessible to application users themselves. Nevertheless, current language development techniques are still based on traditional concepts of human-machine interaction, i.e. manipulating text and/or diagrams by means of more or less sophisticated input devices (e.g. mouse and keyboard).

In this paper we propose to enhance the typical approach to language-intensive applications by widening the available human-machine interactions to multiple modalities, including sounds, gestures, and their combination. In particular, we adopt a Multi-Paradigm Modelling approach in which the forms of interaction can be specified by means of appropriate modelling techniques. The aim is to provide more advanced human-machine interaction support for language-intensive applications.
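To make the idea of specifying multimodal interactions as models concrete, the following is a minimal, hypothetical sketch (not the authors' framework): a declarative model that maps combinations of gesture and sound events to language commands, with more specific combinations taking precedence. All class and event names here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """A single input event from one modality."""
    modality: str   # e.g. "gesture" or "sound" (illustrative values)
    name: str       # e.g. "point", "delete"

class MultimodalCommandModel:
    """A tiny declarative 'interaction model': rules map sets of
    modal events to commands, independently of how events are sensed."""

    def __init__(self):
        # Each rule: (frozenset of required events, resulting command).
        self._rules = []

    def add_rule(self, events, command):
        self._rules.append((frozenset(events), command))

    def interpret(self, observed):
        """Return the command of the most specific rule whose required
        events are all present in the observed events, or None."""
        observed = set(observed)
        best = None
        for required, command in self._rules:
            if required <= observed and (best is None or len(required) > len(best[0])):
                best = (required, command)
        return best[1] if best else None

# Usage: pointing alone selects; pointing combined with a spoken
# "delete" triggers deletion, because the fused rule is more specific.
model = MultimodalCommandModel()
model.add_rule([Event("gesture", "point")], "select_element")
model.add_rule([Event("gesture", "point"), Event("sound", "delete")], "delete_element")

print(model.interpret([Event("gesture", "point")]))                           # → select_element
print(model.interpret([Event("gesture", "point"), Event("sound", "delete")])) # → delete_element
```

The design point this sketch illustrates is that the interaction rules are plain data, so they could themselves be produced from a higher-level model rather than hard-coded, in the spirit of Multi-Paradigm Modelling.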

Original language: English
Pages (from-to): 57-66
Number of pages: 10
Publication status: Published - 2013


  • Application programs
  • Man machine systems
