One of the essential means of supporting human-machine interaction is a (software) language, used to input commands and receive the corresponding outputs in a well-defined manner. In the past, language creation and customization were accessible to software developers only, but today, as software applications become more ubiquitous, these features tend to become accessible to application users themselves. However, current language development techniques are still based on traditional concepts of human-machine interaction, i.e., manipulating text and/or diagrams by means of more or less sophisticated input devices (e.g., mouse and keyboard).
In this paper we propose to enhance the typical approach to language-intensive applications by widening the available human-machine interactions to multiple modalities, including sounds, gestures, and their combinations. In particular, we adopt a Multi-Paradigm Modelling approach in which the forms of interaction are specified by means of appropriate modelling techniques. The aim is to provide more advanced human-machine interaction support for language-intensive applications.
Number of pages: 10
Publication status: Published - 2013
- Application programs
- Man-machine systems