Towards a modelling workbench with flexible interaction models for model editors operating through voice and gestures

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

Model-Driven Engineering (MDE) has emerged as a methodology, grounded in theory and tooling, to design and develop software systems with models at their core. MDE makes use of modelling languages, ranging from general-purpose languages like the standard UML to dedicated Domain-Specific Modelling Languages. These languages are supported by modelling editors and typically offer graphical and textual notations as concrete syntax. However, by concentrating on vision, these tools ignore other senses and communication channels like audition (voice and sound) that could be used in industrial settings, for accessibility purposes, or simply as a complement to visual approaches. We are building a modelling workbench platform that, similarly to modelling workbenches dealing with diagrammatic languages, allows a software language engineer to model a domain-specific language and generate a voice/audio editor in which domain end-users can operate on (Create, Read, Update and Delete) and navigate diagrams through speech recognition and voice synthesis tools. One of the problems of editors that use voice is their fixed interaction paradigms, which contribute to poor user experience. In this paper, we propose an interaction mechanism that can recognise vocal and non-vocal sounds as well as gestures, adding two senses to the definition of Domain-Specific Languages’ concrete syntax that are not usually explored in model-driven tools. We built a prototype and carried out a pilot empirical study, with preliminary positive results, to assess the prototype in terms of usability, productivity and learning curve.
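The abstract's core loop — recognised utterances driving Create, Read, Update, Delete and navigation operations on a model — can be illustrated with a minimal sketch. This is not the authors' implementation: the `Model` class, the `handle_utterance` dispatcher, and the command grammar are all hypothetical, and in the actual workbench the utterances would come from a speech recogniser and the responses would be rendered via voice synthesis.

```python
# Illustrative sketch (hypothetical, not from the paper): dispatching
# recognised voice commands to CRUD operations on an in-memory model.
from dataclasses import dataclass, field


@dataclass
class Model:
    """A trivial model: named elements mapped to their properties."""
    elements: dict = field(default_factory=dict)

    def create(self, name):
        self.elements[name] = {}
        return f"created {name}"

    def read(self, name):
        if name not in self.elements:
            return f"{name} not found"
        return f"{name}: {self.elements[name]}"

    def update(self, name, key, value):
        self.elements[name][key] = value
        return f"updated {name}"

    def delete(self, name):
        self.elements.pop(name, None)
        return f"deleted {name}"


def handle_utterance(model, utterance):
    """Map a recognised utterance such as 'create customer' or
    'update customer colour red' to the corresponding CRUD call;
    the returned string would be spoken back to the user."""
    verb, *args = utterance.lower().split()
    handlers = {
        "create": lambda: model.create(args[0]),
        "read":   lambda: model.read(args[0]),
        "update": lambda: model.update(args[0], args[1], args[2]),
        "delete": lambda: model.delete(args[0]),
    }
    if verb not in handlers:
        return f"unrecognised command: {verb}"
    return handlers[verb]()


model = Model()
print(handle_utterance(model, "create customer"))            # created customer
print(handle_utterance(model, "update customer colour red")) # updated customer
print(handle_utterance(model, "read customer"))
```

A generated editor in the proposed workbench would derive the command vocabulary from the DSL's metamodel rather than hard-coding it, and the same dispatch idea extends to the non-vocal sounds and gestures the paper adds as interaction channels.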

Original language: English
Title of host publication: Proceedings - 2021 IEEE 45th Annual Computers, Software, and Applications Conference, COMPSAC 2021
Editors: W. K. Chan, Bill Claycomb, Hiroki Takakura, Ji-Jiang Yang, Yuuichi Teranishi, Dave Towey, Sergio Segura, Hossain Shahriar, Sorel Reisman, Sheikh Iqbal Ahamed
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 1026-1031
Number of pages: 6
ISBN (Electronic): 9781665424639
Publication status: Published - Jul 2021
Event: 45th IEEE Annual Computers, Software, and Applications Conference, COMPSAC 2021 - Virtual, Online, Spain
Duration: 12 Jul 2021 - 16 Jul 2021

Conference

Conference: 45th IEEE Annual Computers, Software, and Applications Conference, COMPSAC 2021
Country/Territory: Spain
City: Virtual, Online
Period: 12/07/21 - 16/07/21

Keywords

  • Interaction model
  • Model-driven software engineering
  • Modelling workbench
  • Sound modelling editors
  • Voice
