Requirements are typically modelled using graphical notations with the aid of CASE tools. However, these tools restrict the activity to requirements engineers and stakeholders without disabilities: professionals with disabilities often face accessibility barriers when using such tools to build requirements models. Accessibility problems can also affect stakeholders in general in contexts where voice input is more convenient (e.g., on public transport). In this paper, the VoiceToModel framework is proposed to improve the accessibility of the requirements process by fully involving requirements engineers and stakeholders with disabilities in requirements modelling. Through speech recognition, the engineer can voice the requirements, which are automatically used to generate requirements models, specifically goal-oriented models, object models and feature models. The framework first defines a grammar containing the commands to create and modify a model; each command returns feedback via voice. We show the applicability of the framework through an example and discuss the results of an experiment with 14 participants, including two blind people.
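As a rough illustration of the command-grammar idea described in the abstract (the patterns, element names and feedback strings below are assumptions for the sketch, not the paper's actual grammar), a recognized utterance can be matched against command patterns, applied to the model under construction, and answered with a feedback string to be spoken back:

```python
import re

# Hypothetical command patterns for building a goal-oriented model.
COMMANDS = {
    r"create goal (?P<name>\w+)": "goal",
    r"create actor (?P<name>\w+)": "actor",
}

def handle_command(utterance, model):
    """Match a recognized utterance against the grammar and update the model."""
    for pattern, kind in COMMANDS.items():
        m = re.fullmatch(pattern, utterance.strip().lower())
        if m:
            name = m.group("name")
            model.setdefault(kind, []).append(name)
            return f"{kind} {name} created"  # feedback to be spoken back
    return "command not recognized"

model = {}
print(handle_command("create goal security", model))  # goal security created
print(model)  # {'goal': ['security']}
```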
Title of host publication: 30TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, VOLS I AND II
Number of pages: 8
Publication status: Published - Apr 2015
Event: 30th Annual ACM Symposium on Applied Computing, SAC 2015 - Salamanca, Spain
Duration: 13 Apr 2015 → 17 Apr 2015
Conference: 30th Annual ACM Symposium on Applied Computing, SAC 2015
Period: 13/04/15 → 17/04/15