Annotating human body movements in video recordings is at the core of contemporary gesture research, allowing scientists to process video data following customized annotation schemes tailored to the research questions at hand. With more and more gesture researchers focusing on formal aspects of human movements, the starting point of quali-quantitative analyses is the transcription of the movements using specialized software. Notwithstanding advances in data visualization, visualizing processed data (annotations) in Gesture Studies is currently limited to tables and graphs, which present the data in quantitative and temporal terms for further analyses. Alternative ways of visualizing the data could promote alternative ways of reasoning about the research questions (Tversky 2011). This paper intends to evidence the current void in gesture research tools and to present an option for how gesture scholars can visualize their processed data in a more "user-friendly" way. Recent efforts to incorporate the advantages of 3D, coupled with new visualization techniques, afford new methods to both annotate and analyze body movements using learning algorithms (e.g. Deep 2015) to model virtual characters' behaviors based on video corpora annotated in software such as ELAN (Brugman & Russel 2004) and ANVIL (Kipp 2012). These advances are nonetheless underdeveloped in the area of Gesture Studies and could provide interesting insights both into human and virtual character interaction and into semi-automatic ways of annotating and validating video data (Velloso, Bulling, & Gellersen 2013). We present an example of usage, based on data from [AUTHORS] (2015), in which a multiparty scene is transposed from 2D video data into a modeled 3D environment. Avatars represent the participants, and their body parts are labeled according to the formal annotation scheme used (left hand, right arm, torso, etc.).
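To make the transposition concrete, the following is a minimal sketch, not part of the prototype itself, of how ELAN-style interval annotations might be represented and queried to decide which labeled body parts of an avatar are active at a given moment. The `Annotation` class, field names, and the example intervals are all hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """One annotated movement interval, as exported from an ELAN tier."""
    articulator: str   # tier label, e.g. "left hand", "right arm", "torso"
    start_ms: int      # interval onset in milliseconds
    end_ms: int        # interval offset in milliseconds

def active_articulators(annotations, t_ms):
    """Return the set of articulators whose annotated movement spans time t_ms."""
    return {a.articulator for a in annotations
            if a.start_ms <= t_ms < a.end_ms}

# Hypothetical annotations for one participant
annots = [
    Annotation("left hand", 0, 1200),
    Annotation("torso", 800, 2000),
]
active_articulators(annots, 1000)  # both intervals overlap t = 1000 ms
```

At playback time, a query like this per video frame would tell the 3D scene which avatar body parts to highlight, recreating the annotated activation without the visual noise of the original recording.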
Movements of the various articulators of each participant, as annotated in ELAN, are programmed so that their activation is evidenced in the 2D/3D representation of the participants' annotations. This recreates the scene of interest, allowing a more schematic visualization than the original video recording by isolating and foregrounding only the focal elements and eliminating visual "noise". Moreover, gaze annotations are visualized: unlike in the video, where gaze can be tracked for only one participant at a time, the tool displays multiparty gaze annotations synoptically as vectors, allowing the researcher to track the group's gaze points simultaneously. As a computational model of the annotations, statistical reports will also be available and may contribute to reducing incoherencies between human raters, and thus to higher inter-rater agreement and data reliability. This work-in-progress, proof-of-concept prototype is intended to be made available to researchers interested in visualizing formal gesture annotations with minimal setup for their own quali-quantitative research on formal aspects of body movements.
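One standard statistic such reports could include is Cohen's kappa, which measures agreement between two raters' categorical annotations while correcting for chance. Below is a minimal sketch of the computation; the abstract does not specify which agreement measure the prototype uses, so this is only an illustration, and the example labels are invented.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters' categorical labels over the same segments.

    kappa = (p_observed - p_expected) / (1 - p_expected)
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: proportion of segments where the raters coincide
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence of the two raters
    categories = set(labels_a) | set(labels_b)
    p_exp = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                for c in categories)
    if p_exp == 1.0:          # degenerate case: chance agreement is total
        return 1.0
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical segment labels from two raters
rater_1 = ["gesture", "rest", "gesture", "rest"]
rater_2 = ["gesture", "rest", "rest", "rest"]
cohens_kappa(rater_1, rater_2)  # 0.5: moderate agreement beyond chance
```

Surfacing such a figure directly in the visualization tool would let researchers spot tiers where raters diverge before committing to quantitative analyses.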
Number of pages: 1
Publication status: Published - 2016
Event: The 7th Conference of the International Society for Gesture Studies: Gesture – Creativity – Multimodality, Paris, France
Duration: 18 Jul 2016 → 22 Jul 2016
- gesture research software
- visualization techniques
- 3D annotation
- data visualization tool
- Unity 3D
Ribeiro, C., Evola, V., Skubisz, J., & Anjos, R. K. D. (2016). Transposing Formal Annotations of 2D Video Data into a 3D Environment for Gesture Research. 44. Abstract from The 7th Conference of the International Society for Gesture Studies: Gesture – Creativity – Multimodality, Paris, France.