TY - GEN
T1 - Studying natural user interfaces for smart video annotation towards ubiquitous environments
AU - Rodrigues, Rui
AU - Neves Madeira, Rui
AU - Correia, Nuno
N1 - Funding Information:
info:eu-repo/grantAgreement/FCT/OE/2020.09417.BD/PT#
This work was supported by the project WEAVE, Grant Agreement Number INEA/CEF/ICT/A2020/2288018, and the project CultureMoves, Grant Agreement Number INEA/CEF/ICT/A2017/1568369. It was also supported by the NOVA LINCS Research Center, partially funded by project UID/CEC/04516/2020 granted by Fundação para a Ciência e Tecnologia (FCT).
Publisher Copyright:
© 2021 ACM.
PY - 2021/12/5
Y1 - 2021/12/5
N2 - Creativity and inspiration for problem-solving are critical skills in a group-based learning environment. Communication practices have been continuously adjusted over the years, with increasing use of multimedia elements such as videos to achieve greater audience impact. Annotations are a valuable means of remembering, reflecting on, reasoning about, and sharing thoughts on the learning process. However, it is hard to control playback and add notes during video presentations, such as in a classroom context. Teachers often need to move around the classroom to interact with the students, which leads to situations where they are physically far from the computer. Therefore, we developed a multimodal web video annotation tool that combines a voice interaction module with manual annotation capabilities for more intelligent, natural interactions towards ubiquitous environments. We observed current video annotation practices and created a new set of principles to guide our research work. Natural language enables users to express their intended actions while interacting with the web video player for annotation purposes. By studying and integrating new artificial intelligence techniques, we developed a customized set of natural language expressions that map user speech to specific software operations. Finally, the paper presents positive results gathered from a user study conducted to evaluate our solution.
AB - Creativity and inspiration for problem-solving are critical skills in a group-based learning environment. Communication practices have been continuously adjusted over the years, with increasing use of multimedia elements such as videos to achieve greater audience impact. Annotations are a valuable means of remembering, reflecting on, reasoning about, and sharing thoughts on the learning process. However, it is hard to control playback and add notes during video presentations, such as in a classroom context. Teachers often need to move around the classroom to interact with the students, which leads to situations where they are physically far from the computer. Therefore, we developed a multimodal web video annotation tool that combines a voice interaction module with manual annotation capabilities for more intelligent, natural interactions towards ubiquitous environments. We observed current video annotation practices and created a new set of principles to guide our research work. Natural language enables users to express their intended actions while interacting with the web video player for annotation purposes. By studying and integrating new artificial intelligence techniques, we developed a customized set of natural language expressions that map user speech to specific software operations. Finally, the paper presents positive results gathered from a user study conducted to evaluate our solution.
KW - AI-Based Tools
KW - HCI in Ubiquitous Environments
KW - Multimodal Interfaces
KW - Natural Language Processing
KW - Speech Interfaces
KW - Video Annotation
UR - http://www.scopus.com/inward/record.url?scp=85125834026&partnerID=8YFLogxK
U2 - 10.1145/3490632.3490672
DO - 10.1145/3490632.3490672
M3 - Conference contribution
AN - SCOPUS:85125834026
T3 - ACM International Conference Proceeding Series
SP - 158
EP - 168
BT - Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia, MUM 2021
PB - ACM - Association for Computing Machinery
T2 - 20th International Conference on Mobile and Ubiquitous Multimedia, MUM 2021
Y2 - 5 December 2021 through 8 December 2021
ER -