This paper investigates an approach for fast hybrid human and machine video subtitling based on lattice disambiguation and posterior model adaptation. The approach aims to correct Automatic Speech Recognition (ASR) transcriptions while requiring minimal effort from the user, enabling corrections to be made even from smartphone devices. Our approach is based on three key concepts. Firstly, only a portion of the data is sent to the user for correction. Secondly, user action is limited to selecting from a fixed set of options extracted from the ASR word lattice. Thirdly, user feedback is used to update the ASR parameters and further enhance performance. To investigate the potential and limitations of this approach, we carry out experiments employing simulated and real user corrections of TED talk videos. Simulated corrections include both the true reference and the best combination of the options shown to the user. Real corrections are obtained from 30 editors through a special-purpose web interface displaying the options for small video segments. We analyze the fixed-option approach and the trade-off between model adaptation and increasing the amount of corrected data.
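The second concept above, restricting the user to a fixed set of options per segment, can be illustrated with a minimal sketch. The following Python snippet is a hypothetical illustration, not the paper's actual implementation: it assumes the word lattice has already been collapsed into confusion-network slots, each mapping candidate words to posterior probabilities, and simply keeps the k most probable words per slot as the options shown to the user.

```python
# Hypothetical sketch of the fixed-option idea. The data layout
# (confusion-network slots as dicts of word -> posterior) and the
# function name are illustrative assumptions, not the paper's code.

def fixed_options(slot_posteriors, k=4):
    """For each confusion-network slot, keep the k most probable
    words as the fixed set of options shown to the user."""
    options = []
    for posteriors in slot_posteriors:
        ranked = sorted(posteriors.items(), key=lambda kv: kv[1], reverse=True)
        options.append([word for word, _ in ranked[:k]])
    return options

# Toy example: two slots with word posteriors taken from the lattice.
slots = [
    {"speech": 0.6, "speed": 0.3, "beach": 0.1},
    {"recognition": 0.8, "recondition": 0.2},
]
print(fixed_options(slots, k=2))
```

With k=2 the user would choose between "speech" and "speed" for the first slot and between "recognition" and "recondition" for the second, which keeps each correction a single tap on a small screen.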
| Name | Lecture Notes in Computer Science |
| Conference | International Conference on Advances in Speech and Language Technologies for Iberian Languages |
| Abbreviated title | IberSPEECH 2016 |
| Period | 23/11/16 → 25/11/16 |
- Automatic Speech Recognition
- Error Analysis