Abstract
Multimedia information has strong temporal correlations that shape the way modalities co-occur over time. In this paper we study the dynamic nature of multimedia and social-media information, where the temporal dimension emerges as a strong source of evidence for learning correlations across visual and textual modalities. So far, cross-media retrieval models have explored the correlations between different modalities (e.g. text and image) to learn a common subspace in which semantically similar instances lie in the same neighbourhood. Building on this insight, we propose a novel temporal cross-media neural architecture that departs from standard cross-media methods by explicitly accounting for the temporal dimension through temporal subspace learning. The model is softly constrained with temporal and inter-modality constraints that guide the subspace learning task by favouring temporal correlations between semantically similar and temporally close instances. Experiments on three distinct datasets show that accounting for time is important for cross-media retrieval: the proposed method outperforms a set of baselines on the task of temporal cross-media retrieval, demonstrating its effectiveness for temporal subspace learning.
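The idea of softly constraining a shared subspace with inter-modality and temporal terms can be illustrated with a minimal sketch. This is not the paper's exact formulation: the linear projections, squared-distance alignment term, and Gaussian temporal weighting below are illustrative assumptions, chosen only to show how an alignment loss and a temporal-smoothing penalty combine into one objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(X, W):
    """Linearly project features into the shared subspace and L2-normalise rows."""
    Z = X @ W
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def temporal_cross_media_loss(Zv, Zt, timestamps, alpha=0.5, sigma=1.0):
    """Inter-modality alignment plus a soft temporal-smoothing penalty.

    Zv, Zt    : (n, d) projected visual / textual embeddings (rows are paired)
    timestamps: (n,) acquisition times; temporally close instances are
                encouraged to stay close in the subspace.
    alpha     : weight of the temporal term (hypothetical hyper-parameter).
    """
    # Inter-modality term: paired image/text embeddings should coincide.
    inter = np.mean(np.sum((Zv - Zt) ** 2, axis=1))

    # Temporal term: Gaussian weights give small time gaps large influence,
    # so temporally close visual embeddings are pulled together.
    dt = timestamps[:, None] - timestamps[None, :]
    w = np.exp(-(dt ** 2) / (2 * sigma ** 2))
    d2 = np.sum((Zv[:, None, :] - Zv[None, :, :]) ** 2, axis=2)
    temporal = np.sum(w * d2) / np.sum(w)

    return inter + alpha * temporal

# Toy data: 8 paired image/text feature vectors with timestamps.
Xv = rng.normal(size=(8, 16))   # visual features
Xt = rng.normal(size=(8, 32))   # textual features
t = np.sort(rng.uniform(0, 10, size=8))
Wv = rng.normal(size=(16, 4))   # projection into a 4-d shared subspace
Wt = rng.normal(size=(32, 4))

loss = temporal_cross_media_loss(project(Xv, Wv), project(Xt, Wt), t)
print(round(loss, 4))
```

In a trained model the projections would be learned by minimising this objective; here random projections merely demonstrate that the two soft constraints produce a single scalar loss.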
Original language | English
---|---
Title of host publication | MM 2018 - Proceedings of the 2018 ACM Multimedia Conference
Publisher | ACM - Association for Computing Machinery
Pages | 1038-1046
Number of pages | 9
ISBN (Electronic) | 9781450356657
DOIs |
Publication status | Published - 15 Oct 2018
Event | 26th ACM Multimedia conference, MM 2018 - Seoul, Korea, Republic of. Duration: 22 Oct 2018 → 26 Oct 2018
Conference
Conference | 26th ACM Multimedia conference, MM 2018
---|---
Country/Territory | Korea, Republic of
City | Seoul
Period | 22/10/18 → 26/10/18
Keywords
- Cross-media
- Multimedia retrieval
- Temporal cross-media
- Temporal smoothing