Video Semantics Quality Assessment Using Deep Learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This work proposes a method to assess the quality of user-generated videos (UGVs) of specific social events. The method matches the semantic information extracted from the videos against the information obtained from text news covering the same event. Deep learning techniques are used to detect objects in the video scenes, while each news article is represented by a set of relevant terms automatically extracted from its text. The paper describes the method and presents an evaluation of it.
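
The abstract outlines the pipeline at a high level without implementation detail. The sketch below illustrates the general idea only, under several assumptions that are not from the paper: torchvision's pretrained Faster R-CNN as the object detector, scikit-learn TF-IDF for extracting news terms, fixed-interval frame sampling, and a simple Jaccard overlap as the matching score. All function names and thresholds are illustrative.

```python
# Illustrative sketch of the pipeline the abstract describes; NOT the authors' code.
# Assumed components: torchvision Faster R-CNN (COCO), scikit-learn TF-IDF,
# OpenCV frame sampling, and a Jaccard overlap as the quality score.
import cv2
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)
from sklearn.feature_extraction.text import TfidfVectorizer

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]  # COCO class names

def video_object_terms(path, every_n=30, score_thr=0.7):
    """Detect objects in every n-th frame; return the set of detected class names."""
    terms, frame_idx = set(), 0
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            img = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
            with torch.no_grad():
                out = detector([img])[0]
            terms |= {
                categories[int(lbl)]
                for lbl, score in zip(out["labels"], out["scores"])
                if score >= score_thr
            }
        frame_idx += 1
    cap.release()
    return terms

def news_terms(articles, top_k=20):
    """Represent the event's news coverage by its top-k TF-IDF terms."""
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(articles)  # articles: list of article texts
    totals = tfidf.sum(axis=0).A1
    vocab = vec.get_feature_names_out()
    return {vocab[i] for i in totals.argsort()[::-1][:top_k]}

def semantic_quality(video_path, articles):
    """Jaccard overlap between video object labels and news terms (assumed metric)."""
    v, n = video_object_terms(video_path), news_terms(articles)
    return len(v & n) / len(v | n) if v | n else 0.0
```

The actual paper may use different detectors, term weighting, and a different quality metric; the sketch only mirrors the structure the abstract describes (detect objects, extract news terms, match the two sets).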

Original language: English
Title of host publication: Intelligent Data Engineering and Automated Learning – IDEAL 2020 - 21st International Conference, 2020, Proceedings
Editors: Cesar Analide, Paulo Novais, David Camacho, Hujun Yin
Place of Publication: Cham
Publisher: Springer
Pages: 165-172
Number of pages: 8
ISBN (Electronic): 978-3-030-62365-4
ISBN (Print): 978-3-030-62364-7
DOIs
Publication status: Published - 2020
Event: 21st International Conference on Intelligent Data Engineering and Automated Learning, IDEAL 2020 - Guimaraes, Portugal
Duration: 4 Nov 2020 – 6 Nov 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer
Volume: 12490 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 21st International Conference on Intelligent Data Engineering and Automated Learning, IDEAL 2020
Country/Territory: Portugal
City: Guimaraes
Period: 4/11/20 – 6/11/20

Keywords

  • Semantic deep learning
  • User generated content
  • Video quality assessment
