Automatic organisation, segmentation, and filtering of user-generated audio content

Gonçalo Mordido, João Magalhães, Sofia Cavaco

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)

Abstract

Using solely the information retrieved by audio fingerprinting techniques, we propose methods to process a possibly large dataset of user-generated audio content that (1) group audio files containing a common audio excerpt (i.e. files relating to the same event), and (2) describe how the files within each event are correlated in terms of time and quality. Furthermore, we use supervised learning to detect incorrect matches that may arise from the audio fingerprinting algorithm itself, whilst ensuring our model learns from previous predictions. All the presented methods were validated on user-generated recordings of several different concerts manually crawled from YouTube.
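
As a rough illustration of the grouping step described in the abstract, the sketch below clusters recordings into events from pairwise fingerprint matches using a union-find structure. The `matches` input (file pairs with a relative time offset) and all file names are hypothetical; this is a minimal sketch assuming such match data is available, not the paper's actual implementation.

```python
# Illustrative sketch (not the paper's implementation): cluster user-generated
# recordings into "events" from pairwise audio-fingerprint matches.
# Each match (a, b, offset) is assumed to mean "files a and b share a common
# excerpt, with b starting `offset` seconds after a".

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra


def group_into_events(matches):
    """Group file ids into events (connected components of the match graph)."""
    uf = UnionFind()
    for a, b, _offset in matches:
        uf.union(a, b)
    events = {}
    for a, b, _offset in matches:
        events.setdefault(uf.find(a), set()).update((a, b))
    return list(events.values())


if __name__ == "__main__":
    # Hypothetical fingerprint matches: (file_a, file_b, relative offset in seconds)
    matches = [
        ("clip1.mp3", "clip2.mp3", 12.4),
        ("clip2.mp3", "clip3.mp3", -3.1),
        ("clip7.mp3", "clip9.mp3", 0.0),
    ]
    for event in group_into_events(matches):
        print(sorted(event))
```

The relative offsets carried by each match could then be chained within an event to place every recording on a common timeline, which is the kind of time correlation the paper derives per event.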

Original language: English
Title of host publication: 2017 IEEE 19th International Workshop on Multimedia Signal Processing, MMSP 2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 1-6
Number of pages: 6
Volume: 2017-January
ISBN (Electronic): 9781509036493
DOIs
Publication status: Published - 27 Nov 2017
Event: 19th IEEE International Workshop on Multimedia Signal Processing, MMSP 2017 - Luton, United Kingdom
Duration: 16 Oct 2017 - 18 Oct 2017

Conference

Conference: 19th IEEE International Workshop on Multimedia Signal Processing, MMSP 2017
Country/Territory: United Kingdom
City: Luton
Period: 16/10/17 - 18/10/17

Keywords

  • Audio fingerprinting
  • Audio synchronisation
  • Supervised learning
  • User-generated content
