Visual analytics for spatiotemporal events

Ricardo Almeida Silva, João Moura Pires, Nuno Datia, Maribel Yasmina Santos, Bruno Martins, Fernando Birra

Research output: Contribution to journal › Article › peer-review


Abstract

Crimes, forest fires, accidents, infectious diseases, and human interactions with mobile devices (e.g., tweets) are being logged as spatiotemporal events. For each event, its geographic location, time, and related attributes are known with high levels of detail (LoDs). The LoD plays a crucial role when analyzing data, as it can highlight useful patterns or insights and enhance the user's perception of a phenomenon. For this reason, modeling phenomena at different LoDs is needed to increase the analytical value of the data, as there is no single LoD at which the data should be analyzed. Current practices work mainly on a single LoD of the phenomenon, driven by the analyst's perception, ignoring that identifying the suitable LoDs is a key issue for revealing relevant patterns. This article presents a Visual Analytics approach called VAST that allows users to simultaneously inspect a phenomenon at different LoDs, helping them to see at which LoDs interesting patterns emerge, or at which LoDs the perception of the phenomenon differs. In this way, the analysis of vast amounts of spatiotemporal events is assisted, guiding the user through the process. Several synthetic and real datasets supported the evaluation and validation of VAST, which suggested LoDs with different interesting spatiotemporal patterns and indicated the type of patterns to expect.
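To illustrate the multi-LoD idea in the abstract (this is a minimal sketch, not the method implemented by VAST), the Python snippet below aggregates a toy set of spatiotemporal events at several combinations of spatial and temporal LoDs. The grid cell sizes and time-bin formats are hypothetical choices made for this example.

```python
# Illustrative sketch only: counting spatiotemporal events at multiple
# levels of detail (LoDs). The LoD definitions below (grid cell sizes in
# degrees, strftime time-bin formats) are assumptions for this example,
# not the LoDs used by VAST.
from collections import Counter
from datetime import datetime

# Toy events: (latitude, longitude, timestamp)
events = [
    (38.72, -9.14, datetime(2019, 7, 1, 9, 30)),
    (38.74, -9.15, datetime(2019, 7, 1, 21, 10)),
    (41.15, -8.61, datetime(2019, 7, 2, 9, 45)),
]

# Spatial LoDs as grid cell sizes (degrees) and temporal LoDs as
# time-bin formats, from coarse to fine.
SPATIAL_LODS = {"coarse": 1.0, "medium": 0.1, "fine": 0.01}
TEMPORAL_LODS = {"day": "%Y-%m-%d", "hour": "%Y-%m-%d %H:00"}

def spatial_key(lat, lon, cell_size):
    """Snap a point to the lower-left corner of its grid cell."""
    return (round(lat // cell_size * cell_size, 6),
            round(lon // cell_size * cell_size, 6))

def aggregate(events, cell_size, time_fmt):
    """Count events per (grid cell, time bin) at one combined LoD."""
    counts = Counter()
    for lat, lon, ts in events:
        counts[(spatial_key(lat, lon, cell_size), ts.strftime(time_fmt))] += 1
    return counts

# Inspect the same events at every combination of LoDs: a pattern that
# is invisible at one LoD may stand out at another.
for s_name, cell in SPATIAL_LODS.items():
    for t_name, fmt in TEMPORAL_LODS.items():
        print(f"spatial={s_name}, temporal={t_name}:")
        for (cell_corner, time_bin), n in aggregate(events, cell, fmt).items():
            print(f"  cell={cell_corner}, bin={time_bin}: {n} event(s)")
```

Running the sketch shows how the first two events merge into one hotspot at the coarse daily LoD but separate at the hourly LoD, which is the kind of LoD-dependent difference in perception the abstract describes.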

Original language: English
Pages (from-to): 32805-32847
Number of pages: 43
Journal: Multimedia Tools and Applications
Volume: 78
Issue number: 23
DOIs
Publication status: Published - 1 Dec 2019

Keywords

  • Data visualization
  • Multiple levels of detail
  • Spatiotemporal patterns
  • Visual analytics
