Author(s): Pinheiro, Giuliano; Cirne, Marcos; Bestagini, Paolo; Tubaro, Stefano; Rocha, Anderson; IEEE
Total Authors: 6
Document type: Journal article
Source: 2019 IEEE International Conference on Image Processing (ICIP); 5 pp., 2019-01-01.
Abstract:
With an ever-growing number of unexpected menaces in crowded places, such as terrorist attacks, it is paramount to develop techniques that aid investigators in reconstructing all details of an event of interest. To extract reliable information about the event, all available clues must be jointly exploited. Today's sources of information are plentiful and varied, as important events affecting many people are typically documented by multiple sources: both witnesses' smartphones and security cameras can provide valuable information from multiple viewpoints and time instants - "the eyes of the crowd". In this paper, we focus on the specific problem of automatically detecting and temporally synchronizing videos depicting the same event of interest. Videos can be either near-duplicates (i.e., edited copies of the same original source) or sequences shot by different users from different vantage points. The proposed method relies on a video fingerprinting technique capable of describing how video semantic content evolves over time. The solution assumes no a priori information about camera locations and exploits only visual cues, not relying on audio channels. (AU)
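The synchronization step described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual fingerprint or algorithm (which the abstract does not specify); it only assumes each video has been reduced to a 1-D sequence of per-frame fingerprint values, and estimates the relative time offset between two such sequences by normalized cross-correlation.

```python
import numpy as np

def estimate_offset(fp_a, fp_b):
    """Estimate the temporal offset (in frames) of fp_b relative to fp_a
    via cross-correlation of normalized fingerprint sequences.

    fp_a, fp_b: 1-D arrays of per-frame scalar fingerprints. This is an
    illustrative stand-in; the paper's fingerprint is a richer semantic
    descriptor of how video content evolves over time.
    """
    # Zero-mean, unit-variance normalization so correlation peaks reflect
    # shape similarity rather than raw magnitude.
    a = (fp_a - fp_a.mean()) / (fp_a.std() + 1e-12)
    b = (fp_b - fp_b.mean()) / (fp_b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full")
    # Convert the peak's index in the 'full' correlation to a signed lag:
    # lag L means b[n] best matches a[n + L], i.e. b starts L frames into a.
    return int(np.argmax(corr)) - (len(b) - 1)

# Toy check: b is a clip of a starting 5 frames later.
rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = a[5:155]
offset = estimate_offset(a, b)
```

With an exact subsequence, the correlation peak falls at the true lag, so `offset` recovers the 5-frame delay; on real videos the fingerprints would only be approximately similar, and the peak height could also serve as a detection score for whether two videos depict the same event.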
FAPESP's process: 17/12646-3 - Déjà vu: feature-space-time coherence from heterogeneous data for media integrity analytics and interpretation of events
Grantee: Anderson de Rezende Rocha
Support Opportunities: Research Projects - Thematic Grants