Author(s): | Pinheiro, Giuliano; Cirne, Marcos; Bestagini, Paolo; Tubaro, Stefano; Rocha, Anderson; IEEE |
Total number of authors: 6 |
Document type: | Scientific article |
Source: | 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP); v. N/A, 5 pp., 2019-01-01. |
Abstract | |
With an ever-growing number of unexpected menaces in crowded places, such as terrorist attacks, it is paramount to develop techniques that aid investigators in reconstructing all the details of an event of interest. To extract reliable information about the event, all kinds of available clues must be jointly exploited. As a matter of fact, today's sources of information are plentiful and varied, as important events affecting many people are typically documented by different sources. Both witnesses' smartphones and security cameras can provide valuable information from multiple viewpoints and time instants - "the eyes of the crowd". In this paper, we focus on the specific problem of automatically detecting and temporally synchronizing videos depicting the same event of interest. Videos can be either near-duplicates (i.e., edited copies of the same original source) or sequences shot by different users from different vantage points. The proposed method relies upon a video fingerprinting technique capable of describing how video semantic content evolves in time. The solution does not assume a priori information about camera locations, and it exploits only visual cues, not relying on audio channels. (AU) | |
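The abstract describes temporally synchronizing two videos by comparing fingerprint signals that track how content evolves over time. As a hedged illustration (not the authors' actual fingerprinting method, whose details are in the full paper), the alignment step can be sketched as finding the lag that maximizes the normalized cross-correlation between two per-frame signals; the signals below are synthetic stand-ins for real video fingerprints:

```python
import numpy as np

def estimate_offset(sig_a, sig_b):
    """Estimate the temporal offset (in frames) that best aligns
    sig_b to sig_a via normalized cross-correlation."""
    # Zero-mean, unit-variance normalization makes the peak comparable
    # across signals with different scales.
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-9)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-9)
    corr = np.correlate(a, b, mode="full")
    # Output index k of 'full' mode corresponds to lag k - (len(b) - 1).
    return int(np.argmax(corr)) - (len(b) - 1)

rng = np.random.default_rng(0)
signal = rng.random(200)       # stand-in for a per-frame fingerprint signal
shifted = signal[30:180]       # same event, second camera starts 30 frames later
print(estimate_offset(signal, shifted))  # → 30
```

In a real pipeline the 1-D signals would be derived from frame-level descriptors rather than random numbers, but the lag-search principle is the same.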
FAPESP process: | 17/12646-3 - Déjà Vu: temporal, spatial, and characterization coherence of heterogeneous data for integrity analysis and interpretation
Grantee: | Anderson de Rezende Rocha
Support type: | Research Grants - Thematic Grants