(Reference obtained automatically from the Web of Science, via the information on FAPESP funding and the corresponding grant number included in the publication by the authors.)

Transformation-Aware Embeddings for Image Provenance

Author(s):
Bharati, Aparna [1, 2] ; Moreira, Daniel [1] ; Flynn, Patrick J. [1] ; Rocha, Anderson de Rezende [3] ; Bowyer, Kevin W. [1] ; Scheirer, Walter J. [1]
Total number of authors: 6
Author affiliation(s):
[1] Univ Notre Dame, Dept Comp Sci & Engn, Notre Dame, IN 46556 - USA
[2] Lehigh Univ, Dept Comp Sci & Engn, Bethlehem, PA 18015 - USA
[3] Univ Estadual Campinas, Inst Comp, BR-13083852 Campinas, SP - Brazil
Total number of affiliations: 3
Document type: Scientific article
Source: IEEE Transactions on Information Forensics and Security; v. 16, p. 2493-2507, 2021.
Web of Science citations: 0
Abstract

A dramatic rise in the flow of manipulated image content on the Internet has prompted a rapid response from the media forensics research community. New mitigation efforts leverage cutting-edge data-driven strategies and increasingly draw on computer vision and machine learning techniques to detect and profile the space of image manipulations. This paper addresses Image Provenance Analysis, which aims at discovering relationships among different manipulated image versions that share content. One important task in provenance analysis, as in most visual understanding problems, is establishing a visual description and dissimilarity computation method that connects images sharing full or partial content. However, existing handcrafted or learned descriptors - generally appropriate for tasks such as object recognition - may not sufficiently encode the subtle differences between near-duplicate image variants, which significantly characterize the provenance of any image. This paper introduces a novel data-driven learning-based approach that provides the context for ordering images that have been generated from a single image source through various transformations. Our approach learns transformation-aware embeddings using weak supervision via composited transformations and a rank-based Edit Sequence Loss. To establish the effectiveness of the proposed approach, comparisons are made with state-of-the-art handcrafted and deep-learning-based descriptors, as well as image matching approaches. Further experimentation validates the proposed approach in the context of image provenance analysis, showing that it improves upon existing approaches. (AU)
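The abstract describes a rank-based Edit Sequence Loss over variants produced by composited transformations: an image with fewer accumulated edits should embed closer to the original than one with more. The paper's exact formulation is not given here, so the following is only an illustrative sketch of one plausible hinge-based ranking loss; the function name, margin value, and Euclidean distance choice are assumptions, not the authors' definition.

```python
import numpy as np

def edit_sequence_loss(anchor, variants, margin=0.2):
    """Illustrative rank-based loss (hypothetical formulation).

    anchor   -- embedding of the source image
    variants -- embeddings of edited versions, ordered so that
                variants[i] has fewer composited edits than variants[i+1]
    Penalizes any case where a more-edited variant embeds closer
    to the anchor than a less-edited one, by a hinge on the margin.
    """
    d = [np.linalg.norm(anchor - v) for v in variants]
    return sum(max(0.0, margin + d[i] - d[i + 1]) for i in range(len(d) - 1))
```

With embeddings already ordered by edit distance the hinge terms vanish and the loss is zero; reversing the order yields a positive penalty, which is the gradient signal a training loop would exploit.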

FAPESP Grant: 17/12646-3 - Déjà vu: temporal, spatial, and characterization coherence of heterogeneous data for integrity analysis and interpretation
Grantee: Anderson de Rezende Rocha
Support type: Research Grant - Thematic Project