(Reference obtained automatically from the Web of Science through the information on FAPESP funding and the corresponding grant number included in the publication by the authors.)

Exposing computer generated images by using deep convolutional neural networks

Author(s):
de Rezende, Edmar R. S. [1] ; Ruppert, Guilherme C. S. [1] ; Theophilo, Antonio [1] ; Tokuda, Eric K. [2] ; Carvalho, Tiago [3]
Total Authors: 5
Author affiliation:
[1] CTI Renato Archer, BR-13069901 Campinas, SP - Brazil
[2] Univ Sao Paulo, BR-05008090 Sao Paulo, SP - Brazil
[3] Fed Inst Sao Paulo, BR-13069901 Campinas, SP - Brazil
Total Affiliations: 3
Document type: Scientific article
Source: SIGNAL PROCESSING-IMAGE COMMUNICATION; v. 66, p. 113-126, AUG 2018.
Web of Science Citations: 2
Abstract

Recent developments in computer graphics have raised the quality of generated digital content, astonishing even the most skeptical viewer. Games and movies have taken advantage of this fact but, at the same time, these advances have brought serious negative impacts, such as fake images produced with malicious intent. Digital artists can compose artificial images capable of deceiving the great majority of people, turning this into a very dangerous weapon in a timespan currently known as the "Fake News/Post-Truth" Era. In this work, we propose a new approach to the problem of detecting computer generated images, through the application of deep convolutional networks and transfer learning techniques. We start from Residual Networks and develop different models adapted to the binary problem of identifying whether or not an image was computer generated. Unlike current state-of-the-art approaches, we do not rely on hand-crafted features but provide the model with the raw pixel information, achieving the same 0.97 performance of state-of-the-art methods with three main advantages: (i) it executes considerably faster than state-of-the-art methods with equivalent accuracy; (ii) it eliminates the laborious and manual step of specialized feature extraction and selection; and (iii) it is very robust against image processing operations such as noise addition, blur and JPEG compression. (AU)

FAPESP's process: 17/12646-3 - Déjà vu: temporal, spatial and characterization coherence of heterogeneous data for integrity analysis and interpretation
Grantee: Anderson de Rezende Rocha
Support type: Research Grants - Thematic
FAPESP's process: 17/12631-6 - Developing and disseminating methods and tools for forensic analysis of digital documents
Grantee: Tiago Jose de Carvalho
Support type: Research Grants - Regular