(Reference retrieved automatically from Web of Science through information on FAPESP grant and its corresponding number as mentioned in the publication by the authors.)

Learning graph representation with Randomized Neural Network for dynamic texture classification

Author(s):
Ribas, Lucas C. [1, 2, 3] ; de Mesquita Sa Junior, Jarbas Joaci [4] ; Manzanera, Antoine [3] ; Bruno, Odemir M. [1, 2]
Total Authors: 4
Affiliation:
[1] Univ Sao Paulo, Sao Carlos Inst Phys, POB 369, BR-13560970 Sao Carlos, SP - Brazil
[2] Univ Sao Paulo, Inst Math & Comp Sci, Ave Trabalhador Sao Carlense 400, BR-13566590 Sao Carlos, SP - Brazil
[3] Inst Polytech Paris, ENSTA Paris, U2IS, 828 Blvd Marechaux, F-91120 Palaiseau - France
[4] Univ Fed Ceara, Ctr Sobral, Programa Posgrad Engn Elect & Comp, Curso Engn Comp, Campus Sobral Rua Coronel Estanislau Frota 563, BR-62010560 Fortaleza, Ceara - Brazil
Total Affiliations: 4
Document type: Journal article
Source: APPLIED SOFT COMPUTING; v. 114, JAN 2022.
Web of Science Citations: 0
Abstract

Dynamic textures (DTs) are pseudo-periodic data on a space × time support that can represent many natural phenomena captured in video footage. Their modeling and recognition are useful in many computer vision applications. This paper presents an approach for DT analysis combining a graph-based description from the Complex Network framework and a learned representation from the Randomized Neural Network (RNN) model. First, a directed space × time graph model with only one parameter (radius) is used to represent both the motion and the appearance of the DT. Then, instead of using classical graph measures as features, the DT descriptor is learned using an RNN, which is trained to predict the gray level of pixels from local topological measures of the graph. The weight vector of the output layer of the RNN forms the descriptor. Several structures were tested for the RNNs, resulting in networks with a single hidden layer of 4, 24, or 29 neurons and input layers of size 4 or 10, i.e., 6 different RNNs. Experimental results on DT recognition conducted on the Dyntex++ and UCLA datasets show the high discriminatory power of our descriptor, with accuracies of 99.92%, 98.19%, 98.94% and 95.03% on the UCLA-50, UCLA-9, UCLA-8 and Dyntex++ databases, respectively. These results outperform various literature approaches, particularly for UCLA-50. More significantly, our method is competitive in terms of computational efficiency and descriptor size. It is therefore a good option for real-time dynamic texture segmentation, as illustrated by experiments conducted on videos acquired from a moving boat. (C) 2021 Published by Elsevier B.V. (AU)
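The descriptor construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature matrix `X` stands in for the local topological measures of the space × time graph (one row per pixel), `y` for the pixel gray levels, and the activation, normalization, and weight-initialization scheme are assumptions. The key property of a Randomized Neural Network is that the hidden weights are random and fixed, and only the output weights are solved in closed form; that output weight vector is the texture descriptor.

```python
import numpy as np

def rnn_descriptor(X, y, q=24, seed=0):
    """Sketch of a Randomized Neural Network descriptor.

    X : (n, p) array of per-pixel graph features (hypothetical stand-in
        for the local topological measures used in the paper).
    y : (n,) array of pixel gray levels to be predicted.
    q : number of hidden neurons (the paper reports 4, 24, or 29).
    Returns the (q + 1,) output-weight vector used as the descriptor.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Prepend a bias column; hidden weights are random and never trained.
    Xa = np.hstack([np.ones((n, 1)), X])
    W = rng.standard_normal((q, p + 1))
    # Hidden-layer activations (sigmoid assumed here).
    Z = 1.0 / (1.0 + np.exp(-(Xa @ W.T)))
    Za = np.hstack([np.ones((n, 1)), Z])
    # Output weights solved by least squares; this vector is the descriptor.
    f, *_ = np.linalg.lstsq(Za, y, rcond=None)
    return f
```

Because the output layer is fitted by a single least-squares solve rather than iterative training, computing the descriptor is fast, which is consistent with the computational-efficiency claim in the abstract.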

FAPESP's process: 16/23763-8 - Modeling and analysis of complex networks for computer vision
Grantee: Lucas Correia Ribas
Support Opportunities: Scholarships in Brazil - Doctorate
FAPESP's process: 18/22214-6 - Towards a convergence of technologies: from sensing and biosensing to information visualization and machine learning for data analysis in clinical diagnosis
Grantee: Osvaldo Novais de Oliveira Junior
Support Opportunities: Research Projects - Thematic Grants
FAPESP's process: 16/18809-9 - Deep learning and complex networks applied to computer vision
Grantee: Odemir Martinez Bruno
Support Opportunities: Research Grants - Research Partnership for Technological Innovation - PITE
FAPESP's process: 14/08026-1 - Artificial vision and pattern recognition applied to vegetal plasticity
Grantee: Odemir Martinez Bruno
Support Opportunities: Regular Research Grants
FAPESP's process: 19/03277-0 - Pattern Recognition in Complex Networks using Distance Transform
Grantee: Lucas Correia Ribas
Support Opportunities: Scholarships abroad - Research Internship - Doctorate