
Dynamic texture analysis using networks generated by deterministic partially self-avoiding walks

Author(s):
Ribas, Lucas C. [1, 2] ; Bruno, Odemir M. [1, 2]
Total Authors: 2
Affiliation:
[1] Univ Sao Paulo, Sao Carlos Inst Phys, Sci Comp Grp, POB 369, BR-13560970 Sao Carlos, SP - Brazil
[2] Univ Sao Paulo, Inst Math & Comp Sci, Ave Trabalhador Sao Carlense 400, BR-13566590 Sao Carlos, SP - Brazil
Total Affiliations: 2
Document type: Journal article
Source: PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS; v. 541, MAR 1 2020.
Web of Science Citations: 0
Abstract

Dynamic textures are sequences of images (video) in which the concept of texture patterns is extended to the spatiotemporal domain. This research field has attracted attention due to the range of applications in different areas of science and the emergence of a large number of multimedia datasets. Unlike static textures, methods for dynamic texture analysis must also deal with the time domain, which makes representation more challenging. It is therefore important to obtain features that properly describe both the appearance and the motion properties of the dynamic texture. In this paper, we propose a new method for dynamic texture analysis based on Deterministic Partially Self-avoiding Walks (DPSWs) and network science theory. Each pixel of the video is considered a vertex of the network, and the edges are given by the movements of the deterministic walk between pixels. The feature vector is obtained by calculating network measures from the networks generated by the DPSWs. The modeled networks incorporate important characteristics of the DPSWs' transitivity and their spatial arrangement in the video. Two different strategies are tested to apply the DPSWs to the video and capture appearance and motion characteristics: one applies the DPSWs on three orthogonal planes, while the other is based on spatial and temporal descriptors. We validate the proposed method on the Dyntex++ and UCLA databases (and their variants), two well-known dynamic texture databases. The results demonstrate the effectiveness of the proposed approach using a small feature vector for both strategies. The proposed method also improves on the performance of previous DPSW-based and network-based methods. (C) 2019 Published by Elsevier B.V.
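The pixel-to-vertex construction described in the abstract can be illustrated with a minimal sketch on a single 2D frame. This is not the paper's exact algorithm: the function name `dpsw_network`, the memory size, the fixed step cap, the tie-breaking rule, and the simplified stop condition (halting on any revisit rather than on proper attractor detection) are all illustrative assumptions, and the real method operates on video volumes (e.g., on three orthogonal planes) rather than a single frame. Each pixel starts one walk; at each step the walker moves to the 8-neighbour with the most similar intensity that is not among the last `memory` visited pixels, and consecutive steps add edges to the network.

```python
import numpy as np

def dpsw_network(frame, memory=2, max_steps=20):
    """Sketch: build an edge set from deterministic partially
    self-avoiding walks started at every pixel of a 2D frame.
    (Hypothetical simplification of the DPSW idea, not the
    published algorithm.)"""
    h, w = frame.shape
    edges = set()
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for start in range(h * w):
        r, c = divmod(start, w)
        visited = [start]
        for _ in range(max_steps):
            recent = visited[-memory:]  # memory window of the walker
            candidates = []
            for dr, dc in offsets:
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    nid = nr * w + nc
                    if nid not in recent:
                        # rule: move to the most similar neighbour
                        diff = abs(int(frame[nr, nc]) - int(frame[r, c]))
                        candidates.append((diff, nid))
            if not candidates:
                break
            _, nxt = min(candidates)  # ties broken by smaller pixel id
            a, b = visited[-1], nxt
            edges.add((min(a, b), max(a, b)))  # undirected edge
            if nxt in visited:
                break  # simplified stop: walker re-entered an old pixel
            visited.append(nxt)
            r, c = divmod(nxt, w)
    return edges

def mean_degree(edges, n_vertices):
    """One example network measure usable as a feature."""
    degree = np.zeros(n_vertices)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return float(degree.mean())

frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
net = dpsw_network(frame, memory=2)
feature = mean_degree(net, 16)
```

In the paper, several such measures computed over networks from many walk configurations would be concatenated into the final feature vector.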

FAPESP's process: 16/23763-8 - Modeling and analysis of complex networks for computer vision
Grantee: Lucas Correia Ribas
Support type: Scholarships in Brazil - Doctorate
FAPESP's process: 16/18809-9 - Deep learning and complex networks applied to computer vision
Grantee: Odemir Martinez Bruno
Support type: Research Grants - Research Partnership for Technological Innovation - PITE
FAPESP's process: 14/08026-1 - Artificial vision and pattern recognition applied to vegetal plasticity
Grantee: Odemir Martinez Bruno
Support type: Regular Research Grants