Deep Boltzmann machines for event recognition in videos

Grant number: 19/07825-1
Support Opportunities: Scholarships in Brazil - Master
Effective date (Start): May 01, 2019
Effective date (End): February 28, 2021
Field of knowledge: Physical Sciences and Mathematics - Computer Science - Computing Methodologies and Techniques
Cooperation Agreement: Microsoft Research
Principal Investigator: João Paulo Papa
Grantee: Mateus Roder
Host Institution: Faculdade de Ciências (FC). Universidade Estadual Paulista (UNESP). Campus de Bauru. Bauru, SP, Brazil
Host Company:Universidade Estadual Paulista (UNESP). Campus de Rio Claro. Instituto de Geociências e Ciências Exatas (IGCE)
Associated research grant:17/25908-6 - Weakly supervised learning for compressed video analysis on retrieval and classification tasks for visual alert, AP.PITE


This research project addresses the problem of event recognition in videos, which spans domains such as surveillance and security, medicine, high-performance industry, and smart homes. These domains are often tackled with machine learning techniques, notably deep learning, which can produce sufficiently accurate answers from a large set of labeled data, i.e., under the supervised learning paradigm. Accordingly, the scientific community has devoted part of its effort to techniques based on the unsupervised learning paradigm, in which unlabeled data are used to extract patterns and deep features across a variety of problems. However, for some of these tasks at least a small amount of labeled data is usually available, and using it as a "tool" may positively drive the whole learning process. In this project, we intend to investigate the analysis, retrieval, and classification of videos in the compressed domain using small training datasets. The main goal is to examine Deep Boltzmann Machines (DBMs) capable of analyzing compressed video sequences and extracting features to feed supervised classifiers. The challenge of the research is to use DBMs to investigate, represent, and classify videos given restricted labeled data. The proposed approach aims to exploit the maximum amount of available information so that it remains suitable for small training sets. We intend to explore: (i) deep learning representations; (ii) unsupervised contextual measures; and (iii) fusion techniques, in order to augment the initially labeled data. The first challenge involves analyzing and representing videos in the compressed domain using deep learning techniques. Based on these representations, we intend to investigate strategies to expand the training sets using unsupervised contextual measures.
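The pipeline described above, unsupervised pretraining of a Boltzmann-style model followed by a supervised classifier trained on a small labeled subset, can be sketched in scikit-learn. Note this is a minimal illustration under assumptions: scikit-learn ships only a single-layer `BernoulliRBM` (the building block of a DBM, not a full DBM), and the synthetic array below merely stands in for features extracted from compressed video.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)

# Toy stand-in for frame-level features from compressed video:
# 200 samples, 64 dimensions in [0, 1], two synthetic event classes.
X = rng.rand(200, 64)
y = (X[:, :32].mean(axis=1) > X[:, 32:].mean(axis=1)).astype(int)

# Unsupervised feature learning (RBM) feeding a supervised classifier,
# trained on only a small labeled subset.
model = Pipeline([
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X[:50], y[:50])          # small labeled training set
acc = model.score(X[50:], y[50:])  # accuracy on held-out samples
```

A full DBM would stack several such layers and fine-tune them jointly; the pipeline structure, however, is the same.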
Given the labeled sets obtained, merging strategies will be used to combine several classification methods. Although the methods under investigation can be applied in several domains, we intend to select specific domains to validate the proposed approaches, considering the availability of datasets for experimental evaluation.
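One simple form of the merging strategies mentioned above is classifier fusion by soft voting, averaging the class probabilities of heterogeneous classifiers. The sketch below is an assumption-laden illustration (the project does not specify which fusion rule or base classifiers are used), with `make_classification` standing in for labeled video-event features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for features of labeled video events (two classes).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Soft voting: average predicted class probabilities across
# several heterogeneous classifiers, then pick the argmax.
fusion = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("tree", DecisionTreeClassifier(random_state=0)),
    ],
    voting="soft",
)
fusion.fit(X[:200], y[:200])
accuracy = fusion.score(X[200:], y[200:])
```

Other fusion rules (majority voting, weighted averaging, stacking) follow the same pattern of combining per-classifier outputs into a single decision.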


Scientific publications
(References retrieved automatically from Web of Science and SciELO through information on FAPESP grants and their corresponding numbers as mentioned in the publications by the authors)
RODER, MATEUS; PASSOS, LEANDRO APARECIDO; DE ROSA, GUSTAVO H.; DE ALBUQUERQUE, VICTOR HUGO C.; PAPA, JOAO PAULO. Reinforcing learning in Deep Belief Networks through nature-inspired optimization. APPLIED SOFT COMPUTING, v. 108. (19/07825-1, 18/21934-5, 17/25908-6, 19/07665-4, 19/02205-5, 13/07375-0, 14/12236-1)
Academic Publications
(References retrieved automatically from State of São Paulo Research Institutions)
RODER, Mateus. Deep Boltzmann machines for video events recognition. 2021. Master's Dissertation - Universidade Estadual Paulista (Unesp). Faculdade de Ciências. Bauru.
