An Evaluation of Temporal Neighborhood Coding Variants in Smartphone-Based Human Activity Recognition

Author(s):
da Luz, Gustavo P. C. P.; Napoli, Otavio O.; Delgado, J. V.; Rocha, Anderson R.; Boccato, Levy; Borin, Edson
Total Authors: 6
Document type: Journal article
Source: INTELLIGENT SYSTEMS, BRACIS 2024, PT III; v. 15414, 13 pp., 2025-01-01.
Abstract

Self-Supervised Learning (SSL) has emerged as a powerful tool for learning valuable representations from large amounts of unlabeled data. Human Activity Recognition (HAR) is a field that may benefit from SSL techniques, as there is a large amount of unlabeled data, and labeling is time-consuming and costly. Temporal Neighborhood Coding (TNC) is an SSL technique that shows promise in extracting meaningful features automatically. However, the effectiveness of different TNC variations for HAR data has yet to be comprehensively evaluated. Current research focuses on specific contexts, leaving a gap in understanding how these variations perform on the same dataset. Additionally, it is necessary to assess the impact of applying TNC to raw data instead of to handcrafted features. This paper systematically evaluates different variations of TNC for HAR, investigating their performance with a standardized quantitative and qualitative approach, focusing on raw data but comparing against feature-engineered data from the same dataset. Our findings show that the dilated convolution encoder proposed by TS2Vec is currently the best alternative in terms of performance, achieving 95% accuracy on the UCI dataset with raw data. Additionally, we verified that replacing the Augmented Dickey-Fuller (ADF) statistical test with cosine similarity to select neighboring windows makes training seven to nine times faster with little impact on performance. Finally, we found that learning from handcrafted features of the UCI dataset is easier, but advanced versions of TNC can effectively learn robust features from raw data, achieving performance comparable to models trained on the handcrafted features. (AU)
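The abstract's speed-up claim comes from swapping the per-candidate ADF stationarity test for a single cosine-similarity comparison when deciding which windows count as temporal neighbors of an anchor. The sketch below illustrates that idea only; it is not taken from the paper's code. The function names, the candidate-sampling radius, and the similarity threshold are illustrative assumptions, and the signal is assumed to be a 1-D NumPy array.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened windows."""
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def select_neighbors(signal, anchor_start, window_len,
                     radius=5 * 128, n_candidates=20, threshold=0.9, rng=None):
    """Sample candidate windows within `radius` samples of the anchor and
    label them as neighbors (positives) or non-neighbors by cosine similarity
    against the anchor window. Each decision is one dot product, replacing
    the costlier ADF test run per candidate in the original TNC scheme.
    Returns two lists of window start indices."""
    rng = rng or np.random.default_rng()
    anchor = signal[anchor_start:anchor_start + window_len]
    lo = max(0, anchor_start - radius)
    hi = min(len(signal) - window_len, anchor_start + radius)
    neighbors, non_neighbors = [], []
    for _ in range(n_candidates):
        start = int(rng.integers(lo, hi + 1))
        candidate = signal[start:start + window_len]
        if cosine_similarity(anchor, candidate) >= threshold:
            neighbors.append(start)
        else:
            non_neighbors.append(start)
    return neighbors, non_neighbors
```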

FAPESP's process: 13/08293-7 - CCES - Center for Computational Engineering and Sciences
Grantee: Munir Salomao Skaf
Support Opportunities: Research Grants - Research, Innovation and Dissemination Centers - RIDC