Mini-batching with Fused Training and Testing for Data Streams Processing on the Edge

Author(s):
Luna, Reginaldo; Cassales, Guilherme; Pfahringer, Bernhard; Bifet, Albert; Gomes, Heitor Murilo; Senger, Hermes
Total number of authors: 6
Document type: Scientific article
Source: Proceedings of the 21st ACM International Conference on Computing Frontiers (CF 2024); 10 pp., 2024.
Abstract

Edge Computing (EC) has emerged as a solution to reduce energy demand and greenhouse gas emissions from digital technologies. EC supports low latency, mobility, and location awareness for delay-sensitive applications by bridging the gap between cloud computing services and end-users. Machine learning (ML) methods have been applied in EC for data classification and information processing. Ensemble learners have often proven to yield high predictive performance on data stream classification problems. Mini-batching is a technique proposed to improve cache reuse in multi-core architectures running bagging ensembles for the classification of online data streams, which speeds up the application and reduces energy consumption. However, the original mini-batching offers limited cache-reuse benefits and hinders the accuracy of the ensembles (i.e., their capacity to detect behavior changes in data streams). In this paper, we improve mini-batching by fusing the continuous training and test loops for the classification of data streams. We evaluated the new strategy by comparing its performance and energy efficiency with the original mini-batching for data stream classification using six ensemble algorithms and four benchmark datasets. We also compare mini-batching strategies with two hardware-based strategies supported by commodity multi-core processors commonly used in EC. Results show that mini-batching strategies can significantly reduce energy consumption in 95% of the experiments. Mini-batching improved energy efficiency by 96% on average and 169% in the best case. Likewise, our new mini-batching strategy improved energy efficiency by 136% on average and 456% in the best case. These strategies also support better control of the balance between performance, energy efficiency, and accuracy.
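
The core idea described in the abstract, fusing the test and train loops within each mini-batch so that every ensemble member is brought into cache only once per batch, can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: it assumes hypothetical ensemble members exposing predict(x) and train(x, y) methods and a hypothetical mini_batches iterable, and it only shows how the loop structure changes between the original and fused strategies.

from collections import Counter

def majority_vote(votes):
    # Simple plurality vote over the members' predictions.
    return Counter(votes).most_common(1)[0][0]

def original_minibatching(ensemble, mini_batches):
    # Original mini-batching (sketch): separate test and train passes,
    # so each member's state is loaded into cache twice per mini-batch.
    for batch in mini_batches:
        votes = [[] for _ in batch]
        for member in ensemble:                     # test pass
            for i, (x, _) in enumerate(batch):
                votes[i].append(member.predict(x))
        for member in ensemble:                     # train pass (member reloaded)
            for x, y in batch:
                member.train(x, y)
        yield [majority_vote(v) for v in votes]

def fused_minibatching(ensemble, mini_batches):
    # Fused strategy (sketch): each member tests and then trains on the
    # mini-batch in a single pass, reusing its cached state.
    for batch in mini_batches:
        votes = [[] for _ in batch]
        for member in ensemble:                     # single pass per member
            for i, (x, y) in enumerate(batch):
                votes[i].append(member.predict(x))  # test the instance first
                member.train(x, y)                  # then train on it
        yield [majority_vote(v) for v in votes]

Because each instance is still tested before a member trains on it, the test-then-train (prequential) order is preserved at the instance level, while the number of passes over each member's state per mini-batch drops from two to one.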

FAPESP Grant: 19/26702-8 - Trends in high-performance computing, from resource management to new computer architectures
Grantee: Alfredo Goldman vel Lejbman
Support type: Research Grant - Thematic Project
FAPESP Grant: 23/00566-6 - SUSTAINABLE - Supporting Artificial Intelligence Applications with Scalability and Efficiency at the Edge
Grantee: Hermes Senger
Support type: Research Grant - Regular