Calibration-Aided Edge Inference Offloading via Adaptive Model Partitioning of Deep Neural Networks

Author(s):
Pacheco, Roberto G.; Couto, Rodrigo S.; Simeone, Osvaldo; IEEE
Total number of authors: 4
Document type: Scientific article
Source: IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021); 6 pp., 2021.
Abstract

Mobile devices can offload deep neural network (DNN)-based inference to the cloud, overcoming local hardware and energy limitations. However, offloading adds communication delay, increasing the overall inference time, and hence should be used only when needed. One approach to this problem is adaptive model partitioning based on early-exit DNNs: inference starts at the mobile device, and an intermediate layer estimates the accuracy. If the estimated accuracy is sufficient, the device takes the inference decision; otherwise, the remaining layers of the DNN run in the cloud. The device thus offloads the inference to the cloud only when it cannot classify a sample with high confidence. This offloading requires a correct accuracy prediction at the device. Nevertheless, DNNs are typically miscalibrated, producing overconfident decisions. This work shows that using a miscalibrated early-exit DNN for offloading via model partitioning can significantly decrease inference accuracy. In contrast, we argue that applying a calibration algorithm prior to deployment solves this problem, allowing for more reliable offloading decisions.
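The offloading rule the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes temperature scaling as the calibration algorithm (a standard post-hoc method; the abstract does not name one), and the threshold and temperature values are hypothetical placeholders that would normally be tuned on a validation set.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; T > 1 softens overconfident outputs.
    # Subtracting the max keeps exp() numerically stable.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def should_offload(early_exit_logits, threshold=0.9, temperature=1.0):
    """Offloading decision at an early exit of the on-device DNN.

    Returns True when the (calibrated) confidence of the early exit is
    below the threshold, i.e. the remaining layers should run in the cloud.
    """
    probs = softmax(early_exit_logits, temperature)
    confidence = max(probs)  # top-1 probability as the confidence estimate
    return confidence < threshold

# A miscalibrated exit can look confident on its raw (T = 1) output and
# wrongly keep the decision on the device; with a fitted temperature the
# confidence drops below the threshold and the sample is offloaded.
logits = [10.0, 0.0, 0.0]
decision_raw = should_offload(logits, threshold=0.9, temperature=1.0)   # False
decision_cal = should_offload(logits, threshold=0.9, temperature=5.0)   # True
```

The design point the sketch illustrates is that calibration changes only the confidence estimate, not the predicted class, so it affects the offloading decision without altering which label the early exit would output.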

FAPESP grant: 15/24494-8 - Communication and processing of big data in computational clouds and fogs
Grantee: Nelson Luis Saldanha da Fonseca
Support type: Research Grant - Thematic Project