(Reference obtained automatically from the Web of Science, based on the FAPESP funding acknowledgment and the corresponding grant number included in the publication by the authors.)

Efficient Mitchell's Approximate Log Multipliers for Convolutional Neural Networks

Author(s):
Kim, Min Soo [1] ; Del Barrio, Alberto A. [2] ; Oliveira, Leonardo Tavares [3] ; Hermida, Roman [2] ; Bagherzadeh, Nader [1]
Total number of authors: 5
Author affiliations:
[1] Univ Calif Irvine, Dept Elect Engn & Comp Sci, Irvine, CA 92697 - USA
[2] Univ Complutense Madrid, Comp Architecture & Automat, Madrid 28040 - Spain
[3] Univ Fed Sao Carlos, Comp Engn, BR-13565905 Sao Carlos, SP - Brazil
Total number of affiliations: 3
Document type: Scientific article
Source: IEEE TRANSACTIONS ON COMPUTERS; v. 68, n. 5, p. 660-675, MAY 2019.
Web of Science citations: 1
Abstract

This paper proposes energy-efficient approximate multipliers based on Mitchell's log multiplication, optimized for performing inferences on convolutional neural networks (CNNs). Various design techniques are applied to the log multiplier, including a fully parallel leading-one detector (LOD), efficient shift-amount calculation, and exact zero computation. Additionally, truncation of the operands is studied to create a customizable log multiplier that further reduces energy consumption. The paper also proposes using one's complement to handle negative numbers, as an approximation of the two's complement used in prior works. The viability of the proposed designs is supported by detailed formal analysis as well as experimental results on CNNs. The experiments also provide insights into the effect of approximate multiplication in CNNs, identifying the importance of minimizing the range of error. The proposed customizable design at w = 8 saves up to 88 percent energy compared to an exact 32-bit fixed-point multiplier, with only a 0.2 percent performance degradation on the ImageNet ILSVRC2012 dataset. (AU)
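For readers unfamiliar with the underlying technique, the following is a minimal Python sketch of Mitchell's approximate log multiplication for positive integers. It is an illustration only, not the paper's design: the function name mitchell_multiply is hypothetical, the mantissas are handled in floating point for clarity rather than with the fixed-point datapath of a hardware multiplier, and the paper's specific contributions (fully parallel LOD, operand truncation, one's-complement sign handling) are not reproduced here.

```python
def mitchell_multiply(a: int, b: int) -> int:
    """Approximate a * b for positive integers via Mitchell's log multiplication.

    log2(x) is approximated as k + m, where k = floor(log2(x)) and
    m = x / 2**k - 1 is the mantissa in [0, 1). The logs are added and an
    approximate antilog is taken, avoiding a true multiplication.
    """
    if a == 0 or b == 0:
        return 0  # exact zero computation, as mentioned in the abstract

    # Leading-one detection: characteristic k = floor(log2(x))
    ka, kb = a.bit_length() - 1, b.bit_length() - 1

    # Mantissas (floating point here only for readability)
    ma = a / (1 << ka) - 1.0
    mb = b / (1 << kb) - 1.0

    # Add the approximate logs, then take the approximate antilog
    m_sum = ma + mb
    if m_sum < 1.0:
        return int((1 << (ka + kb)) * (1.0 + m_sum))
    return int((1 << (ka + kb + 1)) * m_sum)


# Example: 100 * 100 is approximated as 9216 (exact value 10000),
# consistent with Mitchell's worst-case error of roughly 11 percent.
print(mitchell_multiply(100, 100))
```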

FAPESP Grant: 18/00096-1 - Implementation of a convolution layer using approximate multipliers on FPGA for convolutional neural networks
Grantee: Leonardo Tavares Oliveira
Support type: Scholarships abroad - Research Internship - Scientific Initiation