Congress 1695
Author/s
  • Ó. Pereira-Rial, D. García-Lesta, V.M. Brea, P. López, D. Cabello
Source
  • 2022 IEEE International Symposium on Circuits and Systems (ISCAS). Austin (Texas), USA. 2022

Design of a 5-bit SRAM-based In-Memory Computing Cell for Deep Learning Models

Mixed-mode neural network hardware accelerators for deep convolutional neural networks (CNNs) strive to cope with a high number of input feature maps and increasing bit depths for both weights and inputs. As an example of this need, the ResNet model for image classification comprises 512 3×3 feature filters in its conv5 layer. This would lead to 4608 multipliers driving a summing node for truly concurrent processing of all the input feature maps, which poses a challenge for mixed-mode design. This paper addresses the design of a 5-bit signed SRAM-based in-memory computing cell in 180 nm 3.3 V CMOS technology, dealing with the impact of increasing the number of input feature maps. The data presented in the paper are based on electrical and post-layout simulations.
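The multiplier count cited in the abstract follows directly from the conv5 layer geometry: one multiplier per kernel weight when every input feature map is processed concurrently. A minimal sketch of that arithmetic (variable names are illustrative, not from the paper):

```python
# Arithmetic behind the abstract's multiplier count:
# ResNet's conv5 layer has 512 feature filters, each of size 3 x 3.
# Fully concurrent processing needs one multiplier per kernel weight,
# all driving a single analog summing node.
filters = 512            # number of 3 x 3 feature filters in conv5
kernel_h, kernel_w = 3, 3

multipliers = filters * kernel_h * kernel_w
print(multipliers)       # 4608 multipliers on the summing node
```

This is what makes a fully parallel mixed-mode implementation demanding: thousands of current or charge contributions must be summed accurately on one node.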
Keywords: In-memory computing, Convolutional neural networks (ConvNets), Neural network hardware accelerator, mixed-mode, artificial intelligence on the edge