
ASiM: An Open-Source Simulation Framework for Evaluating the Accuracy of SRAM-based Analog Compute-in-Memory for Deep Neural Networks


Core Concepts
Analog Compute-in-Memory (ACiM) offers promising energy efficiency for DNNs, but its accuracy is susceptible to noise, demanding high design standards and innovative solutions like hybrid architectures and majority voting for reliable real-world deployment.
Abstract
  • Bibliographic Information: Zhang, W., Ando, S., Chen, Y., & Yoshioka, K. (2021). ASiM: Improving Transparency of SRAM-based Analog Compute-in-Memory Research with an Open-Source Simulation Framework. Journal of LaTeX Class Files, 14(8), 1-8.

  • Research Objective: This paper introduces ASiM, an open-source simulation framework designed to evaluate the inference accuracy of SRAM-based Analog Compute-in-Memory (ACiM) circuits for Deep Neural Networks (DNNs). The authors aim to address the lack of transparency in ACiM research regarding inference accuracy and provide a tool for guiding design decisions.

  • Methodology: The authors developed ASiM as a plug-and-play tool integrated with the PyTorch ecosystem. It simulates SRAM-based ACiM computations while incorporating design factors such as ADC precision, bit-parallel operation, and analog noise. The framework's accuracy is validated against established metrics such as CSNR and through experiments on standard DNN models (ResNet, ViT) and datasets (CIFAR-10, ImageNet); an illustrative sketch of this kind of bit-serial simulation appears after this list.

  • Key Findings: The study reveals that ACiM accuracy is highly sensitive to noise due to the limited dynamic range of ADCs. Even minor ADC errors can significantly degrade accuracy, especially in complex DNN models and tasks. While bit-parallel ACiM enhances energy efficiency, it increases sensitivity to analog noise.

  • Main Conclusions: The authors conclude that achieving high accuracy in ACiM requires careful consideration of design parameters, particularly ADC precision. They propose solutions like hybrid Compute-in-Memory architectures and majority voting to mitigate noise and improve accuracy without compromising energy efficiency.

  • Significance: This research provides valuable insights into the accuracy challenges of ACiM and offers practical solutions for its reliable deployment in real-world DNN applications. The open-source ASiM framework promotes transparency and facilitates further research in this domain.

  • Limitations and Future Research: The paper primarily focuses on SRAM-based ACiM. Exploring the framework's applicability to other ACiM technologies like RRAM and Flash could be a potential area for future research. Additionally, investigating advanced noise mitigation techniques beyond those proposed could further enhance ACiM accuracy.
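The methodology and findings summarized above can be made concrete with a minimal, hedged sketch. The code below is not ASiM's actual API; the function `acim_dot_product` and its parameters (`adc_bits`, `sigma_noise`, `n_votes`) are hypothetical names introduced only to show how a single bit-serial ACiM column could be simulated in PyTorch: an ideal bitline MAC, Gaussian analog noise, ADC clamping and quantization, and majority voting approximated as the median of repeated conversions.

```python
# Illustrative sketch only (NOT the ASiM API): one bit-serial ACiM column with
# ADC quantization, Gaussian "analog" noise, and optional repeated conversions.
import torch

def acim_dot_product(x_bits, w, adc_bits=8, sigma_noise=0.0, n_votes=1):
    """Simulate one bit-serial ACiM column readout.

    x_bits: (B, N) tensor of 0/1 activation bits for one bit-plane.
    w:      (N,) weight column assumed to be mapped onto the SRAM array.
    """
    n_rows = w.numel()
    # Ideal analog MAC on the bitline: sum of the weights selected by the bits.
    mac = x_bits.float() @ w.float()                    # shape (B,)
    # Dynamic range of the bitline is bounded by the number of rows.
    full_scale = float(n_rows * w.abs().max())
    lsb = full_scale / (2 ** adc_bits - 1)              # ADC step size
    readouts = []
    for _ in range(n_votes):
        noisy = mac + sigma_noise * full_scale * torch.randn_like(mac)
        # ADC: clamp to the dynamic range, then quantize to the nearest code.
        code = torch.round(torch.clamp(noisy, -full_scale, full_scale) / lsb)
        readouts.append(code * lsb)
    # Majority voting approximated here as the median of repeated conversions.
    return torch.median(torch.stack(readouts), dim=0).values

# Toy usage: 256 rows, ternary weights, noise at roughly the 1-LSB scale.
torch.manual_seed(0)
w = torch.randint(-1, 2, (256,))
x_bits = torch.randint(0, 2, (4, 256))
ideal = acim_dot_product(x_bits, w, adc_bits=8, sigma_noise=0.0)
noisy = acim_dot_product(x_bits, w, adc_bits=8, sigma_noise=0.005, n_votes=5)
print("max readout error:", (ideal - noisy).abs().max().item())
```

Raising `sigma_noise` or lowering `adc_bits` in this toy setup makes the readout errors described in the key findings directly visible, and increasing `n_votes` shows how repeated conversions can pull the readout back toward the ideal value.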

Stats
  • Increasing ADC precision by one bit in the prototype chip used in the study results in less than a 30% energy increase up to 11 bits.
  • Encoding an additional bit for activations reduces ACiM cycles by half, leading to significant energy gains.
  • Encoding activations with 4 bits can reduce energy per classification by nearly fourfold across most models and datasets with minimal accuracy loss.
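The cycle and energy figures above can be sanity-checked with a back-of-the-envelope sketch. The 8-bit activation width, the ceil(W/b) cycle model, and the at-most-30%-per-bit ADC energy factor below are assumptions for illustration only, not numbers taken from the prototype chip.

```python
# Back-of-the-envelope sketch of the cycle/energy trade-offs quoted above.
import math

# Assumptions for illustration only:
ACT_BITS = 8             # assumed activation precision
ADC_ENERGY_FACTOR = 1.3  # each extra ADC bit costs at most ~30% more energy

def acim_cycles(bits_per_cycle: int, act_bits: int = ACT_BITS) -> int:
    """Cycles needed when `bits_per_cycle` activation bits are encoded per ACiM cycle."""
    return math.ceil(act_bits / bits_per_cycle)

for b in (1, 2, 4):
    speedup = acim_cycles(1) / acim_cycles(b)
    print(f"{b}-bit activation encoding: {acim_cycles(b)} cycles "
          f"({speedup:.0f}x vs the 1-bit bit-serial baseline)")

# Upper-bound ADC energy scaling, normalized to an 8-bit ADC.
energy = 1.0
for bits in range(8, 12):
    print(f"{bits}-bit ADC: relative energy <= {energy:.2f}")
    energy *= ADC_ENERGY_FACTOR
```

Under these assumptions, 4-bit activation encoding needs roughly a quarter of the cycles of the bit-serial baseline, which is consistent with the reported near-fourfold reduction in energy per classification.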
Quotes
"Despite of successive improvement on energy efficiency of ACiM designs, discussion regarding inference accuracy remains limited." "Our analysis reveals that even minor ADC readout errors of 1 LSB can propagate through the network and cause significant accuracy degradation." "While bit-parallel ACiM can tolerate some quantization noise and thus significantly boost ACiM energy efficiency, it is more sensitive to analog noise, necessitating stricter design standards for ACiM circuits."
