TY - GEN
T1 - Technology/Algorithm Co-Design for Robust Brain-Inspired Hyperdimensional In-memory Computing
AU - Genssler, Paul R.
AU - Amrouch, Hussam
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - Semiconductor technology scaling is reaching its limits. At 2 nm, a transistor comprises only a few atoms, making these technology nodes susceptible to variation and defects. However, current algorithms rely on robust and accurate hardware. Regaining the required robustness diminishes the gains from technology scaling. Therefore, it is not the technology but the algorithms that have to become more robust. Brain-inspired hyperdimensional computing (HDC) is one such machine learning algorithm [1], [2] that can tolerate errors in memory and computation [3], [4]. Its robustness is achieved by utilizing inherently redundant vectors, which, in turn, increase memory consumption and challenge the traditional von Neumann architecture [5]. In-memory computing architectures are a promising solution to reduce energy-intensive data transfers between the CPU and off-chip memory. Several tradeoffs arise: for example, increasing the memory capacity improves the inference accuracy of the HDC model but requires more energy and chip area, while other in-memory design decisions reduce energy consumption but also the inference accuracy. A complex design space is created in which decisions at the technology level impact the algorithm and vice versa [6].
UR - http://www.scopus.com/inward/record.url?scp=85190383606&partnerID=8YFLogxK
U2 - 10.1109/IEEECONF59524.2023.10477030
DO - 10.1109/IEEECONF59524.2023.10477030
M3 - Conference contribution
AN - SCOPUS:85190383606
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 280
BT - Conference Record of the 57th Asilomar Conference on Signals, Systems and Computers, ACSSC 2023
A2 - Matthews, Michael B.
PB - IEEE Computer Society
T2 - 57th Asilomar Conference on Signals, Systems and Computers, ACSSC 2023
Y2 - 29 October 2023 through 1 November 2023
ER -
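
Editor's note: the abstract states that HDC tolerates memory and computation errors because its high-dimensional vectors are inherently redundant. The following minimal Python sketch illustrates that property in the abstract's spirit; it is not the paper's implementation, and all names (random_hv, bundle, flip_bits, the class labels) are hypothetical. It uses dense binary hypervectors, majority-vote bundling, and Hamming similarity, and shows that nearest-prototype classification typically survives even a substantial fraction of random bit flips, such as those an unreliable memory might cause.

```python
# Illustrative sketch (not from the cited paper): dense binary
# hyperdimensional computing, showing why redundant 10,000-bit vectors
# tolerate random bit errors such as those caused by device variation.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; redundancy grows with D

def random_hv():
    """Random dense binary hypervector; random pairs differ in ~50% of bits."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bundle(hvs):
    """Bit-wise majority vote: the bundle stays similar to each input."""
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def hamming_sim(a, b):
    """Similarity in [0, 1]; 1.0 means identical vectors."""
    return 1.0 - np.count_nonzero(a != b) / D

def flip_bits(hv, rate):
    """Emulate memory errors by flipping a random fraction of bits."""
    mask = rng.random(D) < rate
    return hv ^ mask.astype(np.uint8)

# Two class prototypes, each bundled from a few random "training" vectors.
classes = {name: bundle([random_hv() for _ in range(5)]) for name in "AB"}

# A query resembling class A (its prototype with 5% of bits perturbed).
query = flip_bits(classes["A"], 0.05)

# Even with 20% of the stored prototype bits corrupted, the lookup
# usually still returns class A, because independent errors average
# out across thousands of dimensions.
for err in (0.0, 0.1, 0.2):
    noisy = {n: flip_bits(hv, err) for n, hv in classes.items()}
    best = max(noisy, key=lambda n: hamming_sim(query, noisy[n]))
    print(f"bit-error rate {err:.0%}: predicted class {best}")
```

Running the sketch prints class A at every error rate: at a 20% bit-error rate the query-to-A similarity is still around 0.77 versus roughly 0.5 for class B, which is the redundancy argument the abstract makes in miniature.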