TY - GEN
T1 - Sequence Learning with Analog Neuromorphic Multi-Compartment Neurons and On-Chip Structural STDP
AU - Dietrich, Robin
AU - Spilger, Philipp
AU - Müller, Eric
AU - Schemmel, Johannes
AU - Knoll, Alois C.
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
Y1 - 2025
N2 - Neuromorphic computing is a candidate for advancing today’s AI systems towards fast and efficient online learning and inference. By exploiting biological principles, mixed-signal neuromorphic chips are well suited for emulating spiking neural networks (SNNs). Nevertheless, time-coded SNNs in particular tend to struggle with the noise, uncertainty, and heterogeneity introduced by analog neuromorphic hardware. We improve the robustness of the spiking hierarchical temporal memory (S-HTM) by removing its dependency on exact spike times, thereby enabling its deployment on the analog neuromorphic system BrainScaleS-2. Specifically, we introduce a new, adapted learning rule, implement it on-chip, and evaluate it in a fully neuromorphic experiment using analog multi-compartment neurons and synapses on BrainScaleS-2 to learn sequences of symbols. Our results demonstrate that, while the on-chip network generates some overlapping predictions, potentially leading to contextual ambiguity, it is still capable of learning new sequences quickly and robustly, in some cases even faster than the original simulated S-HTM. We further show that the system’s natural heterogeneity, caused by its analog components, can replace the artificial heterogeneity introduced in the simulated network. Overall, the proposed network for BrainScaleS-2 can learn the presented sequences reliably without requiring exact spike times, demonstrating its increased robustness to the noise caused by the system’s analog neurons and synapses.
AB - Neuromorphic computing is a candidate for advancing today’s AI systems towards fast and efficient online learning and inference. By exploiting biological principles, mixed-signal neuromorphic chips are well suited for emulating spiking neural networks (SNNs). Nevertheless, time-coded SNNs in particular tend to struggle with the noise, uncertainty, and heterogeneity introduced by analog neuromorphic hardware. We improve the robustness of the spiking hierarchical temporal memory (S-HTM) by removing its dependency on exact spike times, thereby enabling its deployment on the analog neuromorphic system BrainScaleS-2. Specifically, we introduce a new, adapted learning rule, implement it on-chip, and evaluate it in a fully neuromorphic experiment using analog multi-compartment neurons and synapses on BrainScaleS-2 to learn sequences of symbols. Our results demonstrate that, while the on-chip network generates some overlapping predictions, potentially leading to contextual ambiguity, it is still capable of learning new sequences quickly and robustly, in some cases even faster than the original simulated S-HTM. We further show that the system’s natural heterogeneity, caused by its analog components, can replace the artificial heterogeneity introduced in the simulated network. Overall, the proposed network for BrainScaleS-2 can learn the presented sequences reliably without requiring exact spike times, demonstrating its increased robustness to the noise caused by the system’s analog neurons and synapses.
KW - Analog Neuromorphic Hardware
KW - Multi-Compartment Neurons
KW - Sequence Learning
KW - Structural STDP
KW - Unsupervised Learning
UR - http://www.scopus.com/inward/record.url?scp=105000958168&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-82487-6_15
DO - 10.1007/978-3-031-82487-6_15
M3 - Conference contribution
AN - SCOPUS:105000958168
SN - 9783031824869
T3 - Lecture Notes in Computer Science
SP - 207
EP - 230
BT - Machine Learning, Optimization, and Data Science - 10th International Conference, LOD 2024, Revised Selected Papers
A2 - Nicosia, Giuseppe
A2 - Ojha, Varun
A2 - Giesselbach, Sven
A2 - Pardalos, Panos M.
A2 - Umeton, Renato
PB - Springer Science and Business Media Deutschland GmbH
T2 - 10th International Conference on Machine Learning, Optimization, and Data Science, LOD 2024
Y2 - 22 September 2024 through 25 September 2024
ER -