Capturing Uncertainty over Time for Spiking Neural Networks by Exploiting Conformal Prediction Sets

Daniel Scholz, Oliver Emonds, Felix Kreutz, Pascal Gerhards, Jiaxin Huang, Klaus Knobloch, Alois Knoll, Christian Mayr

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

There is great interest in harnessing the advantages of spiking neural networks, and an increasing share of research focuses on deploying such models. The problem of safe decision making is similar to that of classical neural networks. We apply spiking neural networks to time-series classification tasks, where their stateful nature is beneficial. We show that the well-known method of Conformal Prediction (CP) can distinguish between correct and wrong decisions in this setting, performing comparably to Evidential Deep Learning and Neural Network Ensembles while being less expensive. We argue that classification uncertainty over time should additionally be considered, yet it is not captured by the length of the prediction sets output by CP. Our main contribution addresses the fact that existing CP methods for classification do not account for this temporal aspect. Our method takes the prediction sets produced by existing conformal prediction approaches as input and extends them with a smoothed-length and combined-set algorithm. We apply our method to spiking neural network-based classifiers trained on four different time-series datasets and show that it yields a more suitable uncertainty metric at a given point in time than the unmodified CP set length.
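
To make the idea in the abstract more concrete, below is a minimal sketch (not the authors' implementation) of how per-timestep conformal prediction sets for a time-series classifier could be turned into a temporal uncertainty signal. The split-conformal calibration, the smoothing factor beta, and the running-union "combined set" are illustrative assumptions; the paper's smoothed-length and combined-set algorithm may differ in detail.

import numpy as np

def calibrate(cal_probs, cal_labels, alpha=0.1):
    # Split-conformal calibration: quantile of nonconformity scores 1 - p_true
    # on a held-out calibration set (cal_probs: (n, C), cal_labels: (n,)).
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level)

def prediction_set(probs, qhat):
    # Keep every class whose nonconformity score 1 - p is at most the threshold.
    return {c for c, p in enumerate(probs) if 1.0 - p <= qhat}

def uncertainty_over_time(probs_t, qhat, beta=0.7):
    # For each timestep, return the conformal set, an exponentially smoothed
    # set length, and a running union of past sets as a coarse temporal signal.
    smoothed, combined, out = None, set(), []
    for probs in probs_t:  # probs_t: iterable of per-timestep class probabilities
        s = prediction_set(probs, qhat)
        smoothed = float(len(s)) if smoothed is None else beta * smoothed + (1 - beta) * len(s)
        combined |= s
        out.append((s, smoothed, set(combined)))
    return out

In this sketch, a set length that stays large, or a running union that keeps growing, would indicate that the classifier's decision remains uncertain over time even if the instantaneous set is small.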

Original language: English
Title of host publication: Proceedings - 2024 International Conference on Machine Learning and Applications, ICMLA 2024
Editors: M. Arif Wani, Plamen Angelov, Feng Luo, Mitsunori Ogihara, Xintao Wu, Radu-Emil Precup, Ramin Ramezani, Xiaowei Gu
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 107-114
Number of pages: 8
ISBN (Electronic): 9798350374889
DOIs
State: Published - 2024
Event: 23rd IEEE International Conference on Machine Learning and Applications, ICMLA 2024 - Miami, United States
Duration: 18 Dec 2024 - 20 Dec 2024

Publication series

Name: Proceedings - 2024 International Conference on Machine Learning and Applications, ICMLA 2024

Conference

Conference: 23rd IEEE International Conference on Machine Learning and Applications, ICMLA 2024
Country/Territory: United States
City: Miami
Period: 18/12/24 - 20/12/24

Keywords

  • conformal prediction
  • safe AI
  • spiking neural networks
  • uncertainty quantification
