Luke Taylor, Andrew King, Nicol Harper
Spiking neural networks (SNNs), particularly the single-spike variant in which neurons spike at most once, are considerably more energy efficient than standard artificial neural networks (ANNs). However, single-spike SNNs are difficult to train due to their dynamic and non-differentiable nature, and current solutions are either slow or suffer from training instabilities. These networks have also been criticized for their limited computational applicability, such as being unsuitable for time-series datasets. We propose a new model for training single-spike SNNs which mitigates the aforementioned training issues and obtains competitive results across various image and neuromorphic datasets, with up to a $13.98\times$ training speedup and up to an $81\%$ reduction in spikes compared to the multi-spike SNN. Notably, our model performs on par with multi-spike SNNs in challenging tasks involving neuromorphic time-series datasets, demonstrating a broader computational role for single-spike SNNs than previously believed.
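To make the single-spike constraint concrete, here is a minimal illustrative sketch (not the paper's FastSNN model): a leaky integrate-and-fire neuron that is silenced after its first threshold crossing, so its entire output is one spike time (or its absence). The `threshold` and `leak` values are arbitrary assumptions for illustration.

```python
def single_spike_time(inputs, threshold=1.0, leak=0.9):
    """Return the timestep of the first threshold crossing, or None.

    inputs: per-timestep input currents. The membrane potential decays
    by `leak` each step; after the first (and only) spike the neuron
    stays silent, which is the defining constraint of single-spike SNNs.
    """
    v = 0.0
    for t, i_t in enumerate(inputs):
        v = leak * v + i_t      # leaky integration of input current
        if v >= threshold:
            return t            # spike exactly once, then go silent
    return None                 # no spike within the observation window
```

Because each neuron fires at most once, information is carried in spike timing rather than spike counts, which is what makes this variant so sparse (and hence energy efficient) relative to multi-spike SNNs.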
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Audio Classification | SHD | Percentage correct | 70.32 | FastSNN |
| Image Classification | Fashion-MNIST | Accuracy | 90.57 | FastSNN (CNN) |
| Image Classification | Fashion-MNIST | Accuracy | 89.05 | FastSNN (MLP) |
| Image Classification | N-MNIST | Accuracy | 95.91 | FastSNN |
| Image Classification | MNIST | Accuracy | 99.3 | FastSNN (CNN) |
| Image Classification | MNIST | Accuracy | 97.91 | FastSNN (MLP) |