Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


The Heidelberg spiking datasets for the systematic evaluation of spiking neural networks

Benjamin Cramer, Yannik Stradmann, Johannes Schemmel, Friedemann Zenke

Published: 2019-10-16 · Tasks: Audio Classification, General Classification

Abstract

Spiking neural networks are the basis of versatile and power-efficient information processing in the brain. Although we currently lack a detailed understanding of how these networks compute, recently developed optimization techniques allow us to instantiate increasingly complex functional spiking neural networks in-silico. These methods hold the promise to build more efficient non-von-Neumann computing hardware and will offer new vistas in the quest of unraveling brain circuit function. To accelerate the development of such methods, objective ways to compare their performance are indispensable. Presently, however, there are no widely accepted means for comparing the computational performance of spiking neural networks. To address this issue, we introduce two spike-based classification datasets, broadly applicable to benchmark both software and neuromorphic hardware implementations of spiking neural networks. To accomplish this, we developed a general audio-to-spiking conversion procedure inspired by neurophysiology. Further, we applied this conversion to an existing and a novel speech dataset. The latter is the free, high-fidelity, and word-level aligned Heidelberg digit dataset that we created specifically for this study. By training a range of conventional and spiking classifiers, we show that leveraging spike timing information within these datasets is essential for good classification accuracy. These results serve as the first reference for future performance comparisons of spiking neural networks.
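Each sample in a spike-based dataset like those described above is a list of (spike time, input unit) events rather than a dense array. A common first step for benchmarking conventional classifiers on such data is to bin spikes into a time-by-channel count matrix. The sketch below assumes this representation; the function name, the 700-channel default (SHD's input size), and the bin/duration parameters are illustrative choices, not part of the dataset's published interface:

```python
import numpy as np

def bin_spikes(times, units, n_units=700, duration=1.0, n_bins=100):
    """Bin spike events (time in seconds, input-unit index) into a
    dense [n_bins, n_units] spike-count matrix.

    SHD samples use 700 input channels from a cochlear model; the
    duration and bin count here are illustrative defaults."""
    counts = np.zeros((n_bins, n_units), dtype=np.float32)
    # Map each spike time to a bin index, clamping to the last bin.
    bin_idx = np.minimum(
        (np.asarray(times) / duration * n_bins).astype(int), n_bins - 1
    )
    # Accumulate counts; np.add.at handles repeated indices correctly.
    np.add.at(counts, (bin_idx, np.asarray(units)), 1.0)
    return counts

# Toy example: three spikes on two channels.
x = bin_spikes(times=[0.01, 0.02, 0.99], units=[5, 5, 10])
print(x.shape)   # (100, 700)
print(x.sum())   # 3.0
```

Note that this densification discards within-bin spike timing; the paper's finding that timing information is essential for good accuracy suggests bin width (or a timing-aware model such as a recurrent SNN) matters for results on these datasets.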

Results

Task                 | Dataset | Metric             | Value | Model
Audio Classification | SHD     | Percentage correct | 92.4  | CNN
Audio Classification | SHD     | Percentage correct | 83.2  | Recurrent SNN
Classification       | SHD     | Percentage correct | 92.4  | CNN
Classification       | SHD     | Percentage correct | 83.2  | Recurrent SNN

Related Papers

Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Neuromorphic Wireless Split Computing with Resonate-and-Fire Neurons (2025-06-24)
Fully Few-shot Class-incremental Audio Classification Using Multi-level Embedding Extractor and Ridge Regression Classifier (2025-06-23)
Adaptive Differential Denoising for Respiratory Sounds Classification (2025-06-03)
Spectrotemporal Modulation: Efficient and Interpretable Feature Representation for Classifying Speech, Music, and Environmental Sounds (2025-05-29)
Patient-Aware Feature Alignment for Robust Lung Sound Classification: Cohesion-Separation and Global Alignment Losses (2025-05-28)
4,500 Seconds: Small Data Training Approaches for Deep UAV Audio Classification (2025-05-21)