Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


End-to-End Environmental Sound Classification using a 1D Convolutional Neural Network

Sajjad Abdoli, Patrick Cardinal, Alessandro Lameiras Koerich

2019-04-18 · Environmental Sound Classification · Sound Classification · General Classification

Paper · PDF · Code (official)

Abstract

In this paper, we present an end-to-end approach for environmental sound classification based on a 1D Convolutional Neural Network (CNN) that learns a representation directly from the audio signal. Several convolutional layers are used to capture the signal's fine time structure and learn diverse filters that are relevant to the classification task. The proposed approach can deal with audio signals of any length, as it splits the signal into overlapped frames using a sliding window. Different architectures considering several input sizes are evaluated, including the initialization of the first convolutional layer with a Gammatone filterbank that models the human auditory filter response in the cochlea. The performance of the proposed end-to-end approach in classifying environmental sounds was assessed on the UrbanSound8k dataset, and the experimental results show that it achieves a mean accuracy of 89%. The proposed approach therefore outperforms most state-of-the-art approaches that use handcrafted features or 2D representations as input. Furthermore, the proposed approach has a small number of parameters compared to other architectures found in the literature, which reduces the amount of data required for training.
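The sliding-window step described in the abstract, which lets the network handle signals of any length, can be sketched as follows. This is a minimal illustration, not the authors' code; the function name `frame_signal` and the frame/hop sizes are assumptions chosen for the example.

```python
import numpy as np

def frame_signal(signal, frame_len, hop_len):
    """Split a 1D audio signal into overlapped frames with a sliding window.

    Hypothetical helper illustrating the paper's framing step; each frame
    becomes one fixed-length input to the 1D CNN, so clips of any duration
    can be classified by aggregating per-frame predictions.
    """
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop_len)
    frames = np.stack([signal[i * hop_len : i * hop_len + frame_len]
                       for i in range(n_frames)])
    return frames

# Example: a 2-second clip at 16 kHz, split into 0.5 s frames with 50% overlap
sig = np.random.randn(32000)
frames = frame_signal(sig, frame_len=8000, hop_len=4000)
print(frames.shape)  # (7, 8000)
```

At inference time, per-frame class scores would typically be averaged (or majority-voted) to produce a single prediction for the whole clip.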

Results

Task                               | Dataset      | Metric   | Value | Model
Audio Classification               | UrbanSound8K | Accuracy | 89    | 1DCNN
Environmental Sound Classification | UrbanSound8K | Accuracy | 89    | 1DCNN
Classification                     | UrbanSound8K | Accuracy | 89    | 1DCNN

Related Papers

USAD: Universal Speech and Audio Representation via Distillation (2025-06-23)
Acoustic scattering AI for non-invasive object classifications: A case study on hair assessment (2025-06-17)
Disentangling Dual-Encoder Masked Autoencoder for Respiratory Sound Classification (2025-06-12)
MUDAS: Mote-scale Unsupervised Domain Adaptation in Multi-label Sound Classification (2025-06-12)
Domain Adaptation Method and Modality Gap Impact in Audio-Text Models for Prototypical Sound Classification (2025-06-04)
Adaptive Differential Denoising for Respiratory Sounds Classification (2025-06-03)
General-purpose audio representation learning for real-world sound scenes (2025-06-01)
Patient Domain Supervised Contrastive Learning for Lung Sound Classification Using Mobile Phone (2025-05-29)