Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition

Chao-Han Huck Yang, Jun Qi, Samuel Yen-Chi Chen, Pin-Yu Chen, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee

Published: 2020-10-26
Tasks: Speech Recognition · Keyword Spotting · Automatic Speech Recognition (ASR) · Federated Learning
Links: Paper · PDF · Code (official)

Abstract

We propose a novel decentralized feature extraction approach in federated learning to address privacy-preservation issues for speech recognition. It is built upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction and a recurrent neural network (RNN) based end-to-end acoustic model (AM). To enhance model parameter protection in a decentralized architecture, input speech is first up-streamed to a quantum computing server, which extracts a Mel-spectrogram and encodes the corresponding convolutional features using a quantum circuit algorithm with random parameters. The encoded features are then down-streamed to the local RNN model for the final recognition. The proposed decentralized framework takes advantage of quantum learning to secure models and to avoid privacy leakage attacks. Tested on the Google Speech Commands Dataset, the proposed QCNN encoder attains a competitive accuracy of 95.12% in a decentralized model, which is better than previous architectures using centralized RNN models with convolutional features. We also conduct an in-depth study of different quantum circuit encoder architectures to provide insights into designing QCNN-based feature extractors. Neural saliency analyses demonstrate a correlation between the proposed QCNN features, class activation maps, and input spectrograms. We provide an implementation for future studies.
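The quantum-convolution step described above can be sketched classically. The following is a minimal NumPy statevector simulation (not the authors' implementation) of one quanvolutional filter: angle-encode a 2x2 spectrogram patch onto 4 qubits, apply a randomly parameterized rotation layer with CNOT entanglement, and read out a Pauli-Z expectation per qubit as four feature channels. The specific gate choices, encoding scale, and circuit depth here are illustrative assumptions.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def kron_all(mats):
    """Tensor product of a list of gates (qubit 0 leftmost)."""
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

def quanv_patch(patch, rand_thetas):
    """Encode a 2x2 patch on 4 qubits: RY(pi*x) angle encoding,
    a random RY layer, CNOTs on qubit pairs (0,1) and (2,3),
    then measure <Z> per qubit -> 4 output feature channels."""
    n = 4
    state = np.zeros(2 ** n)
    state[0] = 1.0                                   # start in |0000>
    state = kron_all([ry(np.pi * x) for x in patch.ravel()]) @ state
    state = kron_all([ry(t) for t in rand_thetas]) @ state  # random params
    state = kron_all([CNOT, CNOT]) @ state                  # entangle
    return np.array([
        state @ (kron_all([Z if i == q else I2 for i in range(n)]) @ state)
        for q in range(n)
    ])
```

Sliding this filter over a Mel-spectrogram with stride 2 would yield four downsampled feature maps, which in the paper's pipeline would then be down-streamed to the local RNN acoustic model. Because the circuit parameters are random and held by the server, a client cannot trivially invert the encoded features back to the raw speech.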

Results

Task: Keyword Spotting
Dataset: Google Speech Commands (10-keyword subset)
Metric: Accuracy (%)
Value: 95.12
Model: Quantum CNN

Related Papers

Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech (2025-07-17)
FedGA: A Fair Federated Learning Framework Based on the Gini Coefficient (2025-07-17)
A Distributed Generative AI Approach for Heterogeneous Multi-Domain Environments under Data Sharing Constraints (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning (2025-07-16)
Federated Learning in Open- and Closed-Loop EMG Decoding: A Privacy and Performance Perspective (2025-07-16)
Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)