Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Liquid Structural State-Space Models

Ramin Hasani, Mathias Lechner, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, Daniela Rus

Published: 2022-09-26
Tasks: Speech Recognition · Heart Rate Estimation · SpO2 Estimation · Long-Range Modeling · Time Series Analysis
Links: Paper · PDF · Code (official)

Abstract

A proper parametrization of the state transition matrices of linear state-space models (SSMs), followed by standard nonlinearities, enables them to efficiently learn representations from sequential data, establishing the state of the art on a large series of long-range sequence modeling benchmarks. In this paper, we show that we can improve further when a structural SSM such as S4 is given by a linear liquid time-constant (LTC) state-space model. LTC neural networks are causal continuous-time neural networks with an input-dependent state transition module, which enables them to adapt to incoming inputs at inference time. We show that by using the diagonal plus low-rank decomposition of the state transition matrix introduced in S4, together with a few simplifications, the LTC-based structural state-space model, dubbed Liquid-S4, achieves new state-of-the-art generalization across sequence modeling tasks with long-term dependencies such as image, text, audio, and medical time series, with an average performance of 87.32% on the Long-Range Arena benchmark. On the full raw Speech Commands recognition dataset, Liquid-S4 achieves 96.78% accuracy with a 30% reduction in parameter count compared to S4. The additional gain in performance is a direct result of Liquid-S4's kernel structure, which takes into account the similarities of the input sequence samples during training and inference.
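The defining idea in the abstract is the input-dependent state transition: in a standard linear SSM the hidden state evolves as x' = Ax + Bu, whereas in a liquid (LTC-style) SSM the input also modulates the transition itself, so the effective dynamics become (A + Bu)x + Bu. The following is a minimal illustrative sketch of that recurrence in discretized form, not the paper's actual convolutional kernel or its diagonal-plus-low-rank parametrization; the function names and the scalar-input simplification are my own assumptions.

```python
import numpy as np

def liquid_ssm_step(x, u, A, B):
    """One discretized step of a simplified liquid SSM.

    Standard linear SSM update would be:   x_next = A @ x + B * u
    Liquid/LTC variant (illustrative):     the input u also modulates the
    transition via the extra (B * u) * x term, so the state dynamics
    adapt to the incoming input rather than being fixed.
    """
    return A @ x + B * u + (B * u) * x

def run_liquid_ssm(inputs, A, B):
    """Roll the recurrence over a sequence of scalar inputs."""
    x = np.zeros(A.shape[0])
    for u in inputs:
        x = liquid_ssm_step(x, u, A, B)
    return x

# Tiny 1-state example (all values chosen arbitrarily for illustration).
A = np.array([[0.5]])
B = np.array([0.1])
state = run_liquid_ssm([1.0, 1.0], A, B)  # -> array([0.16])
```

Expanding the kernel of this recurrence over a sequence produces terms involving products of input samples at different time steps, which is the "similarities of the input sequence samples" structure the abstract credits for the performance gain.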

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Speech Recognition | Speech Commands | Accuracy (%) | 98.51 | Liquid-S4 |
| Electrocardiography (ECG) | BIDMC | MAE [bpm, session-wise] | 0.303 | Liquid-S4 |
| ECG Classification | BIDMC | MAE [bpm, session-wise] | 0.303 | Liquid-S4 |
| Photoplethysmography (PPG) | BIDMC | MAE [bpm, session-wise] | 0.303 | Liquid-S4 |
| Biomedical Information Retrieval | BIDMC | MAE [bpm, session-wise] | 0.066 | Liquid-S4 |
| Blood Pressure Estimation | BIDMC | MAE [bpm, session-wise] | 0.303 | Liquid-S4 |
| Medical Waveform Analysis | BIDMC | MAE [bpm, session-wise] | 0.303 | Liquid-S4 |

Related Papers

- Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
- NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech (2025-07-17)
- Emergence of Functionally Differentiated Structures via Mutual Information Optimization in Recurrent Neural Networks (2025-07-17)
- U-RWKV: Lightweight Medical Image Segmentation with Direction-Adaptive RWKV (2025-07-15)
- WhisperKit: On-device Real-time ASR with Billion-Scale Transformers (2025-07-14)
- LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models (2025-07-14)
- VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis (2025-07-08)
- A Hybrid Machine Learning Framework for Optimizing Crop Selection via Agronomic and Economic Forecasting (2025-07-06)