Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Continual Transformers: Redundancy-Free Attention for Online Inference

Lukas Hedegaard, Arian Bakhtiarnia, Alexandros Iosifidis

2022-01-17 · Action Detection · Audio Classification · Online Action Detection · Time Series Classification · Time Series Analysis

Paper · PDF · Code (official)

Abstract

Transformers in their common form are inherently limited to operate on whole token sequences rather than on one token at a time. Consequently, their use during online inference on time-series data entails considerable redundancy due to the overlap in successive token sequences. In this work, we propose novel formulations of the Scaled Dot-Product Attention, which enable Transformers to perform efficient online token-by-token inference on a continual input stream. Importantly, our modifications are purely to the order of computations, while the outputs and learned weights are identical to those of the original Transformer Encoder. We validate our Continual Transformer Encoder with experiments on the THUMOS14, TVSeries and GTZAN datasets with remarkable results: Our Continual one- and two-block architectures reduce the floating point operations per prediction by up to 63x and 2.6x, respectively, while retaining predictive performance.
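The core idea, reordering the computations of Scaled Dot-Product Attention so that each new token is processed exactly once, can be illustrated with a minimal sketch. This is not the authors' implementation: the class name and structure below are hypothetical, and only the single-output case is shown, where a fixed-size cache of the last n key/value tokens is updated step by step and attention is computed solely for the newest query instead of re-running attention over the whole overlapping window.

```python
import numpy as np

class ContinualSingleOutputAttention:
    """Sketch of single-output continual attention: cache the last n
    key/value vectors and attend with only the newest query."""

    def __init__(self, n, d):
        self.n, self.d = n, d
        self.keys = np.zeros((0, d))
        self.values = np.zeros((0, d))

    def step(self, q, k, v):
        # Append the newest key/value pair; drop the oldest once the
        # window of n tokens is full.
        self.keys = np.vstack([self.keys, k])[-self.n:]
        self.values = np.vstack([self.values, v])[-self.n:]
        # Scaled dot-product attention for the single new query only.
        scores = self.keys @ q / np.sqrt(self.d)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ self.values
```

Each `step` call costs O(n·d) rather than the O(n²·d) of recomputing full attention over the window, which is the source of the per-prediction FLOPs reductions reported below; the paper's retroactive formulation additionally keeps the outputs for earlier tokens up to date.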

Results

| Task | Dataset | Metric | Value | Model |
|------|---------|--------|-------|-------|
| Action Detection | TVSeries | mCAP | 88.6 | OadTR |
| Action Detection | TVSeries | mCAP | 88.3 | OadTR-b2 |
| Action Detection | TVSeries | mCAP | 88.1 | OadTR-b1 |
| Action Detection | TVSeries | mCAP | 87.7 | CoOadTR-b1 |
| Action Detection | TVSeries | mCAP | 87.6 | CoOadTR-b2 |
| Action Detection | THUMOS'14 | MFLOPs per pred | 1075.7 | OadTR-b2 |
| Action Detection | THUMOS'14 | mAP | 64.5 | OadTR-b2 |
| Action Detection | THUMOS'14 | MFLOPs per pred | 411.9 | CoOadTR-b2 |
| Action Detection | THUMOS'14 | mAP | 64.4 | CoOadTR-b2 |
| Action Detection | THUMOS'14 | MFLOPs per pred | 2513.5 | OadTR |
| Action Detection | THUMOS'14 | mAP | 64.2 | OadTR |
| Action Detection | THUMOS'14 | MFLOPs per pred | 673 | OadTR-b1 |
| Action Detection | THUMOS'14 | mAP | 63.9 | OadTR-b1 |
| Action Detection | THUMOS'14 | MFLOPs per pred | 10.6 | CoOadTR-b1 |

Related Papers

- Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
- The Power of Architecture: Deep Dive into Transformer Architectures for Long-Term Time Series Forecasting (2025-07-17)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Emergence of Functionally Differentiated Structures via Mutual Information Optimization in Recurrent Neural Networks (2025-07-17)
- Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
- Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)