Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

ATCN: Resource-Efficient Processing of Time Series on Edge

Mohammadreza Baharani, Hamed Tabkhi

2020-11-10 · Heartbeat Classification · Time Series Prediction · General Classification · Time Series · Time Series Analysis

Paper · PDF · Code (official)

Abstract

This paper presents a scalable deep learning model called Agile Temporal Convolutional Network (ATCN) for highly accurate, fast classification and time series prediction on resource-constrained embedded systems. ATCN is a family of compact networks with formalized hyperparameters that enable application-specific adjustments to the model architecture. It is primarily designed for embedded edge devices with very limited performance and memory, such as wearable biomedical devices and real-time reliability monitoring systems. ATCN makes fundamental improvements over mainstream temporal convolutional networks, including residual connections to increase network depth and accuracy, and depthwise-separable convolution to reduce the computational complexity of the model. As part of the present work, two ATCN families, T0 and T1, are also presented and evaluated on two classes of embedded processors: the Arm Cortex-M7 and Cortex-A57. An evaluation of the ATCN models against the best-in-class InceptionTime and MiniRocket shows that ATCN largely maintains accuracy while improving execution time across a broad range of embedded and cyber-physical applications that demand real-time processing on the embedded edge. At the same time, in contrast to existing solutions, ATCN is the first deep-learning-based time-series classifier that can run bare-metal on embedded microcontrollers (Cortex-M7) with limited computational performance and memory capacity while delivering state-of-the-art accuracy.
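The complexity reduction the abstract attributes to depthwise-separable convolution can be made concrete with a small sketch. This is not the authors' code; it is a minimal numpy illustration of the general technique: a standard 1-D convolution is factorized into a per-channel (depthwise) temporal filter followed by a 1×1 pointwise convolution that mixes channels, shrinking the parameter count from C_out·C_in·K to C_in·K + C_out·C_in. All names and shapes below are illustrative assumptions.

```python
import numpy as np

def conv1d(x, w):
    """Standard 1-D convolution (no padding).
    x: (C_in, T) input, w: (C_out, C_in, K) weights -> (C_out, T-K+1)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    y = np.zeros((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            # Each output channel mixes all input channels at every tap.
            y[o, t] = np.sum(w[o] * x[:, t:t + k])
    return y

def separable_conv1d(x, w_depth, w_point):
    """Depthwise-separable 1-D convolution.
    w_depth: (C_in, K) one temporal filter per input channel.
    w_point: (C_out, C_in) 1x1 convolution that mixes channels."""
    c_in, k = w_depth.shape
    t_out = x.shape[1] - k + 1
    depth = np.zeros((c_in, t_out))
    for c in range(c_in):
        for t in range(t_out):
            # Depthwise stage: filter each channel independently.
            depth[c, t] = np.dot(w_depth[c], x[c, t:t + k])
    # Pointwise stage: a matrix multiply over channels (1x1 conv).
    return w_point @ depth

# Parameter counts for an example layer (illustrative sizes).
c_in, c_out, k, T = 8, 16, 5, 32
std_params = c_out * c_in * k          # standard: 640 weights
sep_params = c_in * k + c_out * c_in   # separable: 168 weights
```

A separable layer is equivalent to a standard convolution whose kernel factorizes as `w[o, c, :] = w_point[o, c] * w_depth[c, :]`, which is why the depthwise-then-pointwise decomposition trades a modest loss of expressiveness for a large reduction in multiply-accumulates, a trade-off that matters on Cortex-M-class hardware.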

Related Papers

- MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
- The Power of Architecture: Deep Dive into Transformer Architectures for Long-Term Time Series Forecasting (2025-07-17)
- Emergence of Functionally Differentiated Structures via Mutual Information Optimization in Recurrent Neural Networks (2025-07-17)
- Data Augmentation in Time Series Forecasting through Inverted Framework (2025-07-15)
- D3FL: Data Distribution and Detrending for Robust Federated Learning in Non-linear Time-series Data (2025-07-15)
- Wavelet-Enhanced Neural ODE and Graph Attention for Interpretable Energy Forecasting (2025-07-14)
- Towards Interpretable Time Series Foundation Models (2025-07-10)
- MoFE-Time: Mixture of Frequency Domain Experts for Time-Series Forecasting Models (2025-07-09)