Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SOM-VAE: Interpretable Discrete Representation Learning on Time Series

Vincent Fortuin, Matthias Hüser, Francesco Locatello, Heiko Strathmann, Gunnar Rätsch

Published: 2018-06-06 · ICLR 2019
Tasks: Dimensionality Reduction · Representation Learning · Time Series Clustering · Clustering · Time Series · Time Series Analysis
Links: Paper · PDF · Code (official, plus community implementations)

Abstract

High-dimensional time series are common in many domains. Since human cognition is not optimized to work well in high-dimensional spaces, these areas could benefit from interpretable low-dimensional representations. However, most representation learning algorithms for time series data are difficult to interpret. This is due to non-intuitive mappings from data features to salient properties of the representation and non-smoothness over time. To address this problem, we propose a new representation learning framework building on ideas from interpretable discrete dimensionality reduction and deep generative modeling. This framework allows us to learn discrete representations of time series, which give rise to smooth and interpretable embeddings with superior clustering performance. We introduce a new way to overcome the non-differentiability in discrete representation learning and present a gradient-based version of the traditional self-organizing map algorithm that is more performant than the original. Furthermore, to allow for a probabilistic interpretation of our method, we integrate a Markov model in the representation space. This model uncovers the temporal transition structure, improves clustering performance even further and provides additional explanatory insights as well as a natural representation of uncertainty. We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real world medical time series application on the eICU data set. Our learned representations compare favorably with competitor methods and facilitate downstream tasks on the real world data.
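The discrete bottleneck described above can be made concrete with a small sketch. Below is a toy illustration (not the authors' code) of the SOM-style quantization step: an encoding is assigned to its nearest node in a 2-D grid of embeddings, and that winner and its grid neighbors are pulled toward the encoding, which is what gives the representation its smooth, interpretable topology. Grid size, latent dimension, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID_H, GRID_W, DIM = 4, 4, 8            # hypothetical 4x4 SOM grid, 8-dim latents
embeddings = rng.normal(size=(GRID_H, GRID_W, DIM))

def quantize(z_e):
    """Return the grid index (i, j) of the best-matching unit for encoding z_e."""
    dists = np.sum((embeddings - z_e) ** 2, axis=-1)   # (H, W) squared distances
    return np.unravel_index(np.argmin(dists), dists.shape)

def som_update(z_e, lr=0.1):
    """Pull the winner and its 4-connected grid neighbors toward z_e."""
    i, j = quantize(z_e)
    for di, dj in [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]:
        ni, nj = i + di, j + dj
        if 0 <= ni < GRID_H and 0 <= nj < GRID_W:
            embeddings[ni, nj] += lr * (z_e - embeddings[ni, nj])
    return (i, j)

# One update step moves the assigned node closer to the encoding.
z = rng.normal(size=DIM)
before = np.sum((embeddings[quantize(z)] - z) ** 2)
som_update(z)
after = np.sum((embeddings[quantize(z)] - z) ** 2)
assert after < before
```

In the full model this update is expressed as loss terms trained end to end by gradient descent (the `argmin` itself is not differentiable, which is the non-differentiability the paper addresses), and the sequence of winning grid indices over time is what the Markov model in the representation space is fit to.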

Results

Task | Dataset | Metric | Value | Model
Time Series Clustering | eICU Collaborative Research Database | NMI (physiology_24_hours) | 0.0421 | SOM-VAE-prob
Time Series Clustering | eICU Collaborative Research Database | NMI (physiology_6_hours) | 0.0474 | SOM-VAE-prob
Time Series Clustering | eICU Collaborative Research Database | NMI (physiology_12_hours) | 0.0444 | SOM-VAE
Time Series Clustering | eICU Collaborative Research Database | NMI (physiology_24_hours) | 0.0354 | SOM-VAE
Time Series Clustering | eICU Collaborative Research Database | NMI (physiology_6_hours) | 0.0407 | SOM-VAE
Time Series Analysis | eICU Collaborative Research Database | NMI (physiology_24_hours) | 0.0421 | SOM-VAE-prob
Time Series Analysis | eICU Collaborative Research Database | NMI (physiology_6_hours) | 0.0474 | SOM-VAE-prob
Time Series Analysis | eICU Collaborative Research Database | NMI (physiology_12_hours) | 0.0444 | SOM-VAE
Time Series Analysis | eICU Collaborative Research Database | NMI (physiology_24_hours) | 0.0354 | SOM-VAE
Time Series Analysis | eICU Collaborative Research Database | NMI (physiology_6_hours) | 0.0407 | SOM-VAE

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
Volatility Spillovers and Interconnectedness in OPEC Oil Markets: A Network-Based log-ARCH Approach (2025-07-20)
Tri-Learn Graph Fusion Network for Attributed Graph Clustering (2025-07-18)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
The Power of Architecture: Deep Dive into Transformer Architectures for Long-Term Time Series Forecasting (2025-07-17)
Emergence of Functionally Differentiated Structures via Mutual Information Optimization in Recurrent Neural Networks (2025-07-17)