Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Motion2Language, unsupervised learning of synchronized semantic motion segmentation

Karim Radouane, Andon Tchechmedjiev, Julien Lagarde, Sylvie Ranwez

2023-10-16 · Text Generation · Motion Segmentation · Motion Captioning · Semantic Segmentation

Paper · PDF · Code (official)

Abstract

In this paper, we investigate building a sequence-to-sequence architecture for motion-to-language translation and synchronization. The aim is to translate motion capture inputs into English natural-language descriptions, such that the descriptions are generated synchronously with the actions performed, enabling semantic segmentation as a byproduct, but without requiring synchronized training data. We propose a new recurrent formulation of local attention that is suited for synchronous/live text generation, as well as an improved motion encoder architecture better suited to smaller data and to synchronous generation. We evaluate both contributions in individual experiments on the KIT motion-language dataset, using the standard BLEU-4 metric as well as a simple semantic equivalence measure. In a follow-up experiment, we assess the quality of the synchronization of the generated text in our proposed approaches through multiple evaluation metrics. We find that the two contributions, to the attention mechanism and to the encoder architecture, additively improve not only the quality of the generated text (BLEU-4 and semantic equivalence) but also its synchronization. Our code is available at https://github.com/rd20karim/M2T-Segmentation/tree/main
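The abstract's key idea, local attention restricted to a window of motion frames whose center advances monotonically as words are emitted, can be sketched as follows. This is an illustrative NumPy toy, not the paper's exact recurrent formulation; the window size, the dot-product scoring, and the center-update rule are all assumptions made for the example.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def local_attention_step(encoder_states, query, center, window=5):
    """One decoding step of local attention: score only the motion frames
    inside a window around `center`, so words can be emitted synchronously
    as the motion unfolds (sketch, not the authors' exact recurrence)."""
    T, d = encoder_states.shape
    lo, hi = max(0, center - window), min(T, center + window + 1)
    local = encoder_states[lo:hi]           # (w, d) frames inside the window
    scores = local @ query                  # dot-product attention scores
    weights = softmax(scores)               # attention distribution over window
    context = weights @ local               # context vector for the decoder
    # advance the window center to the expected attended position,
    # keeping the attention monotonic over time
    new_center = lo + int(round(weights @ np.arange(hi - lo)))
    return context, new_center

rng = np.random.default_rng(0)
enc = rng.normal(size=(50, 8))   # 50 motion frames, 8-dim encoder features
q = rng.normal(size=8)           # decoder query at the current word step
ctx, c = local_attention_step(enc, q, center=10)
```

Because each step only attends to a bounded window and the center never jumps backward, the alignment between words and frames falls out of decoding, which is what enables the semantic segmentation byproduct described above.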

Results

Task              | Dataset             | Metric    | Value | Model
Motion Captioning | HumanML3D           | BERTScore | 37.2  | MLP+GRU
Motion Captioning | HumanML3D           | BLEU-4    | 23.4  | MLP+GRU
Motion Captioning | KIT Motion-Language | BERTScore | 42.1  | MLP+GRU
Motion Captioning | KIT Motion-Language | BLEU-4    | 25.4  | MLP+GRU
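For reference, the BLEU-4 metric reported in the table can be computed with plain modified n-gram precision and a brevity penalty. This is a minimal single-reference sketch of the standard metric, not the authors' evaluation script; the example sentences are invented.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(reference, hypothesis):
    """BLEU-4 with uniform weights over 1..4-gram precisions and a
    brevity penalty (single reference, no smoothing)."""
    precisions = []
    for n in range(1, 5):
        hyp, ref = ngrams(hypothesis, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in hyp.items())  # clipped counts
        precisions.append(overlap / max(1, sum(hyp.values())))
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(hypothesis) > len(reference) \
        else math.exp(1 - len(reference) / max(1, len(hypothesis)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

ref = "a person walks forward and then turns around".split()
hyp = "a person walks forward and then turns around".split()
print(round(bleu4(ref, hyp), 2))  # identical sentences score 1.0
```

Scores in the table are conventionally reported as percentages (e.g. 25.4 corresponds to a BLEU-4 of 0.254 on this scale).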

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
Mitigating Object Hallucinations via Sentence-Level Early Intervention (2025-07-16)