Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Rendezvous: Attention Mechanisms for the Recognition of Surgical Action Triplets in Endoscopic Videos

Chinedu Innocent Nwoye, Tong Yu, Cristians Gonzalez, Barbara Seeliger, Pietro Mascagni, Didier Mutter, Jacques Marescaux, Nicolas Padoy

Published: 2021-09-07 · Task: Action Triplet Recognition
Paper · PDF · Code (multiple implementations, including official)

Abstract

Out of all existing frameworks for surgical workflow analysis in endoscopic videos, action triplet recognition stands out as the only one aiming to provide truly fine-grained and comprehensive information on surgical activities. This information, presented as <instrument, verb, target> combinations, is highly challenging to identify accurately. Each triplet component can be difficult to recognize on its own; the task requires not only recognizing all three components simultaneously, but also correctly establishing the associations between them. To achieve this, we introduce a new model, Rendezvous (RDV), which recognizes triplets directly from surgical videos by leveraging attention at two different levels. We first introduce a new form of spatial attention to capture individual action triplet components in a scene, called the Class Activation Guided Attention Mechanism (CAGAM). This technique focuses the recognition of verbs and targets using activations resulting from instruments. To solve the association problem, our RDV model adds a new form of semantic attention inspired by Transformer networks, called the Multi-Head of Mixed Attention (MHMA). This technique uses several cross- and self-attentions to effectively capture relationships between instruments, verbs, and targets. We also introduce CholecT50, a dataset of 50 endoscopic videos in which every frame has been annotated with labels from 100 triplet classes. Our proposed RDV model improves the triplet prediction mean AP by over 9% compared to the state-of-the-art methods on this dataset.
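To make the "mixed attention" idea concrete, the sketch below shows how self- and cross-attention heads over the three component feature streams could be combined. This is a toy numpy illustration of the general mechanism, not the paper's actual MHMA architecture: the feature matrices, dimensions, and the choice of which stream queries which are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Standard scaled dot-product attention over (tokens, dim) arrays."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                       # (Tq, Tk) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over keys
    return weights @ v                                  # (Tq, dim)

def mixed_attention(instr, verb, target):
    """Toy 'mixed' attention: the verb stream attends to itself
    (self-attention) and to the instrument and target streams
    (cross-attention); the head outputs are concatenated."""
    heads = [
        scaled_dot_product_attention(verb, verb, verb),      # self-attention
        scaled_dot_product_attention(verb, instr, instr),    # cross: verb -> instrument
        scaled_dot_product_attention(verb, target, target),  # cross: verb -> target
    ]
    return np.concatenate(heads, axis=-1)

# Hypothetical 4-token, 8-dim feature streams for each triplet component.
rng = np.random.default_rng(0)
instr, verb, target = (rng.standard_normal((4, 8)) for _ in range(3))
out = mixed_attention(instr, verb, target)
print(out.shape)  # (4, 24): 4 tokens, three 8-dim heads concatenated
```

In the real model these streams would be learned class-activation-guided features and the heads would carry learned projections; the point here is only how cross- and self-attention outputs can be mixed into one representation.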

Results

| Task                 | Dataset   | Metric  | Value | Model                             |
|----------------------|-----------|---------|-------|-----------------------------------|
| Activity Recognition | CholecT50 | Mean AP | 29.9  | Rendezvous (TensorFlow v1)        |
| Activity Recognition | CholecT50 | Mean AP | 23.4  | Attention Tripnet (TensorFlow v1) |
| Action Recognition   | CholecT50 | Mean AP | 29.9  | Rendezvous (TensorFlow v1)        |
| Action Recognition   | CholecT50 | Mean AP | 23.4  | Attention Tripnet (TensorFlow v1) |
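The benchmark metric above is mean average precision over triplet classes. A minimal sketch of one common AP formulation (mean of the precision values at each true positive, per class, then averaged over classes) is shown below; the exact evaluation protocol used for CholecT50 may differ, so treat this as an illustration of the metric, not the official scorer.

```python
import numpy as np

def average_precision(y_true, scores):
    """AP for one class: rank predictions by score, then average the
    precision observed at each true positive. Assumes the class has at
    least one positive label (otherwise AP is undefined)."""
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(y_true)[order]
    hits = np.cumsum(y)                            # true positives so far
    precision = hits / np.arange(1, len(y) + 1)    # precision at each rank
    return precision[y == 1].mean()

def mean_average_precision(Y_true, Y_scores):
    """Mean AP across classes (one column per class), as in multi-label
    recognition benchmarks."""
    return float(np.mean([average_precision(Y_true[:, c], Y_scores[:, c])
                          for c in range(Y_true.shape[1])]))

# Tiny worked example: 3 frames, 2 hypothetical classes.
Y_true = np.array([[1, 0],
                   [0, 1],
                   [1, 1]])
Y_scores = np.array([[0.9, 0.1],
                     [0.8, 0.9],
                     [0.7, 0.5]])
print(mean_average_precision(Y_true, Y_scores))  # 0.9166...: mean of 5/6 and 1.0
```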

Related Papers

Federated EndoViT: Pretraining Vision Transformers via Federated Learning on Endoscopic Image Collections (2025-04-23)
Surgical Triplet Recognition via Diffusion Model (2024-06-19)
EndoViT: pretraining vision transformers on a large collection of endoscopic images (2024-04-03)
CholecTriplet2022: Show me a tool and tell me the triplet -- an endoscopic vision challenge for surgical action triplet detection (2023-02-13)
Rendezvous in Time: An Attention-based Temporal Fusion approach for Surgical Triplet Recognition (2022-11-30)
Why Deep Surgical Models Fail?: Revisiting Surgical Action Triplet Recognition through the Lens of Robustness (2022-09-18)
Dissecting Self-Supervised Learning Methods for Surgical Computer Vision (2022-07-01)
Data Splits and Metrics for Method Benchmarking on Surgical Action Triplet Datasets (2022-04-11)