Description
Triplet attention comprises three branches, each responsible for capturing cross-dimension interaction between the spatial dimensions and the channel dimension of the input. Given an input tensor of shape (C × H × W), each branch aggregates cross-dimensional interactive features between the channel dimension C and one of the spatial dimensions H or W.
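The three-branch design can be sketched in PyTorch as follows. This is a minimal illustration, not the reference implementation: the kernel size of 7 and the averaging of branch outputs follow the original paper, but the class names (`ZPool`, `AttentionGate`, `TripletAttention`) and the omission of extras such as a learnable no-spatial flag are choices made here for brevity.

```python
import torch
import torch.nn as nn


class ZPool(nn.Module):
    """Concatenate max- and mean-pooling along the channel axis (dim=1)."""
    def forward(self, x):
        return torch.cat([x.max(dim=1, keepdim=True)[0],
                          x.mean(dim=1, keepdim=True)], dim=1)


class AttentionGate(nn.Module):
    """Z-pool -> 7x7 conv -> batch norm -> sigmoid, applied as a gate."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(1)

    def forward(self, x):
        return x * torch.sigmoid(self.bn(self.conv(self.pool(x))))


class TripletAttention(nn.Module):
    """Three branches capture (C, H), (C, W), and (H, W) interactions."""
    def __init__(self):
        super().__init__()
        self.gate_ch = AttentionGate()  # channel-height interaction
        self.gate_cw = AttentionGate()  # channel-width interaction
        self.gate_hw = AttentionGate()  # plain spatial branch

    def forward(self, x):  # x: (B, C, H, W)
        # Branch 1: rotate so H plays the channel role -> (B, H, C, W)
        y1 = self.gate_ch(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)
        # Branch 2: rotate so W plays the channel role -> (B, W, H, C)
        y2 = self.gate_cw(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)
        # Branch 3: attend over the spatial plane directly
        y3 = self.gate_hw(x)
        # Simple averaging of the three rotated-back branch outputs
        return (y1 + y2 + y3) / 3.0
```

Because each branch only needs a two-channel Z-pooled map and a single 7×7 convolution, the module adds a near-negligible number of parameters, which is the main appeal of the method.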
Papers Using This Method
- Achieving 3D Attention via Triplet Squeeze and Excitation Block (2025-05-09)
- SNAT-YOLO: Efficient Cross-Layer Aggregation Network for Edge-Oriented Gangue Detection (2025-02-09)
- TANet: Triplet Attention Network for All-In-One Adverse Weather Image Restoration (2024-10-10)
- Optimization of Autonomous Driving Image Detection Based on RFAConv and Triplet Attention (2024-06-25)
- Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers (2024-02-07)
- HeTriNet: Heterogeneous Graph Triplet Attention Network for Drug-Target-Disease Interaction (2023-11-30)
- Triplet Attention Transformer for Spatiotemporal Predictive Learning (2023-10-28)
- MORE: Multi-Order RElation Mining for Dense Captioning in 3D Scenes (2022-03-10)
- Rendezvous: Attention Mechanisms for the Recognition of Surgical Action Triplets in Endoscopic Videos (2021-09-07)
- Coordinate Attention for Efficient Mobile Network Design (2021-03-04)
- Rotate to Attend: Convolutional Triplet Attention Module (2020-10-06)