Description
A Bottleneck Transformer Block is a block used in Bottleneck Transformers that replaces the spatial 3 × 3 convolution layer in a Residual Block with Multi-Head Self-Attention (MHSA).
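To make the substitution concrete, below is a minimal NumPy sketch of the idea: a bottleneck block where the two 1 × 1 convolutions (which act as per-position linear maps) sandwich a multi-head self-attention layer instead of a 3 × 3 convolution. This is an illustrative simplification, not the paper's implementation: it omits the relative position encodings, batch normalization, and nonlinearities used in the actual BoT block, and the function and parameter names (`bot_block`, `w_reduce`, `w_expand`, etc.) are invented for this example.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mhsa(x, wq, wk, wv, num_heads):
    """Multi-head self-attention over flattened spatial positions.
    x: (n, d) where n = height * width; wq, wk, wv: (d, d)."""
    n, d = x.shape
    dh = d // num_heads
    # Project and split into heads: (num_heads, n, dh)
    q = (x @ wq).reshape(n, num_heads, dh).transpose(1, 0, 2)
    k = (x @ wk).reshape(n, num_heads, dh).transpose(1, 0, 2)
    v = (x @ wv).reshape(n, num_heads, dh).transpose(1, 0, 2)
    # Scaled dot-product attention: each position attends to all positions
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh), axis=-1)
    # Merge heads back into (n, d)
    return (attn @ v).transpose(1, 0, 2).reshape(n, d)

def bot_block(x, w_reduce, wq, wk, wv, w_expand, num_heads=4):
    """Bottleneck block with MHSA replacing the spatial 3x3 convolution.
    x: (n, d_in); the 1x1 convs are per-position matrix multiplies."""
    h = x @ w_reduce                    # 1x1 conv: reduce channel width
    h = mhsa(h, wq, wk, wv, num_heads)  # replaces the 3x3 convolution
    y = h @ w_expand                    # 1x1 conv: restore channel width
    return x + y                        # residual connection

# Illustrative sizes: a 4x4 feature map with 32 channels, bottleneck width 16
rng = np.random.default_rng(0)
n, d_in, d = 16, 32, 16
x = rng.standard_normal((n, d_in))
out = bot_block(
    x,
    rng.standard_normal((d_in, d)) * 0.1,
    rng.standard_normal((d, d)) * 0.1,
    rng.standard_normal((d, d)) * 0.1,
    rng.standard_normal((d, d)) * 0.1,
    rng.standard_normal((d, d_in)) * 0.1,
)
print(out.shape)  # spatial size and channel count are preserved
```

Because attention is global, every output position aggregates information from the whole feature map, whereas a 3 × 3 convolution only sees a local neighborhood; this is the core trade the block makes.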
Papers Using This Method
- Robust Multimodal Survival Prediction with the Latent Differentiation Conditional Variational AutoEncoder (2025-03-12)
- Robust Multimodal Survival Prediction with Conditional Latent Differentiation Variational AutoEncoder (2025-01-01)
- Multi-scale Bottleneck Transformer for Weakly Supervised Multimodal Violence Detection (2024-05-08)
- Rock Classification Based on Residual Networks (2024-02-19)
- SVFAP: Self-supervised Video Facial Affect Perceiver (2023-12-31)
- Learning Bottleneck Transformer for Event Image-Voxel Feature Fusion based Classification (2023-08-23)
- Marine Debris Detection in Satellite Surveillance using Attention Mechanisms (2023-07-09)
- Cross-Domain Synthetic-to-Real In-the-Wild Depth and Normal Estimation for 3D Scene Understanding (2022-12-09)
- AGMB-Transformer: Anatomy-Guided Multi-Branch Transformer Network for Automated Evaluation of Root Canal Therapy (2021-05-02)
- Bottleneck Transformers for Visual Recognition (2021-01-27)