CAB

Contextual Attention Block

General · Introduced 2000 · 2 papers

Description

The Contextual Attention Block (CAB) is a plug-and-play module that models context awareness. It is simple, effective, and can be integrated into any feed-forward neural network.

CAB infers weights that scale the feature maps according to their causal influence on the scene, modeling the co-occurrence of different objects in the image.
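The source does not give an implementation, but the description above (weights inferred from context that multiply the feature maps) can be sketched as a channel-wise gating block. The sketch below is a hypothetical NumPy illustration: it pools each feature map into a global context vector, passes it through a small bottleneck MLP (weights `w1`, `w2` and the reduction ratio `r` are assumptions, not from the source), and uses sigmoid-activated weights to rescale the maps.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def contextual_attention_block(feats, w1, w2):
    """Hypothetical CAB-style gating sketch.

    feats: (C, H, W) feature maps.
    w1: (C // r, C) bottleneck projection (assumed, not from the source).
    w2: (C, C // r) expansion back to one weight per channel.
    """
    # Summarize each feature map into a global context vector.
    context = feats.mean(axis=(1, 2))        # shape (C,)
    # Small bottleneck MLP infers channel weights from the context.
    hidden = np.maximum(w1 @ context, 0.0)   # ReLU, shape (C // r,)
    weights = sigmoid(w2 @ hidden)           # shape (C,), values in (0, 1)
    # Multiply each feature map by its inferred contextual weight.
    return feats * weights[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feats = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = contextual_attention_block(feats, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the inferred weights lie in (0, 1), the block can only attenuate channels, never amplify them; the actual CAB may differ in this and other details.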

The CAB module can be placed at different bottlenecks of a network to infuse hierarchical context awareness into the model.

Papers Using This Method