Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks

Qilong Wang, Banggu Wu, Pengfei Zhu, Peihua Li, Wangmeng Zuo, Qinghua Hu

2019-10-08 · CVPR 2020 · Tasks: Dimensionality Reduction, Image Classification, Semantic Segmentation, Instance Segmentation, Object Detection
Paper · PDF · Code (official, plus community implementations)

Abstract

Recently, channel attention mechanisms have demonstrated great potential for improving the performance of deep convolutional neural networks (CNNs). However, most existing methods are dedicated to developing more sophisticated attention modules to achieve better performance, which inevitably increases model complexity. To overcome this trade-off between performance and complexity, this paper proposes an Efficient Channel Attention (ECA) module, which involves only a handful of parameters while bringing a clear performance gain. By dissecting the channel attention module in SENet, we empirically show that avoiding dimensionality reduction is important for learning channel attention, and that appropriate cross-channel interaction can preserve performance while significantly decreasing model complexity. We therefore propose a local cross-channel interaction strategy without dimensionality reduction, which can be efficiently implemented via $1D$ convolution. Furthermore, we develop a method to adaptively select the kernel size of the $1D$ convolution, which determines the coverage of local cross-channel interaction. The proposed ECA module is efficient yet effective: for example, against a ResNet50 backbone, our module adds 80 parameters (vs. 24.37M) and 4.7e-4 GFLOPs (vs. 3.86 GFLOPs), while boosting Top-1 accuracy by more than 2%. We extensively evaluate the ECA module on image classification, object detection, and instance segmentation with ResNet and MobileNetV2 backbones. The experimental results show that our module is more efficient than its counterparts while performing favorably against them.
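The mechanism described in the abstract can be sketched in a few lines of PyTorch: global average pooling produces a per-channel descriptor, a 1D convolution across channels (no dimensionality reduction) models local cross-channel interaction, and a sigmoid produces the channel weights. The adaptive kernel size follows the paper's mapping k = |log2(C)/γ + b/γ| rounded to the nearest odd value, with γ=2 and b=1; the class name `ECALayer` is illustrative, not taken from the official repository.

```python
import math

import torch
import torch.nn as nn


class ECALayer(nn.Module):
    """Efficient Channel Attention: GAP -> 1D conv across channels -> sigmoid gate."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptive kernel size: k = |log2(C)/gamma + b/gamma|, forced to be odd
        # so the 1D convolution is centered on each channel.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> channel descriptor of shape (N, C, 1, 1)
        y = self.avg_pool(x)
        # Treat channels as a 1D sequence: (N, 1, C) -> local cross-channel conv
        y = self.conv(y.squeeze(-1).transpose(-1, -2))
        # Back to (N, C, 1, 1) and squash into (0, 1) gating weights
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        # Rescale the input feature map channel-wise
        return x * y
```

For C = 64 channels this yields k = 3, so the whole module costs only a handful of parameters, consistent with the ~80 extra parameters the abstract reports for a full ResNet50.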

Results

Task | Dataset | Metric | Value | Model
--- | --- | --- | --- | ---
Object Detection | DSEC | mAP | 25.7 | ECANet
Object Detection | PKU-DDD17-Car | mAP50 | 82.2 | ECANet
Image Classification | ImageNet | GFLOPs | 10.83 | ECA-Net (ResNet-152)
Image Classification | ImageNet | GFLOPs | 7.35 | ECA-Net (ResNet-101)
Image Classification | ImageNet | GFLOPs | 3.86 | ECA-Net (ResNet-50)
Image Classification | ImageNet | GFLOPs | 0.32 | ECA-Net (MobileNetV2)

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)