Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Global Average Pooling

Computer Vision · Introduced 2000 · 4076 papers
Source Paper

Description

Global Average Pooling is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the softmax layer.

One advantage of global average pooling over fully connected layers is that it is more native to the convolution structure, enforcing a correspondence between feature maps and categories. The feature maps can therefore be interpreted directly as category confidence maps. Another advantage is that global average pooling has no parameters to optimize, so overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, making it more robust to spatial translations of the input.
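The operation described above can be sketched in a few lines of NumPy: average each feature map over its spatial dimensions to get one scalar per category, then feed the resulting vector to softmax. This is a minimal illustration, not any particular framework's implementation; the array shapes and function names are assumptions for the example.

```python
import numpy as np

def global_average_pool(feature_maps):
    """Average each feature map over its spatial dimensions.

    feature_maps: array of shape (channels, height, width), with one
    map per target category in the classification head.
    Returns a vector of shape (channels,), ready for softmax.
    """
    return feature_maps.mean(axis=(1, 2))

def softmax(x):
    # Numerically stable softmax over the pooled vector.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# Example: 3 categories, each with a 4x4 feature map.
maps = np.arange(48, dtype=float).reshape(3, 4, 4)
pooled = global_average_pool(maps)   # one scalar per category
probs = softmax(pooled)              # class probabilities, sum to 1
```

Note that the pooling step itself introduces no learnable weights, which is exactly why it cannot overfit: the mapping from feature maps to class scores is fixed averaging.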

Papers Using This Method

- Automated MRI Tumor Segmentation using hybrid U-Net with Transformer and Efficient Attention (2025-06-18)
- Detecting immune cells with label-free two-photon autofluorescence and deep learning (2025-06-17)
- Deploying and Evaluating Multiple Deep Learning Models on Edge Devices for Diabetic Retinopathy Detection (2025-06-14)
- SecONNds: Secure Outsourced Neural Network Inference on ImageNet (2025-06-13)
- Circumventing Backdoor Space via Weight Symmetry (2025-06-09)
- Analyzing Breast Cancer Survival Disparities by Race and Demographic Location: A Survival Analysis Approach (2025-06-08)
- Gradual Transition from Bellman Optimality Operator to Bellman Operator in Online Reinforcement Learning (2025-06-06)
- Synthetic Speech Source Tracing using Metric Learning (2025-06-03)
- PointODE: Lightweight Point Cloud Learning with Neural Ordinary Differential Equations on Edge (2025-05-31)
- Stepsize anything: A unified learning rate schedule for budgeted-iteration training (2025-05-30)
- ACM-UNet: Adaptive Integration of CNNs and Mamba for Efficient Medical Image Segmentation (2025-05-30)
- Optimal Weighted Convolution for Classification and Denosing (2025-05-30)
- Knowledge Distillation for Reservoir-based Classifier: Human Activity Recognition (2025-05-29)
- Leveraging Diffusion Models for Synthetic Data Augmentation in Protein Subcellular Localization Classification (2025-05-28)
- Intelligent Incident Hypertension Prediction in Obstructive Sleep Apnea (2025-05-27)
- Lung Nodule Segmentation: Exploring Data Efficiency and Advanced Architectures (2025-05-26)
- Structured Initialization for Vision Transformers (2025-05-26)
- Hierarchical-embedding autoencoder with a predictor (HEAP) as efficient architecture for learning long-term evolution of complex multi-scale physical systems (2025-05-24)
- SW-ViT: A Spatio-Temporal Vision Transformer Network with Post Denoiser for Sequential Multi-Push Ultrasound Shear Wave Elastography (2025-05-24)
- The Cell Must Go On: Agar.io for Continual Reinforcement Learning (2025-05-23)