Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Grid Sensitive

Computer Vision · Introduced 2000 · 102 papers
Source Paper

Description

Grid Sensitive is a trick for object detection introduced by YOLOv4. When decoding the coordinates $x$ and $y$ of the bounding box center, the original YOLOv3 computes them as

$$
\begin{aligned}
x &= s \cdot \left(g_x + \sigma\left(p_x\right)\right) \\
y &= s \cdot \left(g_y + \sigma\left(p_y\right)\right)
\end{aligned}
$$

where $\sigma$ is the sigmoid function, $g_x$ and $g_y$ are integers, and $s$ is a scale factor. Since the sigmoid outputs values strictly between 0 and 1, $x$ and $y$ can never be exactly equal to $s \cdot g_x$ or $s \cdot \left(g_x + 1\right)$. This makes it difficult to predict the centers of bounding boxes that lie exactly on the grid boundary. The problem can be addressed by changing the equations to
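As a quick numeric check of this limitation (a minimal sketch; the stride $s$, grid index $g_x$, and logit value are illustrative numbers, not from the paper):

```python
import math

def sigmoid(t: float) -> float:
    return 1.0 / (1.0 + math.exp(-t))

# Original YOLOv3 decoding: x = s * (g_x + sigmoid(p_x)).
# The sigmoid output lies strictly in (0, 1), so even a large logit
# only pushes x close to the next grid line s * (g_x + 1), never onto it.
s, g_x = 8.0, 3                # illustrative stride and grid index
x = s * (g_x + sigmoid(20.0))  # logit of 20 saturates the sigmoid
print(x < s * (g_x + 1))       # True: x stays strictly below 32.0
```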

$$
\begin{aligned}
x &= s \cdot \left(g_x + \alpha \cdot \sigma\left(p_x\right) - (\alpha - 1)/2\right) \\
y &= s \cdot \left(g_y + \alpha \cdot \sigma\left(p_y\right) - (\alpha - 1)/2\right)
\end{aligned}
$$

With $\alpha > 1$, the reachable range of the offset is stretched slightly past $[0, 1]$, making it easier for the model to predict bounding box centers located exactly on the grid boundary. The additional FLOPs introduced by Grid Sensitive are negligible.
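The two decoding rules can be compared directly (a sketch; $\alpha = 1.05$ is just an illustrative value slightly above 1, and the stride, grid index, and logits are made-up numbers):

```python
import math

def sigmoid(t: float) -> float:
    return 1.0 / (1.0 + math.exp(-t))

def decode_center(p: float, g: int, s: float, alpha: float = 1.0) -> float:
    """Decode one box-center coordinate from the raw logit p.

    alpha = 1.0 reproduces the original YOLOv3 rule
        x = s * (g + sigmoid(p));
    alpha > 1.0 is the Grid Sensitive rule
        x = s * (g + alpha * sigmoid(p) - (alpha - 1) / 2),
    which stretches the reachable offset range past both grid lines.
    """
    return s * (g + alpha * sigmoid(p) - (alpha - 1.0) / 2.0)

s, g = 8.0, 3                                    # illustrative values
plain = decode_center(20.0, g, s, alpha=1.0)     # just below 32.0
grid_sensitive = decode_center(20.0, g, s, alpha=1.05)
print(plain < 32.0)            # True: grid boundary unreachable
print(grid_sensitive >= 32.0)  # True: boundary now reachable
```

Note that for a centered prediction ($p = 0$, so $\sigma(p) = 0.5$) the two rules agree, since the $(\alpha - 1)/2$ shift exactly cancels the stretching at the midpoint.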

Papers Using This Method

- Pattern-Based Phase-Separation of Tracer and Dispersed Phase Particles in Two-Phase Defocusing Particle Tracking Velocimetry (2025-06-22)
- Event-Based Crossing Dataset (EBCD) (2025-03-21)
- WalnutData: A UAV Remote Sensing Dataset of Green Walnuts and Model Evaluation (2025-02-27)
- YOLOv4: A Breakthrough in Real-Time Object Detection (2025-02-06)
- Vision-Integrated LLMs for Autonomous Driving Assistance: Human Performance Comparison and Trust Evaluation (2025-02-06)
- SPFFNet: Strip Perception and Feature Fusion Spatial Pyramid Pooling for Fabric Defect Detection (2025-02-03)
- Efficient Object Detection of Marine Debris using Pruned YOLO Model (2025-01-27)
- Object Detection Approaches to Identifying Hand Images with High Forensic Values (2024-12-21)
- Exploring Machine Learning Engineering for Object Detection and Tracking by Unmanned Aerial Vehicle (UAV) (2024-12-19)
- UICE-MIRNet guided image enhancement for underwater object detection (2024-09-24)
- YOLO-Former: YOLO Shakes Hand With ViT (2024-01-11)
- HyperSense: Hyperdimensional Intelligent Sensing for Energy-Efficient Sparse Data Processing (2024-01-04)
- Toward Improving Robustness of Object Detectors Against Domain Shift (2023-12-02)
- YOLOv5s-BC: An improved YOLOv5s-based method for real-time apple detection (2023-11-10)
- Application of deep learning for livestock behaviour recognition: A systematic literature review (2023-10-20)
- Parking Spot Classification based on surround view camera system (2023-10-05)
- Randomize to Generalize: Domain Randomization for Runway FOD Detection (2023-09-23)
- Automatic Signboard Recognition in Low Quality Night Images (2023-08-17)
- YOLOBench: Benchmarking Efficient Object Detectors on Embedded Systems (2023-07-26)
- Group channel pruning and spatial attention distilling for object detection (2023-06-02)