Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Matrix NMS

Matrix Non-Maximum Suppression

Computer Vision · Introduced 2020 · 5 papers
Source Paper: SOLOv2: Dynamic and Fast Instance Segmentation

Description

Matrix NMS, or Matrix Non-Maximum Suppression, performs non-maximum suppression with parallel matrix operations in a single shot. It is motivated by Soft-NMS, which decays the other detection scores as a monotonically decreasing function $f(\text{iou})$ of their overlaps. By recursively decaying scores according to IoUs, highly overlapping detections are eventually eliminated by a minimum score threshold. However, this process is sequential, like traditional greedy NMS, and cannot be implemented in parallel.
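To make the sequential bottleneck concrete, here is a minimal NumPy sketch of Gaussian Soft-NMS (the function name, `sigma`, and the score threshold are illustrative defaults, not values from the source): each iteration must pick the current highest-scoring detection before any further score can be decayed, so the loop cannot be parallelized.

```python
import numpy as np

def soft_nms(scores, ious, sigma=0.5, score_thresh=0.001):
    """Sequential Gaussian Soft-NMS sketch: decays, rather than removes, overlaps.

    scores: (N,) confidence scores.
    ious:   (N, N) symmetric pairwise IoU matrix.
    """
    scores = scores.astype(float).copy()
    keep = []
    remaining = list(range(len(scores)))
    while remaining:
        # Data dependency: the next pick needs all decays from the previous
        # picks, so the loop is inherently sequential.
        m = max(remaining, key=lambda i: scores[i])
        if scores[m] < score_thresh:
            break
        keep.append(m)
        remaining.remove(m)
        for i in remaining:
            # Gaussian f(iou) = exp(-iou^2 / sigma) decay of the other scores.
            scores[i] *= np.exp(-(ious[m, i] ** 2) / sigma)
    return keep, scores
```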

Matrix NMS views this process from another perspective, by considering how a predicted mask $m_j$ is suppressed. For $m_j$, its decay factor is affected by: (a) the penalty of each prediction $m_i$ on $m_j$ ($s_i > s_j$), where $s_i$ and $s_j$ are the confidence scores; and (b) the probability of $m_i$ itself being suppressed. For (a), the penalty of each prediction $m_i$ on $m_j$ is easily computed as $f(\text{iou}_{i,j})$. For (b), the probability of $m_i$ being suppressed is not straightforward to compute. However, it usually correlates positively with the IoUs, so it is directly approximated by the most overlapped prediction on $m_i$ as

$$f\left(\text{iou}_{\cdot, i}\right) = \min_{\forall s_k > s_i} f\left(\text{iou}_{k, i}\right)$$

To this end, the final decay factor becomes

$$\operatorname{decay}_j = \min_{\forall s_i > s_j} \frac{f\left(\text{iou}_{i, j}\right)}{f\left(\text{iou}_{\cdot, i}\right)}$$

and the updated score is computed by $s_j = s_j \cdot \operatorname{decay}_j$. The authors consider the two simplest decreasing functions: linear, $f(\text{iou}_{i,j}) = 1 - \text{iou}_{i,j}$, and Gaussian, $f(\text{iou}_{i,j}) = \exp\left(-\frac{\text{iou}_{i,j}^2}{\sigma}\right)$.
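Putting the formulas together, the whole update can be expressed with matrix operations and no explicit loop. Below is a minimal NumPy sketch (names and defaults are illustrative), assuming the scores are already sorted in descending order and a pairwise IoU matrix has been precomputed:

```python
import numpy as np

def matrix_nms(scores, ious, method="gauss", sigma=0.5):
    """Decay detection scores in one shot with matrix operations.

    scores: (N,) confidences, sorted in descending order.
    ious:   (N, N) symmetric pairwise IoU matrix in the same order.
    """
    n = len(scores)
    # Keep only iou_{i,j} with s_i > s_j: after sorting, row i < column j.
    ious = np.triu(ious, k=1)
    # f(iou_{.,i}) is driven by the most overlapped higher-scoring prediction
    # for each i: the column-wise max IoU, broadcast back across row i.
    ious_cmax = np.tile(ious.max(axis=0)[:, None], (1, n))
    if method == "gauss":
        # Ratio of Gaussians f(iou) = exp(-iou^2 / sigma).
        decay = np.exp(-(ious ** 2 - ious_cmax ** 2) / sigma)
    else:
        # Ratio of linear penalties f(iou) = 1 - iou.
        decay = (1 - ious) / (1 - ious_cmax)
    # decay_j = min over all higher-scoring i of f(iou_{i,j}) / f(iou_{.,i}).
    decay = decay.min(axis=0)
    return scores * decay
```

The `np.triu` keeps only entries where $s_i > s_j$, the column-wise maximum recovers the most overlapped higher-scoring prediction for each $m_i$, and the final column-wise minimum implements the $\min_{\forall s_i > s_j}$ — all without any sequential dependency.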

Papers Using This Method

- PP-YOLOE: An evolved version of YOLO (2022-03-30)
- In Defense of Kalman Filtering for Polyp Tracking from Colonoscopy Videos (2022-01-27)
- PP-YOLOv2: A Practical Object Detector (2021-04-21)
- PP-YOLO: An Effective and Efficient Implementation of Object Detector (2020-07-23)
- SOLOv2: Dynamic and Fast Instance Segmentation (2020-03-23)