Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


CondConv

Computer Vision · Introduced 2019 · 5 papers
Source Paper

Description

CondConv, or Conditionally Parameterized Convolution, is a type of convolution that learns specialized convolutional kernels for each example. In particular, we parameterize the convolutional kernels in a CondConv layer as a linear combination of $n$ experts, $(\alpha_1 W_1 + \ldots + \alpha_n W_n) * x$, where the routing weights $\alpha_1, \ldots, \alpha_n$ are functions of the input learned through gradient descent. To efficiently increase the capacity of a CondConv layer, developers can increase the number of experts. This can be more computationally efficient than increasing the size of the convolutional kernel itself, because the kernel is applied at many different positions within the input, while the experts are combined only once per input.
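The mechanism above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it assumes a single-channel input, sigmoid routing over the globally pooled input (as in the paper's routing function), and a hypothetical `condconv2d` helper; a real layer would operate on batched multi-channel tensors.

```python
import numpy as np

def condconv2d(x, experts, routing_w, routing_b):
    """Sketch of a CondConv layer for a single-channel 2D input.

    x         : (H, W) input
    experts   : (n, k, k) expert kernels W_1..W_n
    routing_w : (n,) per-expert routing weights on the pooled input
    routing_b : (n,) per-expert routing biases
    """
    n, k, _ = experts.shape
    # Routing: sigmoid of a linear function of the global-average-pooled input
    pooled = x.mean()
    alpha = 1.0 / (1.0 + np.exp(-(routing_w * pooled + routing_b)))  # (n,)
    # Combine the experts ONCE per input -- the key efficiency point:
    # one k x k kernel is built, then applied at every spatial position.
    kernel = np.tensordot(alpha, experts, axes=1)  # (k, k)
    # Valid cross-correlation with the combined kernel
    H, W = x.shape
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * kernel)
    return out
```

By linearity of convolution, this is equivalent to running all $n$ expert convolutions and mixing their outputs, but the mixing here happens in weight space at the cost of a single convolution.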

Papers Using This Method

- Frequency Dynamic Convolution for Dense Image Prediction (2025-03-24)
- Collaboration of Experts: Achieving 80% Top-1 Accuracy on ImageNet with 100M FLOPs (2021-07-08)
- Extending Conditional Convolution Structures for Enhancing Multitasking Continual Learning (2020-12-07)
- WeightNet: Revisiting the Design Space of Weight Networks (2020-07-23)
- CondConv: Conditionally Parameterized Convolutions for Efficient Inference (2019-04-10)