Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Ghost Bottleneck

Computer Vision · Introduced 2020 · 24 papers
Source Paper

Description

A Ghost Bottleneck is a skip-connection block, similar to the basic residual block in ResNet in which several convolutional layers and a shortcut are integrated, but it stacks two Ghost modules instead of ordinary convolutions. It was proposed as part of the GhostNet CNN architecture.
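To make the building block concrete, here is a minimal NumPy sketch of a single Ghost module: a primary pointwise convolution produces a small set of "intrinsic" channels, and a cheap per-channel linear operation derives the remaining "ghost" channels from them. The `ghost_module` function, the random weights, and the 4-neighbour average standing in for the real depthwise 3x3 "cheap operation" are all illustrative assumptions, not the reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ghost_module(x, out_channels, ratio=2):
    """Toy Ghost module on a (C, H, W) feature map.

    A primary pointwise (1x1) convolution produces ceil(out/ratio)
    intrinsic channels; a cheap per-channel linear op then derives the
    remaining 'ghost' channels from them, and the two are concatenated.
    """
    c, h, w = x.shape
    intrinsic = -(-out_channels // ratio)  # ceil division
    # Primary 1x1 conv == a channel-mixing matrix multiply.
    w_primary = rng.standard_normal((intrinsic, c)) * 0.1
    primary = np.tensordot(w_primary, x, axes=([1], [0]))  # (intrinsic, H, W)
    # Cheap-op stand-in: a per-channel neighbour average (the real module
    # uses a depthwise 3x3 convolution here).
    ghost = 0.25 * (primary
                    + np.roll(primary, 1, axis=1)
                    + np.roll(primary, 1, axis=2)
                    + np.roll(primary, -1, axis=1))
    return np.concatenate([primary, ghost], axis=0)[:out_channels]

x = rng.standard_normal((16, 8, 8))
y = ghost_module(x, out_channels=32)
print(y.shape)  # (32, 8, 8)
```

The point of the module is that only the intrinsic half of the output pays for a full channel-mixing convolution; the ghost half costs one cheap per-channel op.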

The first Ghost module acts as an expansion layer, increasing the number of channels; the ratio of output channels to input channels is referred to as the expansion ratio. The second Ghost module reduces the number of channels to match the shortcut path, and the shortcut is then connected between the inputs and outputs of these two Ghost modules. Batch normalization (BN) and a ReLU nonlinearity are applied after each layer, except that ReLU is not used after the second Ghost module, as suggested by MobileNetV2.

The bottleneck described above applies when stride=1. For stride=2, the shortcut path is implemented by a downsampling layer, and a depthwise convolution with stride=2 is inserted between the two Ghost modules. In practice, the primary convolution in each Ghost module here is a pointwise convolution, for efficiency.
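The channel and stride arithmetic above can be sketched end to end. This is a toy NumPy version, assuming a simplified `ghost_module` helper; BN and ReLU are omitted, spatial subsampling stands in for the stride-2 depthwise convolution, and a random pointwise projection stands in for the shortcut's downsampling layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def ghost_module(x, out_channels, ratio=2):
    """Toy Ghost module: primary 1x1 conv plus a cheap per-channel op."""
    c, h, w = x.shape
    intrinsic = -(-out_channels // ratio)  # ceil division
    primary = np.tensordot(rng.standard_normal((intrinsic, c)) * 0.1, x,
                           axes=([1], [0]))
    ghost = 0.5 * (primary + np.roll(primary, 1, axis=1))  # cheap-op stand-in
    return np.concatenate([primary, ghost], axis=0)[:out_channels]

def ghost_bottleneck(x, out_channels, expansion_ratio=3, stride=1):
    """Channel flow of a Ghost bottleneck (BN/ReLU omitted for brevity).

    stride=1: expand -> reduce -> add identity shortcut.
    stride=2: a stride-2 depthwise step sits between the two Ghost
    modules, and the shortcut is downsampled to match.
    """
    c, h, w = x.shape
    hidden = c * expansion_ratio          # first Ghost module expands
    y = ghost_module(x, hidden)
    if stride == 2:
        # Stand-in for the stride-2 depthwise conv: spatial subsampling.
        y = y[:, ::2, ::2]
        shortcut = x[:, ::2, ::2]         # downsampling layer on the shortcut
    else:
        shortcut = x
    y = ghost_module(y, out_channels)     # second Ghost module reduces
    # Project the shortcut if channel counts differ (pointwise-conv stand-in).
    if shortcut.shape[0] != out_channels:
        proj = rng.standard_normal((out_channels, shortcut.shape[0])) * 0.1
        shortcut = np.tensordot(proj, shortcut, axes=([1], [0]))
    return y + shortcut

x = rng.standard_normal((16, 8, 8))
print(ghost_bottleneck(x, 16, stride=1).shape)  # (16, 8, 8)
print(ghost_bottleneck(x, 24, stride=2).shape)  # (24, 4, 4)
```

Note how the stride=1 path preserves the input shape so the identity shortcut adds directly, while the stride=2 path halves the spatial resolution on both branches before the addition.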

Papers Using This Method

Cross-video Identity Correlating for Person Re-identification Pre-training (2024-09-27)
A Lightweight Insulator Defect Detection Model Based on Drone Images (2024-08-26)
IDD-YOLOv5: A Lightweight Insulator Defect Real-time Detection Algorithm (2024-08-19)
LiteYOLO-ID: A Lightweight Object Detection Network for Insulator Defect Detection (2024-06-24)
Multimodal Emotion Recognition based on Facial Expressions, Speech, and EEG (2024-06-11)
Ghost-Stereo: GhostNet-based Cost Volume Enhancement and Aggregation for Stereo Matching Networks (2024-05-23)
Short-Term Memory Convolutions (2023-02-08)
GhostNetV2: Enhance Cheap Operation with Long-Range Attention (2022-11-23)
RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization (2022-11-11)
Network Amplification With Efficient MACs Allocation (2022-07-01)
YOLOv5s-GTB: light-weighted and improved YOLOv5s for bridge crack detection (2022-06-03)
MoCoViT: Mobile Convolutional Vision Transformer (2022-05-25)
Efficient Convolutional Neural Networks on Raspberry Pi for Image Classification (2022-04-02)
ThreshNet: An Efficient DenseNet Using Threshold Mechanism to Reduce Connections (2022-01-09)
GhostShiftAddNet: More Features from Energy-Efficient Operations (2021-09-20)
Greedy Network Enlarging (2021-07-31)
AdaFuse: Adaptive Temporal Fusion Network for Efficient Action Recognition (2021-02-10)
Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet (2021-01-28)
GhostSR: Learning Ghost Features for Efficient Image Super-Resolution (2021-01-21)
A Multi-task Joint Framework for Real-time Person Search (2020-12-11)