Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


GhostNet

Computer Vision · Introduced 2020 · 22 papers
Source Paper: GhostNet: More Features from Cheap Operations (CVPR 2020)

Description

GhostNet is a convolutional neural network built from Ghost modules, which generate more feature maps from fewer parameters, allowing for greater efficiency.
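To make the efficiency claim concrete, the parameter savings of a Ghost module can be estimated with a back-of-the-envelope count: a primary convolution produces only a fraction 1/s of the output channels, and cheap depthwise operations generate the remaining "ghost" features. This is a sketch; the function names and the 960-channel example are illustrative, not taken from this page.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def ghost_module_params(c_in, c_out, k=1, d=3, s=2):
    """Parameters of a Ghost module: a primary k x k convolution
    producing c_out / s intrinsic feature maps, followed by cheap
    depthwise d x d operations generating the (s - 1) * c_out / s
    remaining "ghost" feature maps."""
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k    # ordinary convolution
    cheap = intrinsic * (s - 1) * d * d   # depthwise cheap operations
    return primary + cheap

# Replacing a 1x1 convolution mapping 960 -> 960 channels:
standard = conv_params(960, 960, 1)                    # 921600
ghost = ghost_module_params(960, 960, k=1, d=3, s=2)   # 465120
print(standard, ghost)  # the Ghost module needs roughly 1/s the parameters
```

For s=2 the module uses about half the parameters of the standard convolution it replaces, which is where the "more features from fewer parameters" trade-off comes from.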

GhostNet consists mainly of a stack of Ghost bottlenecks that use Ghost modules as their building block. The first layer is a standard convolutional layer with 16 filters, followed by a series of Ghost bottlenecks with gradually increasing channel counts. These bottlenecks are grouped into stages according to the sizes of their input feature maps. All Ghost bottlenecks use stride 1, except the last one in each stage, which uses stride 2. Finally, global average pooling and a convolutional layer transform the feature maps into a 1280-dimensional feature vector for classification. A squeeze-and-excitation (SE) module is also applied to the residual path in some Ghost bottlenecks.
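The stage layout described above can be sketched as a small configuration helper: every bottleneck in a stage keeps stride 1 except the last, which downsamples with stride 2. The channel numbers below are illustrative placeholders, not the exact table from the GhostNet paper.

```python
# Each stage is a list of output-channel counts for its Ghost bottlenecks
# (illustrative values only, not the paper's architecture table).
stages = [
    [24, 24],
    [40, 40],
    [80, 80, 80],
]

def stage_strides(stage):
    """Per the description: all bottlenecks in a stage use stride 1,
    except the last one, which downsamples with stride 2."""
    return [1] * (len(stage) - 1) + [2]

for stage in stages:
    # Pair each bottleneck's channel count with its stride.
    print(list(zip(stage, stage_strides(stage))))
```

Because only the final bottleneck of each stage has stride 2, the spatial resolution is halved exactly once per stage, which is why the stages are naturally grouped by input feature-map size.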

In contrast to MobileNetV3, GhostNet does not use the hard-swish nonlinearity, because of its high latency.

Papers Using This Method

- Cross-video Identity Correlating for Person Re-identification Pre-training (2024-09-27)
- A Lightweight Insulator Defect Detection Model Based on Drone Images (2024-08-26)
- Multimodal Emotion Recognition based on Facial Expressions, Speech, and EEG (2024-06-11)
- Ghost-Stereo: GhostNet-based Cost Volume Enhancement and Aggregation for Stereo Matching Networks (2024-05-23)
- Short-Term Memory Convolutions (2023-02-08)
- GhostNetV2: Enhance Cheap Operation with Long-Range Attention (2022-11-23)
- RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization (2022-11-11)
- Network Amplification With Efficient MACs Allocation (2022-07-01)
- YOLOv5s-GTB: light-weighted and improved YOLOv5s for bridge crack detection (2022-06-03)
- MoCoViT: Mobile Convolutional Vision Transformer (2022-05-25)
- Efficient Convolutional Neural Networks on Raspberry Pi for Image Classification (2022-04-02)
- ThreshNet: An Efficient DenseNet Using Threshold Mechanism to Reduce Connections (2022-01-09)
- GhostShiftAddNet: More Features from Energy-Efficient Operations (2021-09-20)
- Greedy Network Enlarging (2021-07-31)
- AdaFuse: Adaptive Temporal Fusion Network for Efficient Action Recognition (2021-02-10)
- Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet (2021-01-28)
- GhostSR: Learning Ghost Features for Efficient Image Super-Resolution (2021-01-21)
- A Multi-task Joint Framework for Real-time Person Search (2020-12-11)
- Real-time Semantic Segmentation with Context Aggregation Network (2020-11-02)
- Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets (2020-10-28)