Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Boosting Domain Generalized and Adaptive Detection with Diffusion Models: Fitness, Generalization, and Transferability

Boyong He, Yuxiang Ji, Zhuoyue Tan, Liaoni Wu

2025-06-26 · Domain Generalization · Robust Object Detection
Paper · PDF · Code (official)

Abstract

Detectors often suffer a performance drop due to the domain gap between training and testing data. Recent methods apply diffusion models to domain generalization (DG) and domain adaptation (DA) tasks, but they still struggle with large inference costs and have not yet fully leveraged the capabilities of diffusion models. We propose to tackle these problems by extracting intermediate features from a single-step diffusion process, improving feature collection and fusion to reduce inference time by 75% while enhancing performance on source domains (i.e., Fitness). We then construct an object-centered auxiliary branch by applying box-masked images with class prompts to extract robust, domain-invariant features that focus on objects. We also apply a consistency loss to align the auxiliary and ordinary branches, balancing fitness and generalization while preventing overfitting and improving performance on target domains (i.e., Generalization). Furthermore, within a unified framework, standard detectors are guided by diffusion detectors through feature-level and object-level alignment on source domains (for DG) and unlabeled target domains (for DA), thereby improving cross-domain detection performance (i.e., Transferability). Our method achieves competitive results on 3 DA benchmarks and 5 DG benchmarks. Additionally, experiments on the COCO generalization benchmark demonstrate that our method maintains significant advantages and shows remarkable efficiency under large domain shifts and in low-data scenarios. Our work shows the superiority of applying diffusion models to domain generalized and adaptive detection tasks and offers valuable insights for visual perception tasks across diverse domains. The code is available at https://github.com/heboyong/Fitness-Generalization-Transferability.
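Two of the ideas in the abstract lend themselves to a small illustration: the box-masked inputs for the object-centered auxiliary branch, and the consistency loss that aligns the auxiliary and ordinary branches. The sketch below is illustrative only, assuming the paper's actual pipeline differs; the function names, the simple zero-masking, and the plain mean-squared alignment loss are assumptions, not the authors' implementation.

```python
import numpy as np

def box_mask_image(image, boxes):
    """Keep only pixels inside the given (x1, y1, x2, y2) boxes, zeroing the
    rest, so an auxiliary branch sees object-centered inputs (illustrative).
    image: array of shape (C, H, W)."""
    masked = np.zeros_like(image)
    for (x1, y1, x2, y2) in boxes:
        masked[:, y1:y2, x1:x2] = image[:, y1:y2, x1:x2]
    return masked

def consistency_loss(aux_feats, ord_feats):
    """Mean-squared alignment between auxiliary- and ordinary-branch features
    (one plausible choice of consistency loss; the paper's exact form may differ)."""
    return float(np.mean((aux_feats - ord_feats) ** 2))
```

In a training loop, the masked image would feed the auxiliary branch while the full image feeds the ordinary branch, and the consistency term would be added to the detection loss with some weighting factor.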

Results

Task             | Dataset    | Metric   | Value | Model
Object Detection | Cityscapes | mPC [AP] | 27.4  | FGT (SD-1.5 Backbone)
Object Detection | Cityscapes | mPC [AP] | 22.1  | FGT (R101, Faster RCNN)

Related Papers

Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
From Physics to Foundation Models: A Review of AI-Driven Quantitative Remote Sensing Inversion (2025-07-11)
Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion (2025-07-08)
Prompt-Free Conditional Diffusion for Multi-object Image Augmentation (2025-07-08)
Integrated Structural Prompt Learning for Vision-Language Models (2025-07-08)