Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Exploring Sequence Feature Alignment for Domain Adaptive Detection Transformers

Wen Wang, Yang Cao, Jing Zhang, Fengxiang He, Zheng-Jun Zha, Yonggang Wen, Dacheng Tao

2021-07-27 · Source-Free Object Detection · Robust Object Detection · Object Detection · Domain Adaptation

Paper · PDF · Code (official)

Abstract

Detection transformers have recently shown promising object detection results and attracted increasing attention. However, how to develop effective domain adaptation techniques to improve its cross-domain performance remains unexplored and unclear. In this paper, we delve into this topic and empirically find that direct feature distribution alignment on the CNN backbone only brings limited improvements, as it does not guarantee domain-invariant sequence features in the transformer for prediction. To address this issue, we propose a novel Sequence Feature Alignment (SFA) method that is specially designed for the adaptation of detection transformers. Technically, SFA consists of a domain query-based feature alignment (DQFA) module and a token-wise feature alignment (TDA) module. In DQFA, a novel domain query is used to aggregate and align global context from the token sequence of both domains. DQFA reduces the domain discrepancy in global feature representations and object relations when deploying in the transformer encoder and decoder, respectively. Meanwhile, TDA aligns token features in the sequence from both domains, which reduces the domain gaps in local and instance-level feature representations in the transformer encoder and decoder, respectively. Besides, a novel bipartite matching consistency loss is proposed to enhance the feature discriminability for robust object detection. Experiments on three challenging benchmarks show that SFA outperforms state-of-the-art domain adaptive object detection methods. Code has been made available at: https://github.com/encounter1997/SFA.
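The two alignment modules described above are both adversarial: a domain discriminator tries to tell source tokens from target tokens, while a gradient-reversal layer trains the encoder to produce domain-confusable features. The sketch below illustrates this idea under stated assumptions; it is not the official SFA implementation (see the linked repository for that). The function and class names (`GradReverse`, `DomainDiscriminator`, `token_alignment_loss`, `domain_query_alignment_loss`) and the simple dot-product pooling used for the domain query are illustrative choices, not names from the paper or its code.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated
    (and scaled) gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lamb * grad, None


class DomainDiscriminator(nn.Module):
    """Small MLP predicting source (0) vs. target (1) per feature vector."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, x):
        return self.net(x)


def token_alignment_loss(tokens_src, tokens_tgt, disc, lamb=1.0):
    """TDA-style sketch: align every token in the sequence.

    tokens_*: (batch, seq_len, dim) encoder/decoder token sequences.
    The discriminator classifies each token's domain; gradient reversal
    pushes the transformer toward domain-invariant local features.
    """
    bce = nn.BCEWithLogitsLoss()
    loss = 0.0
    for tokens, label in ((tokens_src, 0.0), (tokens_tgt, 1.0)):
        logits = disc(GradReverse.apply(tokens, lamb))  # (batch, seq_len, 1)
        loss = loss + bce(logits, torch.full_like(logits, label))
    return loss


def domain_query_alignment_loss(tokens_src, tokens_tgt, domain_query, disc, lamb=1.0):
    """DQFA-style sketch: a learnable domain query aggregates global
    context from the token sequence, and only the pooled vector is
    aligned across domains. Here pooling is plain scaled dot-product
    attention against the query (an illustrative simplification)."""
    bce = nn.BCEWithLogitsLoss()
    scale = tokens_src.shape[-1] ** 0.5
    loss = 0.0
    for tokens, label in ((tokens_src, 0.0), (tokens_tgt, 1.0)):
        attn = torch.softmax(tokens @ domain_query / scale, dim=1)  # (batch, seq_len)
        pooled = (attn.unsqueeze(-1) * tokens).sum(dim=1)           # (batch, dim)
        logits = disc(GradReverse.apply(pooled, lamb))              # (batch, 1)
        loss = loss + bce(logits, torch.full_like(logits, label))
    return loss
```

In training, both losses would be added to the detection loss; the discriminator minimizes them while, through gradient reversal, the backbone and transformer maximize domain confusion.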

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Domain Adaptation | InBreast | AUC | 0.06 | Mexforer |
| Domain Adaptation | InBreast | F1-score | 0.09 | Mexforer |
| Domain Adaptation | InBreast | R@0.05 | 0.02 | Mexforer |
| Domain Adaptation | InBreast | R@0.3 | 0.03 | Mexforer |
| Domain Adaptation | InBreast | R@0.5 | 0.03 | Mexforer |
| Domain Adaptation | InBreast | R@1.0 | 0.03 | Mexforer |
| Source-Free Domain Adaptation | InBreast | AUC | 0.06 | Mexforer |
| Source-Free Domain Adaptation | InBreast | F1-score | 0.09 | Mexforer |
| Source-Free Domain Adaptation | InBreast | R@0.05 | 0.02 | Mexforer |
| Source-Free Domain Adaptation | InBreast | R@0.3 | 0.03 | Mexforer |
| Source-Free Domain Adaptation | InBreast | R@0.5 | 0.03 | Mexforer |
| Source-Free Domain Adaptation | InBreast | R@1.0 | 0.03 | Mexforer |

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)
Decoupled PROB: Decoupled Query Initialization Tasks and Objectness-Class Learning for Open World Object Detection (2025-07-17)
Dual LiDAR-Based Traffic Movement Count Estimation at a Signalized Intersection: Deployment, Data Collection, and Preliminary Analysis (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
Domain Borders Are There to Be Crossed With Federated Few-Shot Adaptation (2025-07-14)