Adversarial Training: Enhancing Out-of-Distribution Generalization for Learning Wireless Resource Allocation
ShengJie Liu, Chenyang Yang
2025-06-26 · Out-of-Distribution Generalization
Abstract
Deep neural networks (DNNs) have widespread applications for optimizing resource allocation. Yet, their performance is vulnerable to distribution shifts between training and test data, e.g., channels. In this letter, we resort to adversarial training (AT) to enhance the out-of-distribution (OOD) generalizability of DNNs trained in an unsupervised manner. We reformulate AT to capture the OOD degradation, and propose a one-step gradient ascent method for AT. The proposed method is validated by optimizing hybrid precoding. Simulation results showcase the enhanced OOD performance of multiple kinds of DNNs across various channel distributions, when only Rayleigh fading channels are used for training.
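The core idea of one-step gradient ascent AT on channel inputs can be illustrated with a minimal sketch. This is not the paper's actual formulation: it assumes a toy single-user case where the unsupervised loss is the negative achievable rate, and the function names (`neg_rate`, `one_step_adversarial_channel`) and the step size `eps` are hypothetical choices for illustration.

```python
import numpy as np

def neg_rate(h, p):
    # Unsupervised training loss: negative achievable rate for a
    # toy single-user link with channel gain h and power p.
    return -np.log2(1.0 + (h ** 2) * p)

def grad_h(h, p):
    # Analytic gradient of the loss w.r.t. the channel gain h.
    return -2.0 * h * p / ((1.0 + h ** 2 * p) * np.log(2.0))

def one_step_adversarial_channel(h, p, eps=0.05):
    # One-step gradient ascent on the loss in the channel (input)
    # space: perturb the training channel in the direction that
    # worsens the loss, then train the DNN on the perturbed sample.
    return h + eps * np.sign(grad_h(h, p))

h, p = 1.0, 2.0
h_adv = one_step_adversarial_channel(h, p)
# The perturbed channel yields a higher (worse) loss than the clean one.
assert neg_rate(h_adv, p) > neg_rate(h, p)
```

In an actual AT loop, the DNN's weights would then be updated on the adversarially perturbed channel samples, so the network sees harder inputs than the nominal (e.g., Rayleigh) training distribution.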