Abstract
The basic question of which statistical problems are solvable without any assumptions on the underlying data distribution has long animated statistics and learning theory. This paper characterizes when a convex M-estimation or stochastic optimization problem is solvable in such an assumption-free setting, providing a precise dividing line between solvable and unsolvable problems. The conditions we identify show, perhaps surprisingly, that Lipschitz continuity of the loss being minimized is not necessary for distribution-free minimization, and they are also distinct from classical characterizations of learnability in machine learning.
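To fix ideas, the following is a minimal sketch of the standard stochastic convex optimization setup the abstract refers to; the notation (F, f, P, \mathcal{X}) is assumed for illustration and is not taken from the paper itself.

% Standard stochastic convex optimization / M-estimation setup (assumed
% notation, not the paper's): minimize the population risk
\[
  \min_{x \in \mathcal{X}} \; F(x), \qquad
  F(x) := \mathbb{E}_{S \sim P}\bigl[ f(x; S) \bigr],
\]
% where \mathcal{X} \subseteq \mathbb{R}^d is a convex set, f(\cdot; s) is
% convex for every sample s, and the learner observes only i.i.d. draws
% S_1, \dots, S_n \sim P. Distribution-free (assumption-free) solvability
% then asks for an estimator \hat{x}_n with
% F(\hat{x}_n) - \inf_{x \in \mathcal{X}} F(x) \to 0 for every
% distribution P, with no further conditions on P or f (in particular,
% no Lipschitz bound on the loss).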