Sparsified-Learning for Heavy-Tailed Locally Stationary Processes
Yingjie Wang, Mokhtar Z. Alaya, Salim Bouzebda, Xinsheng Liu
2025-04-08 · Sparse Learning
Abstract
Sparsified learning is ubiquitous in machine learning tasks: it regularizes the objective function by adding a penalization term that encodes constraints on the learned parameters. This paper considers the problem of learning heavy-tailed locally stationary processes (LSPs). We develop a flexible and robust sparse-learning framework capable of handling heavy-tailed data with locally stationary behavior, and we establish concentration inequalities. We further provide non-asymptotic oracle inequalities for different types of sparsity, including $\ell_1$-norm and total-variation penalization for the least-squares loss.
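As a minimal sketch of the kind of $\ell_1$-penalized least-squares estimator the abstract refers to, the snippet below solves the lasso problem $\min_\beta \frac{1}{2n}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1$ by proximal gradient descent (ISTA) on synthetic data. This is an illustrative assumption, not the paper's method; the function names, step-size choice, and data-generating setup are all hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrinks each coordinate toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """l1-penalized least squares via proximal gradient (ISTA):
    minimize (1/2n) * ||y - X @ beta||^2 + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    # Step size = 1 / Lipschitz constant of the smooth part's gradient.
    step = n / (np.linalg.norm(X, 2) ** 2)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n         # gradient of the quadratic loss
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Synthetic example: sparse ground truth, light Gaussian noise.
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat = lasso_ista(X, y, lam=0.1)
```

The soft-thresholding step is what produces exact zeros in the estimate, which is the mechanism behind the sparsity patterns that the paper's oracle inequalities quantify; swapping the $\ell_1$ penalty for a total-variation penalty changes the proximal operator but not the overall scheme.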
Related Papers
Scalable Subset Selection in Linear Mixed Models (2025-06-25)
Learning Sparsity for Effective and Efficient Music Performance Question Answering (2025-06-02)
Optimizing Hard Thresholding for Sparse Model Discovery (2025-04-28)
STARS: Sparse Learning Correlation Filter with Spatio-temporal Regularization and Super-resolution Reconstruction for Thermal Infrared Target Tracking (2025-04-20)
Low-Rank Matrix Regression via Least-Angle Regression (2025-03-13)
Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries (2025-02-04)
Nonparametric Sparse Online Learning of the Koopman Operator (2025-01-27)
MyESL: Sparse learning in molecular evolution and phylogenetic analysis (2025-01-09)