Description
Dynamic sparse training methods train neural networks under a fixed parameter budget: training starts from an initial sparse connectivity mask, and the mask is periodically updated by pruning a fraction of the active connections (typically those with the smallest magnitudes) and regrowing an equal number elsewhere (e.g., at random, as in SET, or where gradients are largest, as in RigL). The overall sparsity level stays constant while the network topology evolves during training.
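The prune-and-regrow cycle described above can be sketched as follows. This is a minimal, illustrative NumPy implementation of a SET-style update (magnitude pruning, random regrowth); the function names, the `drop_fraction` parameter, and the single-layer setup are assumptions for the example, not part of any specific paper's implementation.

```python
import numpy as np

def make_mask(shape, sparsity, rng):
    """Random initial mask keeping a (1 - sparsity) fraction of weights."""
    n_total = int(np.prod(shape))
    n_keep = int(round(n_total * (1.0 - sparsity)))
    mask = np.zeros(n_total, dtype=bool)
    mask[rng.choice(n_total, size=n_keep, replace=False)] = True
    return mask.reshape(shape)

def update_mask(weights, mask, drop_fraction, rng):
    """SET-style update: drop the smallest-magnitude active weights,
    then regrow the same number at random inactive positions."""
    flat_w, flat_m = weights.ravel(), mask.ravel().copy()
    active = np.flatnonzero(flat_m)
    n_drop = int(round(len(active) * drop_fraction))
    if n_drop == 0:
        return mask
    # Prune: smallest |w| among currently active connections.
    drop_idx = active[np.argsort(np.abs(flat_w[active]))[:n_drop]]
    flat_m[drop_idx] = False
    # Regrow: uniform-random among inactive positions
    # (a RigL-style variant would pick the largest-gradient positions instead).
    inactive = np.flatnonzero(~flat_m)
    grow_idx = rng.choice(inactive, size=n_drop, replace=False)
    flat_m[grow_idx] = True
    return flat_m.reshape(mask.shape)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
mask = make_mask(W.shape, sparsity=0.9, rng=rng)
# In a real training loop this update would run every few hundred steps,
# with W *= mask applied after each optimizer step.
new_mask = update_mask(W, mask, drop_fraction=0.3, rng=rng)
assert new_mask.sum() == mask.sum()  # sparsity budget is preserved
```

Note that the update only moves connections around: the number of active weights never changes, which is what distinguishes dynamic sparse training from gradual pruning schedules.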
Papers Using This Method
Beyond Single-User Dialogue: Assessing Multi-User Dialogue State Tracking Capabilities of Large Language Models (2025-06-12)
Factors affecting the in-context learning abilities of LLMs for dialogue state tracking (2025-06-10)
Discovering Governing Equations of Geomagnetic Storm Dynamics with Symbolic Regression (2025-04-25)
Interpretable and Robust Dialogue State Tracking via Natural Language Summarization with LLMs (2025-03-11)
Brain-inspired sparse training enables Transformers and LLMs to perform as fully connected (2025-01-31)
Know Your Mistakes: Towards Preventing Overreliance on Task-Oriented Conversational AI Through Accountability Modeling (2025-01-17)
Prediction of Geoeffective CMEs Using SOHO Images and Deep Learning (2025-01-02)
Dynamic Stereotype Theory Induced Micro-expression Recognition with Oriented Deformation (2025-01-01)
Intent-driven In-context Learning for Few-shot Dialogue State Tracking (2024-12-04)
Sparser Training for On-Device Recommendation Systems (2024-11-19)
Navigating Extremes: Dynamic Sparsity in Large Output Space (2024-11-05)
Reliability Assessment of Information Sources Based on Random Permutation Set (2024-10-30)
Beyond Ontology in Dialogue State Tracking for Goal-Oriented Chatbot (2024-10-30)
Tracing Human Stress from Physiological Signals using UWB Radar (2024-10-14)
Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training (2024-09-28)
A Zero-Shot Open-Vocabulary Pipeline for Dialogue Understanding (2024-09-24)
Confidence Estimation for LLM-Based Dialogue State Tracking (2024-09-15)
Keyword-Aware ASR Error Augmentation for Robust Dialogue State Tracking (2024-09-10)
Hierarchical Learning and Computing over Space-Ground Integrated Networks (2024-08-26)
Imprecise Belief Fusion Facing a DST benchmark problem (2024-08-16)