ROME

Rank-One Model Editing

Category: General · Introduced: 2022 · 23 papers
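As the name suggests, ROME edits a stored factual association by applying a rank-one update to a single weight matrix, solving for the smallest change that maps a chosen key vector k* (identifying the subject) to a new value vector v* (encoding the edited fact). A minimal NumPy sketch of that closed-form update, using toy dimensions and taking the key covariance C as the identity for illustration (in practice C is estimated from many key vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                  # toy hidden dimension
W = rng.standard_normal((d, d))        # weight matrix being edited
C = np.eye(d)                          # key covariance E[k k^T]; identity here for illustration
k_star = rng.standard_normal(d)        # key vector selecting the subject
v_star = rng.standard_normal(d)        # target value encoding the new fact

# Rank-one edit: W_hat = W + (v* - W k*) (C^{-1} k*)^T / ((C^{-1} k*)^T k*)
Cinv_k = np.linalg.solve(C, k_star)
W_hat = W + np.outer(v_star - W @ k_star, Cinv_k) / (Cinv_k @ k_star)

# The edited weight now maps k* exactly to v*,
# and the change to W has rank one.
assert np.allclose(W_hat @ k_star, v_star)
assert np.linalg.matrix_rank(W_hat - W) == 1
```

Because the correction is an outer product of two vectors, the edit touches the model's behavior only along the direction of k*, which is why ROME can insert a single fact while leaving unrelated associations largely intact.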

Papers Using This Method

R2R: Efficiently Navigating Divergent Reasoning Paths with Small-Large Model Token Routing (2025-05-27)
Editing as Unlearning: Are Knowledge Editing Methods Strong Baselines for Large Language Model Unlearning? (2025-05-26)
Related Knowledge Perturbation Matters: Rethinking Multiple Pieces of Knowledge Editing in Same-Subject (2025-02-08)
ROME: Robust Model Ensembling for Semantic Communication Against Semantic Jamming Attacks (2025-01-02)
Understanding the Collapse of LLMs in Model Editing (2024-06-17)
Is Bigger Edit Batch Size Always Better? -- An Empirical Study on Model Editing with Llama-3 (2024-05-01)
A Unified Framework for Model Editing (2024-03-21)
Rebuilding ROME: Resolving Model Collapse during Sequential Model Editing (2024-03-11)
ROME: Memorization Insights from Text, Logits and Representation (2024-03-01)
How (un)ethical are instruction-centric responses of LLMs? Unveiling the vulnerabilities of safety guardrails to harmful queries (2024-02-23)
Robust Multi-Modal Density Estimation (2024-01-19)
Model Editing at Scale leads to Gradual and Catastrophic Forgetting (2024-01-15)
A Comprehensive Study of Knowledge Editing for Large Language Models (2024-01-02)
SHARE: Single-view Human Adversarial REconstruction (2023-12-30)
Trace and Edit Relation Associations in GPT (2023-12-30)
Optimizing Fault-Tolerant Quality-Guaranteed Sensor Deployments for UAV Localization in Critical Areas via Computational Geometry (2023-12-05)
ROME: Evaluating Pre-trained Vision-Language Models on Reasoning beyond Visual Common Sense (2023-10-30)
Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks (2023-09-29)
RoMe: Towards Large Scale Road Surface Reconstruction via Mesh Representation (2023-06-20)
RoME: Role-aware Mixture-of-Expert Transformer for Text-to-Video Retrieval (2022-06-26)