Reveal and Release: Iterative LLM Unlearning with Self-generated Data

Linxi Xie, Xin Teng, Shichang Ke, Hongyi Wen, Shengjie Wang

Published: 2025/9/18

Abstract

Large language model (LLM) unlearning has demonstrated effectiveness in removing the influence of undesirable data (also known as forget data). Existing approaches typically assume full access to the forget dataset, overlooking two key challenges: (1) forget data is often privacy-sensitive, rare, or legally regulated, making it expensive or impractical to obtain; and (2) the distribution of available forget data may not align with how that information is represented within the model. To address these limitations, we propose a "Reveal-and-Release" method that unlearns with self-generated data: we prompt the model to reveal what it knows using optimized instructions. To fully utilize the self-generated forget data, we propose an iterative unlearning framework that makes incremental adjustments to the model's weight space via parameter-efficient modules trained on the forget data. Experimental results demonstrate that our method balances the tradeoff between forget quality and utility preservation.
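The iterative loop the abstract describes (generate forget data from the model itself, fit a parameter-efficient module on it, merge the update, repeat) can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the functions `reveal`, `train_peft_module`, and `release`, and the dict-based "model", are all hypothetical stand-ins.

```python
# Toy sketch of a Reveal-and-Release-style loop, assuming the high-level
# structure stated in the abstract. All names here are illustrative.

def reveal(model, instruction):
    """Stand-in for prompting the model with optimized instructions
    so it reveals forget-relevant text (self-generated forget data)."""
    return [f"sample-{model['round']}-{i}" for i in range(2)]

def train_peft_module(forget_batch):
    """Stand-in for training a parameter-efficient module (e.g. a
    LoRA-style adapter) on the self-generated forget batch."""
    return {"delta": -0.1 * len(forget_batch)}

def release(model, module):
    """Merge the module's adjustment into the model's weight space."""
    model["w"] += module["delta"]
    model["round"] += 1
    return model

def reveal_and_release(model, instruction, n_rounds=3):
    """Iterate: reveal forget data, fit a module, apply the update."""
    for _ in range(n_rounds):
        forget_batch = reveal(model, instruction)
        module = train_peft_module(forget_batch)
        model = release(model, module)
    return model

model = reveal_and_release({"w": 1.0, "round": 0}, "optimized instruction")
print(round(model["w"], 2), model["round"])  # prints "0.4 3"
```

Each round shifts the toy weight by -0.2, so three rounds move it from 1.0 to 0.4; in the real method the "shift" would be a learned parameter-efficient update rather than a scalar delta.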
