Self-Distilled RLVR: Summary
Overview
- Identifies a fundamental flaw in On-Policy Self-Distillation (OPSD): The information asymmetry between a teacher (with privileged information) and a student (without it) creates an irreducible mutual information gap in the distribution-matching objective. This leads to privileged information leakage and eventual performance degradation.
- Proposes RLSD (Reinforcement Learning with Self-Distillation): A new paradigm that repurposes self-distillation from a distribution-matching target to a token-level credit assignment mechanism. The environment reward determines the direction of updates (reinforce/penalize), while the privileged teacher's evidence ratio modulates the update magnitude per token.
- Achieves superior performance and stability: RLSD unifies the reliable direction of RLVR (e.g., GRPO) with the dense, fine-grained signals of self-distillation. It achieves higher convergence ceilings, faster training, and avoids the leakage and collapse seen in OPSD, as validated on multimodal reasoning benchmarks.
Introduction and Theoretical Foundation
Reinforcement Learning with Verifiable Rewards (RLVR) methods like GRPO train models using sparse, sequence-level rewards (e.g., answer correctness). On-Policy Distillation (OPD) complements this by using a stronger external teacher model to provide dense, token-level supervision, but incurs high computational cost. On-Policy Self-Distillation (OPSD) emerged as an efficient alternative, where a single model acts as both teacher (conditioned on privileged information z, such as a reference answer) and student (conditioned only on the query x).
However, this paper demonstrates that OPSD suffers from systematic privileged information leakage—the model begins to reference invisible "reference solutions" during inference—and unstable long-term training, where performance peaks early then degrades.
The theoretical foundation explains this failure. In OPD, teacher and student are information-symmetric (same input). In OPSD, they are information-asymmetric: the teacher conditions on z, which the student cannot observe. This makes the distribution-matching objective ill-posed.
Theorem 1 (KL Decomposition) formalizes the problem. The OPSD objective L_OPSD(θ) and the ideal objective L_ideal(θ) (matching the teacher's marginal distribution over z) are related by:

  L_OPSD(θ) = L_ideal(θ) + I(y_t ; z | x, y_{<t}),

where I(y_t ; z | x, y_{<t}) is the conditional mutual information between the current token y_t and the privileged information z. This term is strictly positive and independent of the student's parameters θ, creating an irreducible optimization gap that drives leakage.
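The decomposition above is the standard identity E_z[KL(p(y|z) || q(y))] = KL(p(y) || q(y)) + I(y; z), and can be checked numerically. The sketch below uses a toy discrete teacher with two values of the privileged variable z and an arbitrary student distribution q; all distributions are illustrative, not from the paper.

```python
import numpy as np

def kl(p, q):
    # KL divergence between two discrete distributions p and q
    return float(np.sum(p * np.log(p / q)))

# Teacher conditionals p(y | z) for two values of the privileged variable z,
# a prior over z, and an arbitrary student distribution q(y).
p_y_given_z = np.array([[0.7, 0.2, 0.1],
                        [0.1, 0.3, 0.6]])
p_z = np.array([0.5, 0.5])
q = np.array([0.4, 0.3, 0.3])

# OPSD-style objective: expected KL to the z-conditioned teacher.
opsd = sum(p_z[k] * kl(p_y_given_z[k], q) for k in range(2))

# Ideal objective: KL to the teacher's marginal over z.
p_marginal = p_z @ p_y_given_z
ideal = kl(p_marginal, q)

# Mutual information I(y; z) under the teacher.
mi = sum(p_z[k] * kl(p_y_given_z[k], p_marginal) for k in range(2))

# The decomposition holds exactly: OPSD = ideal + I(y; z).
assert abs(opsd - (ideal + mi)) < 1e-12
print(opsd, ideal, mi)
```

Because the gap I(y; z) is positive and does not depend on q, no student distribution can drive the OPSD objective to zero, matching the theorem's claim of an irreducible gap.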
Methodology
The core insight is to decouple the roles of the environment reward and the teacher signal:
- Update Direction: Must be reliable and determined by the verifiable environment reward.
- Update Magnitude: Benefits from being dense and fine-grained, provided by the teacher.
RLSD (Algorithm 1) implements this as follows:
- On-Policy Rollout & Sequence-Level Advantage: For a query x, sample G responses {y_1, ..., y_G} from the student policy π_θ(· | x). A verifier provides a binary reward r_i ∈ {0, 1} for each. Compute a group-relative advantage for each response:

    A_i = (r_i − μ) / σ,

  where μ and σ are the mean and standard deviation of the rewards in the group.
- Token-Level Credit Assignment via Self-Distillation: For each token y_t in a trajectory:
  - Compute the privileged information gain: Δ_t = log sg[π_θ](y_t | x, z, y_{<t}) − log sg[π_θ](y_t | x, y_{<t}), where sg[·] is the stop-gradient operator, the first term is the privileged teacher's log-probability, and the second is the student's.
  - Construct a direction-aware evidence weight w_t from Δ_t and the sign of the advantage A_i. The evidence ratio exp(Δ_t) has a Bayesian interpretation as the belief update ratio p(z | x, y_{≤t}) / p(z | x, y_{<t}).
  - Apply clipping for stability and interpolate with a uniform baseline: ŵ_t = (1 − λ) · clip(w_t) + λ, where the mixing coefficient λ decays from 0.5 to 0 over early training.
- Policy Update: Update the parameters θ by maximizing the RLSD objective, a GRPO-style policy-gradient objective in which the uniform token-level advantage A_i is replaced by the modulated advantage ŵ_{i,t} · A_i.
RLSD requires only one extra forward pass (for the teacher logits) and serves as a drop-in replacement for the uniform advantage in GRPO.
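The steps above can be sketched in a few lines of NumPy. This is a schematic only: the function name, the exp/sign construction of the evidence weight, the clip range, and the fixed λ are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rlsd_token_weights(r, teacher_logp, student_logp, lam=0.5, clip=(0.5, 2.0)):
    """Schematic RLSD credit assignment for one group of G rollouts.

    r            : (G,) binary verifier rewards
    teacher_logp : list of (T_i,) per-token log-probs conditioned on z
    student_logp : list of (T_i,) per-token log-probs without z
    """
    # Sequence-level, group-relative advantage as in GRPO.
    adv = (r - r.mean()) / (r.std() + 1e-8)

    modulated = []
    for a, t_lp, s_lp in zip(adv, teacher_logp, student_logp):
        # Privileged information gain per token (both log-prob tensors are
        # treated as stop-gradient quantities).
        delta = t_lp - s_lp
        # Direction-aware evidence weight: the advantage fixes the sign of
        # the update; the teacher's evidence ratio scales its magnitude.
        w = np.exp(np.sign(a) * delta)
        # Clip for stability and interpolate with the uniform baseline 1
        # (lam decays from 0.5 to 0 over early training).
        w = (1.0 - lam) * np.clip(w, *clip) + lam * 1.0
        modulated.append(w * a)  # per-token advantage replacing uniform A_i
    return adv, modulated

# Toy usage: 4 rollouts, 3 tokens each, teacher slightly more confident.
r = np.array([1.0, 0.0, 1.0, 0.0])
t_lp = [np.full(3, -0.5)] * 4
s_lp = [np.full(3, -1.0)] * 4
adv, w = rlsd_token_weights(r, t_lp, s_lp)
```

Note how the weight only modulates magnitude: tokens in a rewarded rollout always receive a positive update and tokens in a failed rollout a negative one, which is what anchors RLSD's update direction to the verifiable reward.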
Empirical Validation / Results
Experiments were conducted on the Qwen3-VL-8B-Instruct model, trained on the challenging MMFineReason-123K dataset and evaluated on five multimodal reasoning benchmarks.
Table 2: Multimodal reasoning results on the Qwen3-VL-8B-Instruct model.
| Method | MMMU | MathVista | MathVision | ZeroBench | WeMath | Avg. |
|---|---|---|---|---|---|---|
| Base LLM | 62.44 | 73.80 | 47.37 | 19.76 | 54.10 | 51.49 |
| GRPO | 65.11 | 76.20 | 48.82 | 22.60 | 56.57 | 53.86 |
| OPSD | 63.82 | 75.10 | 47.53 | 21.06 | 54.95 | 52.49 |
| SDPO | 65.11 | 74.00 | 47.27 | 25.15 | 52.19 | 52.74 |
| GRPO+OPSD | 63.22 | 75.90 | 48.52 | 22.16 | 54.76 | 52.91 |
| RLSD (Ours) | 67.22 | 78.10 | 52.73 | 24.85 | 58.00 | 56.18 |
Key Findings:
- RLSD achieves the highest average accuracy (56.18%), outperforming the base LLM by +4.69% and GRPO by +2.32%.
- Training Dynamics (Figure 5): RLSD shows a steeper initial ascent and higher final reward than GRPO, while avoiding OPSD's late-stage collapse. It also maintains higher policy entropy than GRPO, indicating less uniform token suppression.
- Case Study (Figure 6): Token-level credit heatmaps show RLSD successfully assigns larger credit/blame to decisive reasoning steps (e.g., key calculations) and down-weights generic narration or neutral setup tokens.
Theoretical and Practical Implications
- Theoretical: Provides a formal analysis of why information-asymmetric distribution matching (OPSD) fails, framing it as an ill-posed objective with an irreducible mutual information gap. Introduces the "impossibility trilemma" for shared-parameter self-distillation: objective stability, sustained improvement, and leakage-free training cannot all be achieved simultaneously under distribution matching. RLSD resolves this trilemma.
- Practical: RLSD offers a computationally efficient, drop-in improvement for existing RLVR pipelines like GRPO. It requires only the final ground-truth answer as privileged information (no reasoning traces), provides fine-grained credit assignment without training auxiliary models, and ensures training stability anchored to environmental feedback.
Conclusion
This work diagnoses the fundamental limitations of on-policy self-distillation, proving that information asymmetry leads to an ill-posed objective and inevitable leakage. The proposed RLSD paradigm circumvents these issues by repurposing self-distillation from a generative target to a credit assignment modulator. By anchoring update directions to the environment reward and using the teacher's evidence ratio to control token-level magnitudes, RLSD unifies the strengths of RLVR and self-distillation, achieving higher performance, faster convergence, and robust training stability. Future work will explore RLSD's applicability to broader domains beyond multimodal reasoning.