Why mask diffusion does not work

Haocheng Sun, Cynthia Xin Wen, Edward Hong Wang

Published: 2025/9/29

Abstract

The main advantages of diffusion language models over autoregressive (AR) models lie in their ability to support parallel generation and bidirectional attention, enabling a more controllable generation process. In recent years, open-source mask diffusion language models have emerged, most of which are based on a variant known as absorbing diffusion. However, this paper demonstrates why mask diffusion faces inherent difficulties in achieving parallel generation and bidirectional attention. We also propose the most effective training and inference strategies for mask diffusion.
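For context on the "absorbing diffusion" variant the abstract refers to, the sketch below illustrates the standard absorbing (mask) forward process: each token of a clean sequence is independently replaced by a mask token with probability equal to the noise level. This is a minimal illustration, not the paper's implementation; the names `absorbing_forward` and `MASK_ID` are assumptions introduced here.

```python
import torch

MASK_ID = 0  # hypothetical mask-token id; real models reserve a dedicated [MASK] id

def absorbing_forward(x0: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Absorbing (mask) diffusion forward process (illustrative sketch).

    Each token of the clean sequence x0 is independently replaced by the
    mask token with probability t (the noise level) and kept otherwise.
    x0: (batch, seq_len) token ids; t: (batch,) noise levels in [0, 1].
    """
    corrupt = torch.rand(x0.shape, device=x0.device) < t.unsqueeze(-1)
    return torch.where(corrupt, torch.full_like(x0, MASK_ID), x0)

# Example: corrupt a batch at noise level 0.5, then a denoiser would be
# trained to predict the original tokens at the masked positions.
x0 = torch.randint(1, 1000, (2, 8))
t = torch.full((2,), 0.5)
xt = absorbing_forward(x0, t)
```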
