PUREVQ-GAN: Defending Data Poisoning Attacks through Vector-Quantized Bottlenecks

Alexander Branch, Omead Pooladzandi, Radin Khosraviani, Sunay Gajanan Bhat, Jeffrey Jiang, Gregory Pottie

Published: 2025/9/30

Abstract

We introduce PureVQ-GAN, a defense against data poisoning that forces backdoor triggers through a discrete bottleneck using a Vector-Quantized VAE (VQ-VAE) with a GAN discriminator. By quantizing poisoned images through a learned codebook, PureVQ-GAN destroys fine-grained trigger patterns while preserving semantic content. A GAN discriminator ensures outputs match the natural image distribution, preventing reconstruction of out-of-distribution perturbations. On CIFAR-10, PureVQ-GAN achieves a 0% poison success rate (PSR) against Gradient Matching and Bullseye Polytope attacks, and 1.64% against Narcissus, while maintaining 91-95% clean accuracy. Unlike diffusion-based defenses that require hundreds of iterative refinement steps, PureVQ-GAN is over 50x faster, making it practical for real training pipelines.
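The core mechanism described above, snapping each latent vector to its nearest entry in a learned codebook so that small out-of-distribution perturbations cannot survive reconstruction, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the codebook here is hand-picked and two-dimensional, whereas in PureVQ-GAN it is learned jointly with the encoder, decoder, and GAN discriminator.

```python
# Toy stand-in for a learned VQ codebook: K codes of dimension D.
# (Hypothetical hand-picked values for illustration only.)
codebook = [
    (0.0, 0.0),
    (1.0, 1.0),
    (0.0, 1.0),
    (1.0, 0.0),
]

def quantize(latents):
    """Map each latent vector to the index of its nearest codebook entry.

    This discretization is what destroys fine-grained trigger patterns:
    a perturbation too small to push a latent across a codebook-cell
    boundary is erased when the image is reconstructed from the codes.
    """
    indices = []
    for v in latents:
        idx = min(
            range(len(codebook)),
            key=lambda k: sum((a - b) ** 2 for a, b in zip(v, codebook[k])),
        )
        indices.append(idx)
    return indices

clean = [(0.1, 0.1), (0.9, 0.95)]
# A small, trigger-like additive perturbation of the clean latents.
poisoned = [(x + 0.02, y + 0.02) for x, y in clean]

print(quantize(clean))     # [0, 1]
print(quantize(poisoned))  # [0, 1] -- same codes, so the shift vanishes
```

Because both the clean and the slightly perturbed latents land in the same codebook cells, the reconstruction is identical for both, which is the intuition behind the bottleneck's poison-removal effect.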
