VideoGPA: Distilling Geometry Priors for 3D-Consistent Video Generation

Abstract

While recent video diffusion models (VDMs) produce visually impressive results, they fundamentally struggle to maintain 3D structural consistency, often exhibiting object deformation or spatial drift. We hypothesize that these failures arise because standard denoising objectives provide no explicit incentive for geometric coherence. To address this, we introduce VideoGPA (Video Geometric Preference Alignment), a data-efficient, self-supervised framework that leverages a geometry foundation model to automatically derive dense preference signals, which guide VDMs via Direct Preference Optimization (DPO). This approach steers the generative distribution toward inherent 3D consistency without requiring human annotations. Using only a minimal number of preference pairs, VideoGPA significantly enhances temporal stability, physical plausibility, and motion coherence, consistently outperforming state-of-the-art baselines in extensive experiments.
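The abstract's alignment step can be illustrated with the standard DPO objective. The sketch below shows the generic per-pair DPO loss only; how VideoGPA constructs the preferred/dispreferred pair from the geometry foundation model is not shown, and the function and argument names are illustrative, not from the paper.

```python
import math

def dpo_loss(logp_win, logp_lose, ref_logp_win, ref_logp_lose, beta=0.1):
    """Generic per-pair Direct Preference Optimization loss (illustrative).

    logp_win / logp_lose:         log-likelihoods of the geometry-preferred
                                  and dispreferred videos under the model
                                  being fine-tuned.
    ref_logp_win / ref_logp_lose: the same quantities under the frozen
                                  reference model.
    beta:                         temperature controlling how far the model
                                  may drift from the reference.
    """
    # Implicit reward margin between the preferred and dispreferred sample.
    margin = (logp_win - ref_logp_win) - (logp_lose - ref_logp_lose)
    # Negative log-sigmoid of the scaled margin: minimized when the model
    # assigns relatively higher likelihood to the geometry-consistent video.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the model and reference agree (zero margin), the loss is log 2; it decreases as the model raises the likelihood of the geometry-consistent sample relative to the reference.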

Junjie Ye