Poly-DPO and ViPO: Scaling Visual Preference Optimization

Researchers introduced Poly-DPO, an algorithmic extension to preference optimization that adds a polynomial term to handle noisy preference data, and ViPO, a large-scale dataset of 1M image pairs and 300K video pairs with balanced distributions and high-quality preference signals. The work addresses a core bottleneck in scaling visual generative models: existing preference datasets contain conflicting patterns and quality issues that prevent effective learning. When applied to high-quality data, Poly-DPO converges to standard DPO, validating both the dataset's quality and the algorithm's adaptive design.
TL;DR
- Poly-DPO extends the DPO objective with a polynomial term that dynamically adjusts model confidence based on dataset noise characteristics, improving robustness on imperfect data (see the sketch after this list)
- The ViPO dataset contains 1M image pairs at 1024px resolution across five categories and 300K video pairs at 720p+ across three categories, with balanced distributions and state-of-the-art generative models used as references
- On noisy datasets like Pick-a-Pic V2, Poly-DPO achieves a 6.87-point gain over Diffusion-DPO on GenEval for SD1.5 and a 2.32-point gain for SDXL, demonstrating practical improvements on existing benchmarks
- The convergence of Poly-DPO to standard DPO on high-quality data suggests that algorithmic sophistication matters most when data quality is limited but becomes unnecessary once dataset quality is sufficient
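
To make the adaptive-confidence idea concrete, below is a minimal sketch of how a polynomial reweighting of the DPO objective could look. This is not the paper's exact formulation: the `poly_dpo_loss` function, its sigmoid-based `confidence` term, and the exponent `k` are illustrative assumptions, chosen so that high-confidence pairs recover the standard DPO loss, mirroring the reported convergence on clean data.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss over log-probs of preferred (w) and dispreferred (l) samples."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()

def poly_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1, k=2.0):
    """Assumed Poly-DPO-style loss, not the paper's exact objective: each pair's
    DPO loss is scaled by a polynomial function of the model's implied confidence
    that the preference label is correct. Confident pairs get weight ~1
    (recovering standard DPO); ambiguous, likely-noisy pairs are down-weighted."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    confidence = torch.sigmoid(margin).detach()  # implied label reliability in (0, 1)
    weight = confidence.pow(k)                   # polynomial down-weighting; k=0 gives plain DPO
    return -(weight * F.logsigmoid(margin)).mean()

# Toy usage: random log-probs for a batch of 4 preference pairs
lw, ll, rw, rl = (torch.randn(4) for _ in range(4))
print(dpo_loss(lw, ll, rw, rl).item(), poly_dpo_loss(lw, ll, rw, rl).item())
```

Detaching the confidence term keeps the reweighting out of the gradient path, so likely-noisy pairs are merely down-weighted rather than driving the margin toward a degenerate optimum.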
Why it matters
Preference optimization is a key lever for improving generative model outputs, but scaling this approach has been hampered by noisy, low-resolution, and imbalanced datasets that contain conflicting preference signals. This work tackles both the algorithmic and data sides of the problem, providing a method that handles noise gracefully and a large, high-quality benchmark that can anchor future research. The finding that sophisticated optimization converges to simpler methods on clean data offers a useful principle for understanding when algorithmic complexity is justified.
Business relevance
For teams building or fine-tuning visual generative models, this work provides both a practical algorithm for handling imperfect training data and a reference dataset for benchmarking preference optimization approaches. The demonstrated gains on existing models like Stable Diffusion suggest that preference optimization can meaningfully improve output quality without retraining from scratch, making it a cost-effective path to model improvement for operators.
Key implications
- Data quality and algorithmic robustness are complementary levers for scaling preference optimization, not substitutes; addressing both is necessary for reliable scaling
- Existing open-source preference datasets have fundamental limitations in resolution, diversity, and balance that constrain model performance, creating a market opportunity for higher-quality curated datasets
- The adaptive nature of Poly-DPO suggests a broader principle: optimization methods should degrade gracefully on noisy data rather than assuming clean signals, a pattern likely applicable beyond visual generation
What to watch
Monitor whether ViPO becomes a standard benchmark for visual preference optimization research and whether Poly-DPO's approach of adaptive confidence adjustment spreads to other domains like language model alignment. Also track whether the quality gains from preference optimization on visual models translate to commercial improvements in deployed systems, which would validate the practical value of this line of work.