Recent advances in text-to-image generative models have enabled numerous practical applications, including subject-driven generation, which fine-tunes pretrained models to capture subject semantics from only a few examples. While diffusion-based models produce high-quality images, their extensive denoising steps result in significant computational overhead, limiting real-world applicability. Visual autoregressive (VAR) models, which predict next-scale tokens rather than spatially adjacent ones, offer significantly faster inference suitable for practical deployment. In this paper, we propose the first VAR-based approach for subject-driven generation. However, naively fine-tuning a VAR model leads to computational overhead, language drift, and reduced diversity. To address these challenges, we introduce selective layer tuning to reduce complexity and prior distillation to mitigate language drift. Additionally, we find that the early stages have a greater influence on subject generation than the later stages, which merely synthesize local details. Based on this finding, we propose scale-wise weighted tuning, which prioritizes coarser resolutions, encouraging the model to focus on subject-relevant information rather than local details. Extensive experiments validate that our method significantly outperforms diffusion-based baselines across various metrics and demonstrate its practical applicability.
(Left) Subject-driven fine-tuning. A visual tokenizer encodes each subject image into K multi-scale token maps \((r_{1},\ldots,r_{K})\). The VAR transformer is fine-tuned to predict these maps, producing \((\hat{r}_{1},\ldots,\hat{r}_{K})\), while only the cross-attention (CA) and feed-forward network (FFN) layers are updated. A scale-weighted cross-entropy loss \(L_{\mathrm{wCE}}\) emphasises coarse scales, which capture the key subject semantics.
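To make the left panel concrete, below is a minimal PyTorch-style sketch of selective layer tuning and the scale-weighted cross-entropy loss \(L_{\mathrm{wCE}}\). The parameter-name filters (`cross_attn`, `ffn`), the tensor shapes, and the weighting vector are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def freeze_except_ca_ffn(var_transformer: torch.nn.Module) -> None:
    """Enable gradients only for cross-attention (CA) and feed-forward (FFN) layers."""
    for name, param in var_transformer.named_parameters():
        param.requires_grad = ("cross_attn" in name) or ("ffn" in name)


def scale_weighted_ce(logits_per_scale, targets_per_scale, scale_weights):
    """Scale-weighted cross-entropy L_wCE over K token maps.

    logits_per_scale[k]:  (B, N_k, V) predicted token logits at scale k
    targets_per_scale[k]: (B, N_k)    ground-truth token indices r_k
    scale_weights[k]:     scalar, chosen larger for coarse scales so the model
                          prioritises subject semantics over local details
    """
    loss = 0.0
    for logits, targets, w in zip(logits_per_scale, targets_per_scale, scale_weights):
        loss = loss + w * F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), targets.reshape(-1)
        )
    return loss
```

In practice, `scale_weights` would assign higher values to the first few (coarse) scales and lower values to the finer ones, matching the observation that early stages carry most of the subject-relevant information.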
(Right) Prior distillation. To mitigate language drift and encourage diversity, token maps generated by the pretrained transformer \(\theta_{\text{orig}}\) from a class-noun prompt \(c_{\text{cls}}\) (e.g., “dog”) serve as soft targets. A distillation loss \(L_{\text{distill}}\) keeps the fine-tuned model close to the original semantic prior.
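The distillation term of the right panel can be realised, for example, as a soft-target loss between the fine-tuned and the frozen pretrained transformers evaluated on token maps sampled from the class-noun prompt. The sketch below uses a KL divergence over token distributions; the paper's exact formulation of \(L_{\text{distill}}\) may differ, and the function signature is an assumption.

```python
import torch
import torch.nn.functional as F


def prior_distillation_loss(student_logits_per_scale, teacher_logits_per_scale):
    """L_distill: keep the fine-tuned model close to the pretrained semantic prior.

    Both inputs are lists of (B, N_k, V) logits evaluated on token maps that the
    frozen pretrained transformer theta_orig generated from the class-noun prompt
    c_cls (e.g., "dog"); the teacher logits serve as soft targets.
    """
    loss = 0.0
    for s_logits, t_logits in zip(student_logits_per_scale, teacher_logits_per_scale):
        loss = loss + F.kl_div(
            F.log_softmax(s_logits, dim=-1),
            F.softmax(t_logits.detach(), dim=-1),
            reduction="batchmean",
        )
    return loss / len(student_logits_per_scale)
```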
Summary: CA and FFN layers are optimised jointly with \(L_{\mathrm{wCE}}\) and \(L_{\text{distill}}\), balancing subject fidelity and generative consistency.
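For reference, a plausible form of the joint objective implied by this summary is written out below; the trade-off weight \(\lambda\) and the choice of divergence \(\mathrm{D}\) are assumptions rather than values taken from the paper.

\[
L_{\text{total}} \;=\; \underbrace{\sum_{k=1}^{K} w_{k}\,\mathrm{CE}\big(\hat{r}_{k},\, r_{k}\big)}_{L_{\mathrm{wCE}}} \;+\; \lambda\, \underbrace{\sum_{k=1}^{K} \mathrm{D}\big(p_{\theta_{\text{orig}}}(\tilde{r}_{k})\,\big\|\, p_{\theta}(\tilde{r}_{k})\big)}_{L_{\text{distill}}},
\]

where \(w_{k}\) is larger at coarse scales, \(r_{k}\) are the token maps of the subject images, and \(\tilde{r}_{k}\) are token maps generated by the pretrained model \(\theta_{\text{orig}}\) from the class-noun prompt \(c_{\text{cls}}\).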
@article{chung2025fine,
  title   = {Fine-Tuning Visual Autoregressive Models for Subject-Driven Generation},
  author  = {Chung, Jiwoo and Hyun, Sangeek and Kim, Hyunjun and Koh, Eunseo and Lee, MinKyu and Heo, Jae-Pil},
  journal = {arXiv preprint arXiv:2504.02612},
  year    = {2025}
}