FB-4D: Spatial-Temporal Coherent Dynamic 3D Content Generation with Feature Banks

Jinwei Li*1,2, Huan-ang Gao*1,2, Wenyi Li1, Haohan Chi1,2, Chenyu Liu1,2, Chenxi Du2, Yiqian Liu1,2, Mingju Gao1, Guiyu Zhang1, Zongzheng Zhang1, Li Yi3, Yao Yao4, Hongyang Li5, Jingwei Zhao6, Yikai Wang7, Hao Zhao†1
1 Institute for AI Industry Research (AIR), Tsinghua University     2 Department of Computer Science and Technology, Tsinghua University     3 Institute for Interdisciplinary Information Sciences, Tsinghua University
4 Nanjing University     5 Shanghai AI Laboratory     6 Xiaomi Corporation     7 School of Artificial Intelligence, Beijing Normal University
*Indicates Equal Contribution
†Indicates Corresponding Author

Abstract

With the rapid advancement of diffusion models and 3D generation techniques, dynamic 3D content generation has become a crucial research area. However, achieving high-fidelity 4D (dynamic 3D) generation with strong spatial-temporal consistency remains challenging. Inspired by recent findings that pretrained diffusion features capture rich correspondences, we propose FB-4D, a novel 4D generation framework that integrates a Feature Bank mechanism to enhance both spatial and temporal consistency in generated frames. In FB-4D, we store features extracted from previous frames and fuse them into the generation of subsequent frames, ensuring consistent characteristics across both time and viewpoints. To keep the representation compact, the Feature Bank is updated by a proposed dynamic merging mechanism. Leveraging this Feature Bank, we demonstrate for the first time that generating additional reference sequences through multiple autoregressive iterations can continuously improve generation performance. Experimental results show that FB-4D significantly outperforms existing methods in rendering quality, spatial-temporal consistency, and robustness. It surpasses all tuning-free multi-view generation approaches by a large margin and achieves performance on par with training-based methods. Our code and data will be publicly available to support future research.
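The dynamic merging update mentioned above can be pictured as follows. This is a minimal sketch, assuming the Feature Bank stores per-layer attention features as a matrix of tokens and that redundancy is measured with cosine similarity; the function name `update_bank`, the fixed `capacity`, and the greedy pairwise fusion are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def update_bank(bank: torch.Tensor, new_feats: torch.Tensor,
                capacity: int) -> torch.Tensor:
    """Merge newly extracted frame features into a fixed-capacity bank.

    bank:      (N, C) tokens already stored
    new_feats: (M, C) tokens from the frame just generated
    capacity:  maximum number of tokens kept after the update
    """
    merged = torch.cat([bank, new_feats], dim=0)  # (N + M, C)
    overflow = merged.shape[0] - capacity
    for _ in range(max(overflow, 0)):
        # Pairwise cosine similarity between all stored tokens.
        sim = F.cosine_similarity(
            merged.unsqueeze(1), merged.unsqueeze(0), dim=-1
        )
        sim.fill_diagonal_(-1.0)  # ignore self-matches
        # Fuse the most redundant pair into a single averaged token,
        # shrinking the bank by one token per step.
        idx = int(torch.argmax(sim))
        i, j = idx // sim.shape[1], idx % sim.shape[1]
        merged[i] = 0.5 * (merged[i] + merged[j])
        merged = merged[torch.arange(merged.shape[0]) != j]
    return merged
```

Greedy fusion of the most similar pair keeps the bank size bounded while retaining each merged token's contribution, which is what keeps attending to the bank affordable as more frames accumulate.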

Teaser

Our method achieves significantly higher spatial-temporal consistency than other training-free methods while attaining performance comparable to training-based methods.

Method

Overall pipeline for our proposed method.


Detailed illustration of our two key innovations.

Part I explains the working mechanism of the feature bank. By integrating our proposed feature bank into the self-attention layers of Zero123++, we incorporate features from past frames into the generation of the current frame, enhancing spatial-temporal consistency; a sketch of this banked attention is shown below.

Part II introduces our iterative generation process. After each iteration, we select images within a specific viewpoint range as the candidate set for the next round, compute similarity scores against past viewpoints, and use the highest-scoring image as the next input; a sketch of this selection step follows the attention sketch.
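Part I can be made concrete with the following sketch of feature-bank-augmented self-attention: queries come from the current frame only, while keys and values are extended with banked tokens from past frames, so the current generation can attend to earlier frames and views. The `BankedSelfAttention` module and its weight layout are illustrative assumptions; the paper patches the existing self-attention layers of Zero123++ rather than defining new ones.

```python
from typing import Optional

import torch
import torch.nn.functional as F
from torch import nn

class BankedSelfAttention(nn.Module):
    """Self-attention whose keys/values are extended with banked features."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        assert dim % heads == 0, "dim must be divisible by heads"
        self.heads = heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor,
                bank: Optional[torch.Tensor]) -> torch.Tensor:
        # x: (B, N, C) current-frame tokens; bank: (B, M, C) stored tokens.
        ctx = x if bank is None else torch.cat([x, bank], dim=1)
        q, k, v = self.to_q(x), self.to_k(ctx), self.to_v(ctx)

        def split(t: torch.Tensor) -> torch.Tensor:
            # (B, L, C) -> (B, heads, L, C // heads)
            B, L, C = t.shape
            return t.view(B, L, self.heads, C // self.heads).transpose(1, 2)

        # Queries attend jointly to current and banked keys/values.
        out = F.scaled_dot_product_attention(split(q), split(k), split(v))
        out = out.transpose(1, 2).reshape(x.shape)
        return self.to_out(out)
```

Because only the keys and values grow, the output shape stays that of the current frame; the bank simply widens the context the denoising network can draw correspondences from.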
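Part II's selection step might look like the sketch below, assuming similarity is scored with an off-the-shelf CLIP image encoder from Hugging Face transformers. The checkpoint name, `pick_next_input`, and the mean-over-references scoring are illustrative assumptions; the paper's exact similarity metric and candidate filtering may differ.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Off-the-shelf image encoder used here purely for illustration.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def embed(images: list) -> torch.Tensor:
    """Embed a list of PIL images into unit-normalized feature vectors."""
    inputs = processor(images=images, return_tensors="pt")
    feats = model.get_image_features(**inputs)        # (K, D)
    return feats / feats.norm(dim=-1, keepdim=True)

def pick_next_input(candidates: list, references: list) -> Image.Image:
    """Pick the candidate most similar, on average, to past-view references."""
    cand, ref = embed(candidates), embed(references)
    scores = (cand @ ref.T).mean(dim=-1)              # (num_candidates,)
    return candidates[int(scores.argmax())]
```

The highest-scoring candidate then serves as the input image for the next autoregressive round, which is what lets repeated iterations keep adding reference sequences that stay consistent with earlier viewpoints.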

Results


Method          T-F   FVD (↓)   CLIP (↑)   LPIPS (↓)
SV4D            ✗      732.40    0.920      0.118
4Diffusion+     ✗     1551.63    0.873      0.228
L4GM+           ✗     1360.04    0.913      0.158
DS4D-GA+        ✗      799.94    0.921      0.131
DS4D-DA+        ✗      784.02    0.923      0.131
Consistent4D    ✓     1133.93    0.870      0.160
4DGen           ✓           -    0.894      0.130
STAG4D          ✓      992.21    0.909      0.126
SC4D+           ✓      852.98    0.912      0.137
MVTokenFlow     ✓      846.32    0.948      0.122
FB-4D (Ours)    ✓      724.26    0.913      0.125
Quantitative comparison of different methods on the Consistent4D dataset (T-F: training-free in the multi-view diffusion stage).

BibTeX

If you find our work useful in your research, please consider citing:
@misc{li2025fb4dspatialtemporalcoherentdynamic,
  title={FB-4D: Spatial-Temporal Coherent Dynamic 3D Content Generation with Feature Banks},
  author={Jinwei Li and Huan-ang Gao and Wenyi Li and Haohan Chi and Chenyu Liu and Chenxi Du and Yiqian Liu and Mingju Gao and Guiyu Zhang and Zongzheng Zhang and Li Yi and Yao Yao and Jingwei Zhao and Hongyang Li and Yikai Wang and Hao Zhao},
  year={2025},
  eprint={2503.20784},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.20784},
}