Recent advances in scene understanding have leveraged multimodal large language models (MLLMs) for 3D reasoning by capitalizing on their strong 2D pretraining. However, the lack of explicit 3D data during MLLM pretraining limits their 3D representation capability. In this paper, we investigate the 3D-awareness of MLLMs by evaluating multi-view correspondence and reveal a strong positive correlation between the quality of 3D-aware representations and downstream task performance. Motivated by this, we propose 3DRS, a framework that enhances MLLM 3D Representation learning through Supervision from pretrained 3D foundation models. Our approach aligns MLLM visual features with rich 3D knowledge distilled from 3D foundation models, effectively improving scene understanding. Extensive experiments across multiple MLLMs and benchmarks covering visual grounding, captioning, and question answering demonstrate consistent performance gains.
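To make the alignment objective concrete, the sketch below shows one plausible way such supervision could be implemented in PyTorch: MLLM visual tokens are projected into the feature space of a frozen 3D foundation model and trained with a cosine-similarity loss. The module name, projection head, and feature dimensions are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code) of 3D-aware representation supervision:
# MLLM visual tokens are projected and aligned with features distilled from a
# pretrained, frozen 3D foundation model via a cosine-similarity loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Align3DLoss(nn.Module):
    def __init__(self, mllm_dim: int, teacher_dim: int):
        super().__init__()
        # Lightweight projection from the MLLM feature space to the 3D teacher's space.
        self.project_head = nn.Linear(mllm_dim, teacher_dim)

    def forward(self, mllm_feats: torch.Tensor, teacher_feats: torch.Tensor) -> torch.Tensor:
        # mllm_feats:    (B, N, mllm_dim)   visual tokens from the MLLM
        # teacher_feats: (B, N, teacher_dim) matched features from the 3D foundation model
        pred = F.normalize(self.project_head(mllm_feats), dim=-1)
        target = F.normalize(teacher_feats, dim=-1)
        # 1 - cosine similarity, averaged over tokens and batch.
        return (1.0 - (pred * target).sum(dim=-1)).mean()

# Usage: add the alignment term to the standard language-modeling objective.
loss_align = Align3DLoss(mllm_dim=4096, teacher_dim=768)
mllm_tokens = torch.randn(2, 256, 4096)
teacher_tokens = torch.randn(2, 256, 768)
total_loss = loss_align(mllm_tokens, teacher_tokens)  # plus the language-modeling loss in practice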
How important is 3D-aware feature learning for MLLMs? We conduct a systematic analysis of three representative MLLMs and observe a strong positive correlation between 3D correspondence scores (which measure the consistency of visual features across multi-view images) and downstream scene understanding performance. This motivates explicit 3D-aware supervision for MLLMs.
Performance consistently improves as the 3D correspondence score increases.
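As an illustration of how such a correspondence score might be computed, the sketch below scores multi-view feature consistency for points with known cross-view matches by checking how often nearest-neighbor retrieval in feature space recovers the true match. The function name and this recall-style formulation are assumptions for illustration; the exact metric used in the paper may differ.

# Hedged sketch of a multi-view correspondence score: for points in view A with
# known ground-truth matches in view B (e.g., obtained via depth and camera poses),
# count how often the nearest neighbor in feature space is the true match.
import torch
import torch.nn.functional as F

def correspondence_score(feats_a: torch.Tensor, feats_b: torch.Tensor) -> float:
    # feats_a, feats_b: (N, D) features of N matched points, where row i of
    # feats_a and row i of feats_b observe the same 3D point from two views.
    a = F.normalize(feats_a, dim=-1)
    b = F.normalize(feats_b, dim=-1)
    sim = a @ b.t()                      # (N, N) cross-view cosine similarities
    nearest = sim.argmax(dim=1)          # predicted match in view B for each point in A
    correct = (nearest == torch.arange(len(a))).float()
    return correct.mean().item()         # fraction of correctly matched points

# Example with random features (score is near 1/N by chance):
score = correspondence_score(torch.randn(100, 512), torch.randn(100, 512))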
3DRS achieves new state-of-the-art performance on a wide range of 3D scene understanding tasks, significantly outperforming both specialist and generalist baselines.
Comparison with the state of the art on ScanRefer, Multi3DRefer, Scan2Cap, ScanQA, and SQA3D.
3DRS delivers more accurate grounding, richer captions, and more reliable answers.
Qualitative Visualization: Performance comparison in grounding, captioning, and question answering tasks.
@article{huang2025,
  title={MLLMs Need 3D-Aware Representation Supervision for Scene Understanding},
  author={Xiaohu Huang and Jingjing Wu and Qunyi Xie and Kai Han},
  journal={arXiv preprint},
  year={2025}
}