MLLMs Need 3D-Aware Representation Supervision for Scene Understanding

1Visual AI Lab, The University of Hong Kong
2Baidu VIS
* Corresponding author

Comparison radar chart

Performance comparison across all benchmarks. Our method consistently improves different MLLMs.

3DRS introduces explicit 3D-aware representation supervision into Multimodal Large Language Models (MLLMs) using 3D foundation models, significantly improving scene understanding across a wide range of 3D tasks.

Abstract

Recent advances in scene understanding have leveraged multimodal large language models (MLLMs) for 3D reasoning by capitalizing on their strong 2D pretraining. However, the lack of explicit 3D data during MLLM pretraining limits their 3D representation capability. In this paper, we investigate the 3D-awareness of MLLMs by evaluating multi-view correspondence and reveal a strong positive correlation between the quality of 3D-aware representations and downstream task performance. Motivated by this finding, we propose 3DRS, a framework that enhances MLLM 3D Representation learning through Supervision from pretrained 3D foundation models. Our approach aligns MLLM visual features with rich 3D knowledge distilled from 3D foundation models, effectively improving scene understanding. Extensive experiments on multiple MLLMs and benchmarks covering visual grounding, captioning, and question answering demonstrate consistent performance gains.
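To make the supervision concrete, here is a minimal PyTorch sketch of one plausible alignment objective: distilling per-token features from a frozen 3D foundation model into the MLLM's visual tokens via cosine similarity. The function name, tensor shapes, and the choice of cosine distance are illustrative assumptions, not the paper's exact formulation.

import torch.nn.functional as F

def representation_supervision_loss(mllm_feats, teacher_feats, projector):
    # Hypothetical shapes (our assumption, for illustration only):
    #   mllm_feats:    (B, N, D_mllm) visual tokens from the MLLM
    #   teacher_feats: (B, N, D_3d)   per-token targets from the 3D teacher
    #   projector:     trainable MLP mapping D_mllm -> D_3d
    pred = F.normalize(projector(mllm_feats), dim=-1)
    target = F.normalize(teacher_feats.detach(), dim=-1)  # teacher stays frozen
    return 1.0 - (pred * target).sum(dim=-1).mean()       # mean cosine distance

During fine-tuning, such a term would be added to the standard language-modeling loss with a weighting coefficient.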

Motivation: 3D-Aware Feature Learning Matters!

How important is 3D-aware feature learning for MLLMs? We conduct a systematic analysis of three representative MLLMs and observe a strong positive correlation between 3D correspondence scores (measuring 3D feature consistency across multi-view images) and downstream scene understanding performance. This motivates explicit 3D-aware supervision for MLLMs; a minimal sketch of the correspondence metric follows the figure below.

Performance consistently improves as 3D correspondence score increases.
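A correspondence score of this kind can be approximated with a simple protocol, sketched below under our own assumptions: features from two views are paired by index for pixels known to observe the same 3D point (e.g., via depth and camera pose), and the score is the fraction of points whose nearest neighbour in feature space is the correct match. This is an illustrative metric, not necessarily the paper's exact evaluation.

import torch
import torch.nn.functional as F

@torch.no_grad()
def correspondence_score(feats_a, feats_b):
    # feats_a, feats_b: (N, D) features for N points visible in both views,
    # pre-aligned so that row i in both tensors depicts the same 3D point.
    a = F.normalize(feats_a, dim=-1)
    b = F.normalize(feats_b, dim=-1)
    sim = a @ b.T                                   # (N, N) cosine similarities
    nn_idx = sim.argmax(dim=1)                      # nearest neighbour in view B
    correct = torch.arange(len(a), device=a.device)
    return (nn_idx == correct).float().mean().item()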

State-of-the-Art Results

3DRS achieves new state-of-the-art performance on a wide range of 3D scene understanding tasks, significantly outperforming both specialist and generalist baselines.

Comparison with state-of-the-art on ScanRefer, Multi3DRefer, Scan2Cap, ScanQA, SQA3D.

Qualitative Visualization

3DRS delivers more accurate grounding, richer captions, and more reliable answers.

Visualization Results

Performance comparison on grounding, captioning, and question answering tasks.

  • Visual Grounding: Our model's predictions match the ground truth more closely than those of the baselines.
  • Object Captioning: Our captions are more accurate and fine-grained.
  • Question Answering: Our answers are better supported by visual evidence.

BibTeX

@article{huang2025,
  title={MLLMs Need 3D-Aware Representation Supervision for Scene Understanding},
  author={Huang, Xiaohu and Wu, Jingjing and Xie, Qunyi and Han, Kai},
  journal={arXiv preprint},
  year={2025}
}