3D Spatial Understanding in MLLMs: Disambiguation and Evaluation
In: Proc. of the IEEE International Conference on Robotics and Automation (ICRA 2025), IEEE, 2025.
- Abstract:
- Multimodal Large Language Models (MLLMs) have made significant progress in tasks such as image captioning and question answering. However, while these models can generate realistic captions, they often struggle to provide precise instructions, particularly when localizing and disambiguating objects in complex 3D environments. This capability is critical as MLLMs become more tightly integrated with collaborative robotic systems. In scenarios where a target object is surrounded by similar objects (distractors), robots must deliver clear, spatially aware instructions to guide humans effectively. We refer to this challenge as contextual object localization and disambiguation, which imposes stricter constraints than conventional 3D dense captioning, especially in ensuring target exclusivity. In response, we propose simple yet effective techniques to enhance the model’s ability to localize and disambiguate target objects. Our approach not only achieves state-of-the-art performance on conventional metrics that evaluate sentence similarity, but also demonstrates improved 3D spatial understanding when evaluated with a 3D visual grounding model. The code will be released upon acceptance of our work.
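The grounding-based evaluation mentioned in the abstract can be read as: feed each generated description, together with the 3D scene, to a 3D visual grounding model and count the description as correct only if the grounder resolves it back to the intended target rather than a distractor. The sketch below illustrates one plausible form of this protocol; `Box3D`, `ground_fn`, `grounding_accuracy`, and the axis-aligned volumetric IoU criterion are illustrative assumptions, not the paper's exact metric or interface.

```python
# Hedged sketch: judging disambiguation quality with a 3D visual grounding
# model. All names here are hypothetical placeholders for illustration.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Box3D:
    """Axis-aligned 3D box given by center (cx, cy, cz) and size (dx, dy, dz)."""
    cx: float
    cy: float
    cz: float
    dx: float
    dy: float
    dz: float

    def bounds(self):
        # Per-axis (low, high) intervals of the box.
        return (
            (self.cx - self.dx / 2, self.cx + self.dx / 2),
            (self.cy - self.dy / 2, self.cy + self.dy / 2),
            (self.cz - self.dz / 2, self.cz + self.dz / 2),
        )


def iou_3d(a: Box3D, b: Box3D) -> float:
    """Volumetric IoU of two axis-aligned 3D boxes."""
    inter = 1.0
    for (a_lo, a_hi), (b_lo, b_hi) in zip(a.bounds(), b.bounds()):
        overlap = min(a_hi, b_hi) - max(a_lo, b_lo)
        if overlap <= 0:
            return 0.0
        inter *= overlap
    vol_a = a.dx * a.dy * a.dz
    vol_b = b.dx * b.dy * b.dz
    return inter / (vol_a + vol_b - inter)


def grounding_accuracy(
    captions: List[str],
    scenes: List[object],
    gt_boxes: List[Box3D],
    ground_fn: Callable[[object, str], Box3D],  # any off-the-shelf 3D grounder
    iou_threshold: float = 0.5,
) -> float:
    """Fraction of generated descriptions that the grounding model maps back
    to the intended target box (IoU above the threshold), i.e. descriptions
    specific enough to exclude nearby distractors."""
    hits = 0
    for caption, scene, gt in zip(captions, scenes, gt_boxes):
        pred = ground_fn(scene, caption)
        if iou_3d(pred, gt) >= iou_threshold:
            hits += 1
    return hits / max(len(captions), 1)
```

Under this reading, a caption like "the chair" would likely fail in a room with several chairs, while "the chair closest to the window, left of the desk" should let the grounder recover the unique target, which is what target exclusivity demands.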