Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining Z Qi, R Dong, G Fan, Z Ge, X Zhang, K Ma, L Yi ICML 2023, 2023 | 69 | 2023 |
Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning? R Dong, Z Qi, L Zhang, J Zhang, J Sun, Z Ge, L Yi, K Ma ICLR 2023, 2022 | 60 | 2022 |
DreamLLM: Synergistic Multimodal Comprehension and Creation R Dong, C Han, Y Peng, Z Qi, Z Ge, J Yang, L Zhao, J Sun, H Zhou, H Wei, ... ICLR 2024 (Spotlight), 2023 | 58 | 2023 |
ShapeLLM: Universal 3D Object Understanding for Embodied Interaction Z Qi, R Dong, S Zhang, H Geng, C Han, Z Ge, L Yi, K Ma ECCV 2024, 2024 | 12 | 2024 |
DreamLLM: Synergistic Multimodal Comprehension and Creation R Dong, C Han, Y Peng, Z Qi, Z Ge, J Yang, L Zhao, J Sun, H Zhou, H Wei, X Kong, X Zhang, K Ma, L Yi arXiv preprint arXiv:2309.11499, 2023 | 9 | 2023 |
VPP: Efficient Conditional 3D Generation via Voxel-Point Progressive Representation Z Qi, M Yu, R Dong, K Ma NeurIPS 2023, 2023 | 7 | 2023 |
Point-GCC: Universal Self-Supervised 3D Scene Pre-Training via Geometry-Color Contrast G Fan, Z Qi, W Shi, K Ma arXiv preprint arXiv:2305.19623, 2023 | 5 | 2023 |
DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation Y Peng, Y Cui, H Tang, Z Qi, R Dong, J Bai, C Han, Z Ge, X Zhang, ST Xia arXiv preprint arXiv:2406.16855, 2024 | | 2024 |