Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?

R Dong, Z Qi, L Zhang, J Zhang, J Sun, Z Ge, et al. - arXiv preprint arXiv:2212.08320, 2022 - arxiv.org
The success of deep learning heavily relies on large-scale data with comprehensive labels, which are more expensive and time-consuming to acquire in 3D than for 2D images or natural language. This motivates using models pretrained on modalities other than 3D as teachers for cross-modal knowledge transfer. In this paper, we revisit masked modeling from a unified knowledge-distillation perspective, and we show that foundational Transformers pretrained on 2D images or natural language can help self-supervised 3D representation learning through training Autoencoders as Cross-Modal Teachers (ACT). The pretrained Transformers are transferred as cross-modal 3D teachers using discrete variational autoencoding self-supervision, during which the Transformers are kept frozen and adapted with prompt tuning for better knowledge inheritance. The latent features encoded by the 3D teachers are used as the target of masked point modeling, wherein the dark knowledge is distilled to the 3D Transformer students as foundational geometry understanding. Our ACT-pretrained 3D learner achieves state-of-the-art generalization capacity across various downstream benchmarks, e.g., 88.21% overall accuracy on ScanObjectNN. Code has been released at https://github.com/RunpeiDong/ACT.
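To make the distillation step concrete, below is a minimal PyTorch sketch of the masked-point-modeling stage described in the abstract: a frozen Transformer (standing in for a 2D-pretrained backbone) with learnable prompt tokens produces latent targets, and a 3D Transformer student regresses those targets at masked point patches. This is an illustrative sketch, not the authors' implementation: module names such as PointTokenizer, CrossModalTeacher, and Student3D, the tensor shapes, the MSE objective, and the use of generic nn.TransformerEncoder blocks are all assumptions; the dVAE self-supervision that tunes the teacher's prompts is omitted.

# Minimal sketch of the ACT-style masked point modeling stage (assumed shapes and names).
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 384          # shared embedding width (assumed)
N_TOKENS = 64    # point patches per cloud (assumed)
MASK_RATIO = 0.6

def make_transformer(depth: int) -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(d_model=D, nhead=6, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=depth)

class PointTokenizer(nn.Module):
    """Embeds local point patches into tokens (32 xyz points per patch, assumed)."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(3 * 32, D)

    def forward(self, patches):          # patches: (B, N_TOKENS, 32, 3)
        return self.proj(patches.flatten(2))

class CrossModalTeacher(nn.Module):
    """Frozen pretrained Transformer reused as a 3D teacher via prompt tuning."""
    def __init__(self):
        super().__init__()
        self.backbone = make_transformer(depth=12)   # stands in for a 2D-pretrained ViT
        for p in self.backbone.parameters():
            p.requires_grad = False                  # teacher weights stay frozen
        self.prompts = nn.Parameter(torch.zeros(1, 8, D))  # learnable prompt tokens

    def forward(self, tokens):                       # tokens: (B, N_TOKENS, D)
        prompts = self.prompts.expand(tokens.size(0), -1, -1)
        out = self.backbone(torch.cat([prompts, tokens], dim=1))
        return out[:, prompts.size(1):]              # latent target per point patch

class Student3D(nn.Module):
    """3D Transformer student trained with masked point modeling."""
    def __init__(self):
        super().__init__()
        self.backbone = make_transformer(depth=6)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, D))
        self.head = nn.Linear(D, D)                  # predicts the teacher's latents

    def forward(self, tokens, mask):                 # mask: (B, N_TOKENS) bool
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
        return self.head(self.backbone(x))

# One distillation step: the student regresses the frozen teacher's latent
# features at the masked positions ("dark knowledge" transfer).
tokenizer, teacher, student = PointTokenizer(), CrossModalTeacher(), Student3D()
patches = torch.randn(2, N_TOKENS, 32, 3)            # toy point patches
tokens = tokenizer(patches)
mask = torch.rand(2, N_TOKENS) < MASK_RATIO

with torch.no_grad():
    targets = teacher(tokens)                        # latent targets from the teacher
pred = student(tokens, mask)
loss = F.mse_loss(pred[mask], targets[mask])
loss.backward()

In the paper's actual pipeline the teacher is first adapted with discrete variational autoencoding before serving targets; the sketch above only shows how frozen cross-modal features can supervise a masked 3D student.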