T Ge, J Hu, L Wang, X Wang, SQ Chen… - arXiv preprint arXiv …, 2023 - arxiv.org
We propose the In-context Autoencoder (ICAE), leveraging the power of a large language
model (LLM) to compress a long context into short compact memory slots that can be directly …
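To make the compression idea concrete, below is a minimal, self-contained PyTorch sketch of the general approach described in the snippet, not the authors' implementation: a handful of learnable memory-slot embeddings are appended to the long context, everything is encoded jointly, and only the slot outputs are kept as the compact representation a decoder LLM could later condition on. The class name, dimensions, slot count, and the toy transformer encoder are all illustrative assumptions; in the paper the encoder is the LLM itself rather than a small trained-from-scratch module.

```python
import torch
import torch.nn as nn

class ToyInContextAutoencoder(nn.Module):
    """Illustrative sketch: append k learnable memory-slot embeddings to the
    context, encode jointly, and keep only the slot outputs as the compressed
    representation of the context (hypothetical stand-in for ICAE)."""

    def __init__(self, vocab_size=32000, d_model=256, num_slots=16, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Learnable memory-slot embeddings (assumed design, for illustration only).
        self.memory_slots = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, context_ids):
        # context_ids: (batch, seq_len) token ids of the long context.
        batch = context_ids.size(0)
        ctx = self.embed(context_ids)                                  # (B, L, D)
        slots = self.memory_slots.unsqueeze(0).expand(batch, -1, -1)   # (B, k, D)
        hidden = self.encoder(torch.cat([ctx, slots], dim=1))          # (B, L+k, D)
        # The last k positions are the compact memory slots a decoder LLM
        # would condition on in place of the full context.
        return hidden[:, -slots.size(1):, :]                           # (B, k, D)

if __name__ == "__main__":
    model = ToyInContextAutoencoder()
    long_context = torch.randint(0, 32000, (1, 512))  # toy 512-token context
    memory = model(long_context)
    print(memory.shape)  # torch.Size([1, 16, 256]): 512 tokens -> 16 slots
```

The point of the sketch is only the shape of the computation: a long token sequence goes in, and a much shorter sequence of dense slot vectors comes out, which is the compression the abstract refers to.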