A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT

Y Cao, S Li, Y Liu, Z Yan, Y Dai, PS Yu… - arXiv preprint arXiv …, 2023 - arxiv.org
Recently, ChatGPT, along with DALL-E-2 and Codex, has been gaining significant attention
from society. As a result, many individuals have become interested in related resources and …

A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT

C Zhou, Q Li, C Li, J Yu, Y Liu, G Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Pretrained Foundation Models (PFMs) are regarded as the foundation for various
downstream tasks with different data modalities. A PFM (e.g., BERT, ChatGPT, and GPT-4) is …

A survey on in-context learning

Q Dong, L Li, D Dai, C Zheng, Z Wu, B Chang… - arXiv preprint arXiv …, 2022 - arxiv.org
With the increasing ability of large language models (LLMs), in-context learning (ICL) has
become a new paradigm for natural language processing (NLP), where LLMs make …

Deep bidirectional language-knowledge graph pretraining

M Yasunaga, A Bosselut, H Ren… - Advances in …, 2022 - proceedings.neurips.cc
Pretraining a language model (LM) on text has been shown to help various downstream
NLP tasks. Recent works show that a knowledge graph (KG) can complement text data …

Pre-trained language models and their applications

H Wang, J Li, H Wu, E Hovy, Y Sun - Engineering, 2023 - Elsevier
Pre-trained language models have achieved striking success in natural language
processing (NLP), leading to a paradigm shift from supervised learning to pre-training …

Multimodal foundation models: From specialists to general-purpose assistants

C Li, Z Gan, Z Yang, J Yang, L Li… - … and Trends® in …, 2024 - nowpublishers.com
This paper presents a comprehensive survey of the taxonomy and evolution of multimodal
foundation models that demonstrate vision and vision-language capabilities, focusing on …

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

Evaluating the ripple effects of knowledge editing in language models

R Cohen, E Biran, O Yoran, A Globerson… - Transactions of the …, 2024 - direct.mit.edu
Modern language models capture a large body of factual knowledge. However, some facts
can be incorrectly induced or become obsolete over time, resulting in factually incorrect …

Large-scale multi-modal pre-trained models: A comprehensive survey

X Wang, G Chen, G Qian, P Gao, XY Wei… - Machine Intelligence …, 2023 - Springer
With the urgent demand for generalized deep models, many pre-trained big models are
proposed, such as bidirectional encoder representations (BERT), vision transformer (ViT) …

PTR: Prompt tuning with rules for text classification

X Han, W Zhao, N Ding, Z Liu, M Sun - AI Open, 2022 - Elsevier
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-
trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …