Interpreting deep learning models in natural language processing: A review

X Sun, D Yang, X Li, T Zhang, Y Meng, H Qiu… - arXiv preprint arXiv …, 2021 - arxiv.org
Neural network models have achieved state-of-the-art performance in a wide range of
natural language processing (NLP) tasks. However, a long-standing criticism against neural …

Expressive text-to-image generation with rich text

S Ge, T Park, JY Zhu, JB Huang - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Plain text has become a prevalent interface for text-to-image synthesis. However, its limited
customization options hinder users from accurately describing desired outputs. For example …

Text classification via large language models

X Sun, X Li, J Li, F Wu, S Guo, T Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Despite the remarkable success of large language models (LLMs) such as GPT-3,
they still significantly underperform fine-tuned models in the task of text …

CPT: A pre-trained unbalanced transformer for both Chinese language understanding and generation

Y Shao, Z Geng, Y Liu, J Dai, H Yan, F Yang… - Science China …, 2024 - Springer
In this paper, we take advantage of previous pre-trained models (PTMs) and propose a
novel Chinese pre-trained unbalanced transformer (CPT). Different from previous Chinese …

A long-text classification method of Chinese news based on BERT and CNN

X Chen, P Cong, S Lv - IEEE Access, 2022 - ieeexplore.ieee.org
Text Classification is an important research area in natural language processing (NLP) that
has received a considerable amount of scholarly attention in recent years. However, real …

A review of few-shot and zero-shot learning for node classification in social networks

J Chen, R Mi, H Wang, H Wu, J Mo… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Node classification tasks aim to assign labels or categories to individual nodes in a graph based on
their structural properties or node attributes. They can be adopted for various types of graph systems …

Pushing the limits of ChatGPT on NLP tasks

X Sun, L Dong, X Li, Z Wan, S Wang, T Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Despite the success of ChatGPT, its performance on most NLP tasks is still well below the
supervised baselines. In this work, we looked into the causes, and discovered that its subpar …

RoCBert: Robust Chinese BERT with multimodal contrastive pretraining

H Su, W Shi, X Shen, Z Xiao, T Ji, J Fang… - Proceedings of the 60th …, 2022 - aclanthology.org
Large-scale pretrained language models have achieved SOTA results on NLP tasks.
However, they have been shown vulnerable to adversarial attacks especially for logographic …

GNN-LM: Language modeling based on global contexts via GNN

Y Meng, S Zong, X Li, X Sun, T Zhang, F Wu… - arXiv preprint arXiv …, 2021 - arxiv.org
Inspired by the notion that "to copy is easier than to memorize", in this work, we
introduce GNN-LM, which extends the vanilla neural language model (LM) by allowing to …

Zero-shot micro-video classification with neural variational inference in graph prototype network

J Chen, J Wang, Z Dai, H Wu, M Wang… - Proceedings of the 31st …, 2023 - dl.acm.org
Micro-video classification plays a central role in online content recommendation platforms,
such as Kwai and TikTok. Existing works on video classification largely exploit the …