A survey of graph neural networks in various learning paradigms: methods, applications, and challenges

L Waikhom, R Patgiri - Artificial Intelligence Review, 2023 - Springer
In the last decade, deep learning has reinvigorated the machine learning field. It has solved
many problems in computer vision, speech recognition, natural language processing, and …

Spanish pre-trained BERT model and evaluation data

J Cañete, G Chaperon, R Fuentes, JH Ho… - arXiv preprint arXiv …, 2023 - arxiv.org
The Spanish language is one of the top 5 spoken languages in the world. Nevertheless,
finding resources to train or evaluate Spanish language models is not an easy task. In this …

A transformer-based approach for source code summarization

WU Ahmad, S Chakraborty, B Ray… - arXiv preprint arXiv …, 2020 - arxiv.org
Generating a readable summary that describes the functionality of a program is known as
source code summarization. In this task, learning code representation by modeling the …

Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT

S Wu, M Dredze - arXiv preprint arXiv:1904.09077, 2019 - arxiv.org
Pretrained contextual representation models (Peters et al., 2018; Devlin et al., 2018) have
pushed forward the state-of-the-art on many NLP tasks. A new release of BERT (Devlin …

75 languages, 1 model: Parsing universal dependencies universally

D Kondratyuk, M Straka - arXiv preprint arXiv:1904.02099, 2019 - arxiv.org
We present UDify, a multilingual multi-task model capable of accurately predicting universal
part-of-speech, morphological features, lemmas, and dependency trees simultaneously for …

Choosing transfer languages for cross-lingual learning

YH Lin, CY Chen, J Lee, Z Li, Y Zhang, M Xia… - arXiv preprint arXiv …, 2019 - arxiv.org
Cross-lingual transfer, where a high-resource transfer language is used to improve the
accuracy of a low-resource task language, is now an invaluable tool for improving …

COLD: A benchmark for Chinese offensive language detection

J Deng, J Zhou, H Sun, C Zheng, F Mi, H Meng… - arXiv preprint arXiv …, 2022 - arxiv.org
Offensive language detection is increasingly crucial for maintaining a civilized social media
platform and deploying pre-trained language models. However, this task in Chinese is still …

Multilingual generative language models for zero-shot cross-lingual event argument extraction

KH Huang, I Hsu, P Natarajan, KW Chang… - arXiv preprint arXiv …, 2022 - arxiv.org
We present a study on leveraging multilingual pre-trained generative language models for
zero-shot cross-lingual event argument extraction (EAE). By formulating EAE as a language …

Cognitive overload: Jailbreaking large language models with overloaded logical thinking

N Xu, F Wang, B Zhou, BZ Li, C Xiao… - arXiv preprint arXiv …, 2023 - arxiv.org
While large language models (LLMs) have demonstrated increasing power, they have also
given rise to a wide range of harmful behaviors. As representatives, jailbreak attacks can …

Cross-lingual BERT transformation for zero-shot dependency parsing

Y Wang, W Che, J Guo, Y Liu, T Liu - arXiv preprint arXiv:1909.06775, 2019 - arxiv.org
This paper investigates the problem of learning cross-lingual representations in a contextual
space. We propose Cross-Lingual BERT Transformation (CLBT), a simple and efficient …