Accounting for agreement phenomena in sentence comprehension with transformer language models: Effects of similarity-based interference on surprisal and attention. SH Ryu, RL Lewis. arXiv preprint arXiv:2104.12874, 2021. Cited by 26.
A Large-scale Comprehensive Abusiveness Detection Dataset with Multifaceted Labels from Reddit. H Song*, SH Ryu*, H Lee, JC Park. Proceedings of the 25th Conference on Computational Natural Language …, 2021. Cited by 9.
Using Transformer Language Models to Integrate Surprisal, Entropy, and Working Memory Retrieval Accounts of Sentence Processing. SH Ryu, R Lewis. 35th Annual Conference on Human Sentence Processing, 2022. Cited by 2.
Plausibility Processing in Transformer Language Models: Focusing on the Role of Attention Heads in GPT. SH Ryu. arXiv preprint arXiv:2310.13824, 2023.
Flexible Acceptance Condition of Generics from a Probabilistic Viewpoint: Towards Formalization of the Semantics of Generics. SH Ryu, W Yang, JC Park. Journal of Psycholinguistic Research 51 (6), 1209-1229, 2022.
On the interaction between dependency frequency and semantic fit in sentence processing. SH Ryu, RP Chaves. Society for Computation in Linguistics 2 (1), 2019.
A visual half-field study for meaning process of Korean ambiguous eojeol. S Kim, SH Ryu, K Nam. Proceedings of the 15th Annual Meeting of the Japanese Society for Cognitive Psychology, 5, 2017.