BECEL: Benchmark for consistency evaluation of language models

M Jang, DS Kwon, T Lukasiewicz - Proceedings of the 29th International Conference on Computational …, 2022 - aclanthology.org
Abstract
Behavioural consistency is a critical condition for a language model (LM) to become trustworthy like humans. Despite its importance, however, there is little consensus on the definition of LM consistency, resulting in different definitions across many studies. In this paper, we first propose the idea of LM consistency based on behavioural consistency and establish a taxonomy that classifies previously studied consistencies into several sub-categories. Next, we create a new benchmark that allows us to evaluate a model on 19 test cases, distinguished by multiple types of consistency and diverse downstream tasks. Through extensive experiments on the new benchmark, we ascertain that none of the modern pre-trained language models (PLMs) performs well in every test case, while exhibiting high inconsistency in many cases. Our experimental results suggest that a unified benchmark that covers broad aspects (i.e., multiple consistency types and tasks) is essential for a more precise evaluation.
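The kind of behavioural-consistency test the abstract describes can be illustrated with a minimal sketch. The snippet below is a hypothetical example, not the paper's actual protocol or data: it treats a model as "semantically consistent" on a paraphrase pair if both paraphrases receive the same prediction, and reports the agreement rate over a set of pairs. The `toy_model` classifier and the example sentences are stand-ins invented for illustration.

```python
# Hypothetical sketch of a semantic-consistency check: a model should assign
# the same label to paraphrases of the same input. The toy classifier below
# is a stand-in for a real pre-trained LM; names and data are illustrative.

def toy_model(text: str) -> str:
    # Stand-in sentiment classifier: "positive" if an optimistic keyword
    # appears, else "negative".
    return "positive" if any(w in text.lower() for w in ("good", "great")) else "negative"

def consistency_rate(model, paraphrase_pairs):
    """Fraction of paraphrase pairs on which the model's predictions agree."""
    agree = sum(model(a) == model(b) for a, b in paraphrase_pairs)
    return agree / len(paraphrase_pairs)

pairs = [
    ("The movie was good.", "The film was great."),                # labels agree
    ("The plot was good overall.", "Overall the plot was fine."),  # keyword lost, labels flip
]
print(consistency_rate(toy_model, pairs))  # 0.5
```

A real benchmark like the one described would replace the toy classifier with a PLM and draw paraphrase pairs (and other perturbation types, per the taxonomy) from curated downstream-task datasets.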