| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Measuring bias in contextualized word representations | K Kurita, N Vyas, A Pareek, AW Black, Y Tsvetkov | arXiv preprint arXiv:1906.07337 | 457 | 2019 |
| Weight poisoning attacks on pre-trained models | K Kurita, P Michel, G Neubig | arXiv preprint arXiv:2004.06660 | 382 | 2020 |
| Towards robust toxic content classification | K Kurita, A Belova, A Anastasopoulos | arXiv preprint arXiv:1912.06872 | 36 | 2019 |
| Quantifying social biases in contextual word representations | K Kurita, N Vyas, A Pareek, AW Black, Y Tsvetkov | 1st ACL Workshop on Gender Bias in Natural Language Processing | 30 | 2019 |
| An Overview of Normalization Methods in Deep Learning | K Kurita | Machine Learning Explained | 5 | 2018 |