Human evaluation of English–Irish transformer-based NMT

S Lankford, H Afli, A Way - Information, 2022 - mdpi.com
In this study, a human evaluation is carried out on how hyperparameter settings impact the quality of Transformer-based Neural Machine Translation (NMT) for the low-resourced English–Irish pair. SentencePiece models using both Byte Pair Encoding (BPE) and unigram approaches were appraised. Variations in model architecture included modifying the number of layers, evaluating the optimal number of attention heads, and testing various regularisation techniques. The greatest performance improvement was recorded for a Transformer-optimized model with a 16k BPE subword model, which demonstrated a BLEU score improvement of 7.8 points over a baseline Recurrent Neural Network (RNN) model. When benchmarked against Google Translate, our translation engines demonstrated significant improvements. Furthermore, a quantitative fine-grained manual evaluation was conducted that compared the performance of the machine translation systems: using the Multidimensional Quality Metrics (MQM) error taxonomy, the error types generated by an RNN-based system and a Transformer-based system were examined. Our findings show that the best-performing Transformer system significantly reduces both accuracy and fluency errors compared with the RNN-based model.
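The 16k BPE subword model mentioned in the abstract refers to Byte Pair Encoding, which builds a subword vocabulary by repeatedly merging the most frequent adjacent symbol pair. The paper used the SentencePiece implementation; as a toy illustration only, the core merge procedure can be sketched in pure Python (the `</w>` end-of-word marker and the example word list are illustrative assumptions, not details from the paper):

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Toy BPE: learn merge rules from a word list (illustrative sketch)."""
    # Represent each word as a tuple of symbols plus an end-of-word marker.
    vocab = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in vocab.items():
            for pair in zip(word, word[1:]):
                pairs[pair] += freq
        if not pairs:
            break  # every word is a single symbol; nothing left to merge
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        merges.append(best)
        # Rewrite the vocabulary with the chosen pair fused into one symbol.
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges, vocab

merges, vocab = learn_bpe(["low", "low", "lower"], num_merges=10)
```

In the real system, the merge count is chosen so the final vocabulary reaches the target size (16k here); frequent words collapse into single tokens while rare words stay decomposed into smaller subwords, which matters for a low-resourced pair like English–Irish.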
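The 7.8-point gain the abstract reports is measured in BLEU, which combines modified n-gram precision with a brevity penalty. A simplified sentence-level, single-reference sketch (the paper would have used a corpus-level implementation such as sacreBLEU, and real BLEU additionally clips n-gram counts against multiple references) could look like:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU against one reference (sketch only)."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clipped overlap: each candidate n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum((cand_ngrams & ref_ngrams).values())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Geometric mean of the n-gram precisions.
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: penalise candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

BLEU scores are conventionally reported on a 0–100 scale (multiply the value above by 100), so a 7.8-point improvement is a substantial jump; this is also why the paper pairs the automatic metric with an MQM-based human evaluation, since BLEU alone cannot distinguish accuracy errors from fluency errors.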