Grammatical sequence prediction for real-time neural semantic parsing

C Xiao, C Teichmann, K Arkoudas - arXiv preprint arXiv:1907.11049, 2019 - arxiv.org
While sequence-to-sequence (seq2seq) models achieve state-of-the-art performance in many natural language processing tasks, they can be too slow for real-time applications. One performance bottleneck is predicting the most likely next token over a large vocabulary; methods to circumvent this bottleneck are a current research topic. We focus specifically on using seq2seq models for semantic parsing, where we observe that grammars often exist which specify valid formal representations of utterance semantics. By developing a generic approach for restricting the predictions of a seq2seq model to grammatically permissible continuations, we arrive at a widely applicable technique for speeding up semantic parsing. The technique leads to a 74% speed-up on an in-house dataset with a large vocabulary, compared to the same neural model without grammatical restrictions.
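
The key idea, constraining the decoder's next-token prediction to grammatically permissible continuations so that scoring ranges over a small candidate set rather than the full vocabulary, can be illustrated with a short sketch. Everything below (the toy vocabulary, the allowed_next automaton, and the score_fn stand-in for the seq2seq decoder's output layer) is an illustrative assumption, not the paper's actual implementation:

    import numpy as np

    # Toy vocabulary for a formal meaning-representation language. The real
    # system's vocabulary and grammar come from the target semantic formalism.
    VOCAB = ["(", ")", "answer", "count", "city", "river", "</s>"]
    TOK = {t: i for i, t in enumerate(VOCAB)}

    def allowed_next(prefix):
        """Return the token ids that are grammatical continuations of `prefix`.

        A stand-in for a grammar automaton: a real implementation would track
        the state of, e.g., a parser or a finite-state approximation of the
        grammar rather than inspecting the raw prefix.
        """
        depth = prefix.count("(") - prefix.count(")")
        if not prefix:
            return [TOK["("]]                     # expressions start with "("
        if prefix[-1] == "(":
            return [TOK["answer"], TOK["count"]]  # an operator must follow "("
        if depth > 0:
            return [TOK["city"], TOK["river"], TOK[")"]]
        return [TOK["</s>"]]                      # balanced: only end-of-sequence

    def grammar_constrained_decode(score_fn, max_len=20):
        """Greedy decoding that scores only grammatical continuations.

        `score_fn(prefix, candidate_ids)` returns one logit per candidate;
        restricting it to the allowed subset is where the speed-up comes from,
        since the full-vocabulary softmax/argmax is avoided at every step.
        """
        prefix = []
        for _ in range(max_len):
            cand = allowed_next(prefix)
            logits = score_fn(prefix, cand)
            tok = VOCAB[cand[int(np.argmax(logits))]]
            if tok == "</s>":
                break
            prefix.append(tok)
        return prefix

    # Dummy scorer standing in for the trained seq2seq decoder.
    rng = np.random.default_rng(0)
    parse = grammar_constrained_decode(lambda p, ids: rng.normal(size=len(ids)))
    print(" ".join(parse))  # e.g. "( count river )"

Because score_fn is evaluated only on the allowed candidates, the per-step cost scales with the number of grammatical continuations instead of the vocabulary size, which is why such restrictions pay off most when the output vocabulary is large.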