Grammar prompting for domain-specific language generation with large language models

B Wang, Z Wang, X Wang, Y Cao… - Advances in Neural …, 2024 - proceedings.neurips.cc
Large language models (LLMs) can learn to perform a wide range of natural language tasks
from just a handful of in-context examples. However, for generating strings from highly …

Discrete opinion tree induction for aspect-based sentiment analysis

C Chen, Z Teng, Z Wang, Y Zhang - … of the 60th Annual Meeting of …, 2022 - aclanthology.org
Dependency trees have been intensively used with graph neural networks for aspect-based
sentiment classification. Though being effective, such methods rely on external dependency …

Same pre-training loss, better downstream: Implicit bias matters for language models

H Liu, SM Xie, Z Li, T Ma - International Conference on …, 2023 - proceedings.mlr.press
Language modeling on large-scale datasets improves performance of various
downstream tasks. The validation pre-training loss is often used as the evaluation metric for …

Sequence-to-sequence learning with latent neural grammars

Y Kim - Advances in Neural Information Processing …, 2021 - proceedings.neurips.cc
Sequence-to-sequence learning with neural networks has become the de facto standard for
sequence modeling. This approach typically models the local distribution over the next …

Nested named entity recognition as latent lexicalized constituency parsing

C Lou, S Yang, K Tu - arXiv preprint arXiv:2203.04665, 2022 - arxiv.org
Nested named entity recognition (NER) has been receiving increasing attention.
Recently, Fu et al. (2021) adapted a span-based constituency parser to tackle nested NER …

Semantic role labeling as dependency parsing: Exploring latent tree structures inside arguments

Y Zhang, Q Xia, S Zhou, Y Jiang, G Fu… - arXiv preprint arXiv …, 2021 - arxiv.org
Semantic role labeling (SRL) is a fundamental yet challenging task in the NLP community.
Recent work on SRL mainly falls into two lines: 1) BIO-based; 2) span-based. Despite …

Unsupervised vision-language grammar induction with shared structure modeling

B Wan, W Han, Z Zheng, T Tuytelaars - Proceedings ICLR 2022, 2022 - lirias.kuleuven.be
We introduce a new task, unsupervised vision-language (VL) grammar induction. Given an
image-caption pair, the goal is to extract a shared hierarchical structure for both image and …

Hierarchical phrase-based sequence-to-sequence learning

B Wang, I Titov, J Andreas, Y Kim - arXiv preprint arXiv:2211.07906, 2022 - arxiv.org
We describe a neural transducer that maintains the flexibility of standard sequence-to-
sequence (seq2seq) models while incorporating hierarchical phrases as a source of …

Cascading and direct approaches to unsupervised constituency parsing on spoken sentences

Y Tseng, CIJ Lai, H Lee - ICASSP 2023-2023 IEEE …, 2023 - ieeexplore.ieee.org
Past work on unsupervised parsing is constrained to the written form. In this paper, we present
the first study on unsupervised spoken constituency parsing given unlabeled spoken …

Augmenting transformers with recursively composed multi-grained representations

X Hu, Q Zhu, K Tu, W Wu - arXiv preprint arXiv:2309.16319, 2023 - arxiv.org
We present ReCAT, a recursive composition augmented Transformer that is able to explicitly
model hierarchical syntactic structures of raw texts without relying on gold trees during both …