PARADE: Passage Representation Aggregation for Document Reranking

C Li, A Yates, S MacAvaney, B He, Y Sun - ACM Transactions on …, 2023 - dl.acm.org
Pre-trained transformer models, such as BERT and T5, have been shown to be highly effective at
ad hoc passage and document ranking. Due to the inherent sequence length limits of these …

PARADE: Passage Representation Aggregation for Document Reranking

C Li, A Yates, S MacAvaney, B He, Y Sun - arXiv preprint arXiv …, 2020 - arxiv.org
Pretrained transformer models, such as BERT and T5, have been shown to be highly effective at
ad-hoc passage and document ranking. Due to inherent sequence length limits of these …
