… To show the generality of our approach, we also present tests conducted for other pretrained transformers in Section 6.5: a version of BERT that was more effectively pre-trained, i.e., …
… Historically, document expansion techniques have not been as popular as query expansion … interest in document expansion in the context of transformers, which we cover in Chapter 4. …
D Rau, J Kamps - European Conference on Information Retrieval, 2022 - Springer
… In recent years, large pre-trained transformers have led to substantial gains in performance over traditional retrieval … Does CE better rank the same documents retrieved by BM25? …
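The CE (cross-encoder) re-ranking setup this snippet asks about can be illustrated with a minimal sketch: a first-stage BM25 retriever produces a candidate list, and a pre-trained cross-encoder re-scores each query–document pair. The checkpoint name and candidate documents below are illustrative assumptions, not the models or data from the cited work.

```python
# Minimal sketch: re-rank BM25 candidates with a cross-encoder (CE).
# The checkpoint is an assumed, publicly available MS MARCO model.
from sentence_transformers import CrossEncoder

ce = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "what causes rainbows"
bm25_candidates = [  # assumed output of a first-stage BM25 retriever
    "Rainbows are caused by refraction of sunlight in water droplets.",
    "BM25 is a bag-of-words ranking function used in search engines.",
]

# The CE scores each (query, document) pair jointly with full attention.
scores = ce.predict([(query, doc) for doc in bm25_candidates])
reranked = sorted(zip(bm25_candidates, scores), key=lambda x: -x[1])
for doc, score in reranked:
    print(f"{score:.3f}  {doc}")
```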
… [16] use pretrained contextual embeddings, without fine-tuning, in downstream ranking … In this work we proposed a solution for applying Transformers to full-document re-ranking. Our TKL …
X Zhang, F Wei, M Zhou - arXiv preprint arXiv:1905.06566, 2019 - arxiv.org
… from Transformers. We design an unsupervised method to pre-train HIBERT for document modeling. We apply the pre-trained HIBERT to the task of document summarization and …
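The hierarchical idea behind HIBERT (a sentence-level encoder whose outputs feed a document-level encoder) can be sketched as below; the dimensions and layer counts are arbitrary assumptions, and the unsupervised sentence-masking pre-training objective from the paper is omitted.

```python
# Sketch of a two-level (hierarchical) Transformer document encoder in the
# spirit of HIBERT: a sentence encoder produces one vector per sentence,
# and a document encoder contextualizes those sentence vectors.
# All sizes are illustrative choices, not the paper's configuration.
import torch
import torch.nn as nn

class HierarchicalDocEncoder(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.sent_encoder = nn.TransformerEncoder(layer, nlayers)
        self.doc_encoder = nn.TransformerEncoder(layer, nlayers)

    def forward(self, token_ids):
        # token_ids: (num_sentences, sentence_len) for one document
        tokens = self.sent_encoder(self.embed(token_ids))
        sent_vecs = tokens.mean(dim=1)             # one vector per sentence
        # document encoder contextualizes sentences against each other
        return self.doc_encoder(sent_vecs.unsqueeze(0)).squeeze(0)

doc = torch.randint(0, 30522, (8, 32))    # 8 sentences, 32 tokens each
print(HierarchicalDocEncoder()(doc).shape)  # -> torch.Size([8, 256])
```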
… , we pre-train the Transformer model towards the supervised … At the finetuning stage, we use our pre-trained Text Encoder … for effective web-documentretrieval. Applied Intelligence 18, 3 …
… We consider the large-scale query-document retrieval problem: given a query (e.g., a … Transformer model, we compare the pretraining tasks to two baselines: No Pretraining and MLM. No …
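The MLM baseline named in this snippet is standard masked language modeling; a minimal sketch with common Hugging Face components follows. The tokenizer name and 15% masking rate are assumed defaults for illustration, not the paper's exact setup.

```python
# Sketch of the MLM pre-training baseline: randomly mask ~15% of tokens
# and train the model to reconstruct them from context.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(
    tokenizer=tok, mlm=True, mlm_probability=0.15
)

batch = collator([tok("transformers for document retrieval")])
print(batch["input_ids"])  # some tokens replaced by [MASK]
print(batch["labels"])     # -100 everywhere except masked positions
```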
… answer to a given question in relevant documents, e.g., retrieved by a search engine. The latter finds an exact text span in a document (or, typically, a paragraph) containing the answer. …
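The exact-span extraction step described here corresponds to standard extractive QA; a minimal sketch with the Hugging Face question-answering pipeline, using an assumed public SQuAD2 checkpoint rather than the system from the cited work:

```python
# Minimal extractive-QA sketch: given a retrieved paragraph, find the
# exact answer span. Checkpoint and texts are illustrative assumptions.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = ("The rainbow appears when sunlight is refracted and reflected "
           "inside water droplets in the atmosphere.")
result = qa(question="What causes a rainbow?", context=context)
# result contains the span text plus its character offsets and score
print(result["answer"], result["start"], result["end"], result["score"])
```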
… Specifically, unlike the in-batch negative documents used by the MedCPT retriever that … documents are sampled from rank e to rank f in the top retrieved documents by the pre-trained …
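The hard-negative mining scheme contrasted here with in-batch negatives can be sketched as follows. The snippet leaves the rank bounds e and f symbolic, so the values below are illustrative placeholders only.

```python
# Sketch of hard-negative mining: instead of in-batch negatives, sample
# negatives from ranks e..f of the documents returned by a pre-trained
# retriever. The bounds e, f and sample size k are illustrative.
import random

def sample_hard_negatives(ranked_doc_ids, positives, e=10, f=100, k=4):
    """ranked_doc_ids: doc ids sorted by retriever score (rank 1 first)."""
    window = ranked_doc_ids[e - 1 : f]        # ranks e..f (1-indexed)
    pool = [d for d in window if d not in positives]
    return random.sample(pool, min(k, len(pool)))

ranked = [f"doc{i}" for i in range(1, 201)]   # assumed retriever output
print(sample_hard_negatives(ranked, positives={"doc1", "doc42"}))
```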