SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval
Paper: https://arxiv.org/pdf/2207.02578
Code: https://github.com/microsoft/unilm/tree/master/simlm
Paper abstract
In this paper, we propose SimLM (Similarity matching with Language Model pre-training), a simple yet effective pre-training method for dense passage retrieval.
It employs a simple bottleneck architecture that learns to compress the passage information into a dense vector through self-supervised pre-training.
We use a replaced language modeling objective, inspired by ELECTRA, to improve sample efficiency and to reduce the input-distribution mismatch between pre-training and fine-tuning.
SimLM only requires access to an unlabeled corpus, and is applicable in scenarios where labeled data or queries are not available.
We conduct experiments on several large-scale passage retrieval datasets, and show substantial improvements over strong baselines under various settings.
Remarkably, SimLM even outperforms multi-vector approaches such as ColBERTv2, which incur significantly higher storage costs.
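To make the bottleneck idea above concrete, below is a minimal, self-contained PyTorch sketch, not the authors' implementation (see the linked repository for that): a deep encoder compresses a corrupted passage into its [CLS] vector, and a shallow decoder must recover the original token at every position from little more than that single vector. The `BottleneckPretrainer` class, the toy dimensions, and the omission of the ELECTRA-style generator and positional embeddings are all simplifications for illustration.

```python
# Hypothetical sketch of the representation bottleneck with a replaced
# language modeling objective; NOT the official SimLM code.
import torch
import torch.nn as nn


class BottleneckPretrainer(nn.Module):
    def __init__(self, vocab_size=30522, dim=768, enc_layers=12, dec_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True)
        # nn.TransformerEncoder deep-copies the layer, so encoder and decoder
        # get independent parameters; the decoder is deliberately shallow.
        self.encoder = nn.TransformerEncoder(layer, num_layers=enc_layers)
        self.decoder = nn.TransformerEncoder(layer, num_layers=dec_layers)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, corrupted_ids, original_ids):
        h = self.encoder(self.embed(corrupted_ids))
        cls_vec = h[:, :1, :]  # the dense bottleneck vector
        # The decoder sees only the bottleneck vector plus token embeddings,
        # never the encoder's full hidden states, so passage information must
        # flow through the single [CLS] vector.
        dec_in = torch.cat([cls_vec, self.embed(corrupted_ids)[:, 1:, :]], dim=1)
        logits = self.lm_head(self.decoder(dec_in))
        # Replaced LM objective: predict the ORIGINAL token at EVERY position,
        # not just at masked positions.
        return nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), original_ids.view(-1))


# Toy usage with a small encoder; position 3 stands in for a
# generator-replaced token.
ids = torch.randint(1, 30522, (2, 8))
corrupted = ids.clone()
corrupted[:, 3] = 0
loss = BottleneckPretrainer(enc_layers=2)(corrupted, ids)
print(loss.item())
```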
Results on MS-MARCO passage ranking task
| Model | dev MRR@10 | dev R@50 | dev R@1k | TREC DL 2019 nDCG@10 | TREC DL 2020 nDCG@10 |
|---|---|---|---|---|---|
| RocketQAv2 | 38.8 | 86.2 | 98.1 | – | – |
| coCondenser | 38.2 | 86.5 | 98.4 | 71.7 | 68.4 |
| ColBERTv2 | 39.7 | 86.8 | 98.4 | – | – |
| SimLM (this model) | 41.1 | 87.8 | 98.7 | 71.4 | 69.7 |
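For completeness, here is a hedged usage sketch for the fine-tuned checkpoint this page describes. CLS pooling and L2 normalization are assumptions based on common dense-retriever practice; consult the linked repository for the exact pooling and any query/passage formatting the checkpoint expects.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Model id as listed on this page; pooling choice below is an assumption.
tokenizer = AutoTokenizer.from_pretrained('intfloat/simlm-base-msmarco-finetuned')
model = AutoModel.from_pretrained('intfloat/simlm-base-msmarco-finetuned')
model.eval()

texts = [
    'how long is the great wall of china',                    # query
    'The Great Wall of China is over 13,000 miles long.',     # passage
]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    out = model(**batch)

# Take the [CLS] vector as the dense representation, then L2-normalize so
# that the dot product equals cosine similarity.
embeds = torch.nn.functional.normalize(out.last_hidden_state[:, 0, :], dim=-1)
score = embeds[0] @ embeds[1]
print(f'query-passage similarity: {score.item():.4f}')
```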