deepset/gbert-base-germandpr-question_encoder


Overview

Language model: gbert-base-germandpr
Language: German
Training data: GermanDPR train set (~ 56MB)
Eval data: GermanDPR test set (~ 6MB)
Infrastructure: 4x V100 GPU
Published: Apr 26th, 2021


Details

  • We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages.
  • The dataset is GermanDPR, a new German-language dataset, which we hand-annotated and published online.
  • It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set.
    Each pair comes with one positive context and three hard negative contexts.
  • As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files).
  • The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia.

See https://deepset.ai/germanquad for more details and dataset download.
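For orientation, the records follow the original DPR JSON layout. The sketch below shows one hypothetical training record; the field names are an assumption based on the DPR convention and should be verified against the downloaded files.

# Illustrative sketch of one GermanDPR training record (DPR-style layout).
# The question/answer content is hypothetical and the field names are an
# assumption based on the original DPR format -- check the downloaded files.
example_record = {
    "question": "Wie hoch ist die Zugspitze?",
    "answers": ["2962 Meter"],
    "positive_ctxs": [  # exactly one positive context per pair
        {"title": "Zugspitze", "text": "Die Zugspitze ist mit 2962 Metern ..."}
    ],
    "hard_negative_ctxs": [  # three hard negative contexts per pair
        {"title": "Watzmann", "text": "Der Watzmann ist ..."},
        # ... two more hard negative contexts
    ],
}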


Hyperparameters

batch_size = 40
n_epochs = 20
num_training_steps = 4640
num_warmup_steps = 460
max_seq_len = 32 tokens for question encoder and 300 tokens for passage encoder
learning_rate = 1e-6
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
num_hard_negatives = 2
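
The listing below sketches how a comparable training run could be set up with Haystack's DensePassageRetriever.train(). It is a sketch only: it assumes the Haystack 1.x API (argument names may differ across versions), and the data paths and file names are placeholders rather than the actual training setup.

# Sketch of a comparable DPR training run (assumes Haystack 1.x; paths are placeholders).
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import DensePassageRetriever

retriever = DensePassageRetriever(
    document_store=InMemoryDocumentStore(),
    query_embedding_model="deepset/gbert-base",    # gbert-base as question encoder
    passage_embedding_model="deepset/gbert-base",  # gbert-base as passage encoder
    max_seq_len_query=32,                          # 32 tokens for the question encoder
    max_seq_len_passage=300,                       # 300 tokens for the passage encoder
)
retriever.train(
    data_dir="data/germandpr",              # placeholder directory
    train_filename="GermanDPR_train.json",  # placeholder file name
    dev_filename="GermanDPR_test.json",     # placeholder file name
    n_epochs=20,
    batch_size=40,
    learning_rate=1e-6,
    num_warmup_steps=460,
    num_hard_negatives=2,
    save_dir="saved_models/gbert-base-germandpr",
)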


Performance

During training, we monitored the in-batch average rank and the loss, and we evaluated different batch sizes, numbers of epochs, and numbers of hard negatives on a dev set split from the train set.
The dev split contained 1030 question/answer pairs.
Even without thorough hyperparameter tuning, learning was stable: multiple restarts with different seeds produced very similar results.
Note that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier.
After fixing the hyperparameters, we trained the model on the full GermanDPR train set.
We further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.
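
Recall@k here is the fraction of test questions for which at least one gold passage appears among the top-k retrieved passages. A minimal sketch of that metric (function name and data layout are illustrative, not part of the released evaluation code):

# Minimal sketch of recall@k: the share of questions whose gold passage
# shows up in the top-k retrieval results. Illustrative only.
def recall_at_k(retrieved_ids, gold_ids, k):
    hits = sum(
        1
        for retrieved, gold in zip(retrieved_ids, gold_ids)
        if any(doc_id in gold for doc_id in retrieved[:k])
    )
    return hits / len(gold_ids)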


Usage


In haystack

You can load the model in Haystack as a retriever for doing QA at scale:

# Import path assumes Haystack 1.x; older releases expose the class under haystack.retriever.dense.
from haystack.nodes import DensePassageRetriever

retriever = DensePassageRetriever(
    document_store=document_store,  # an already initialized Haystack DocumentStore
    query_embedding_model="deepset/gbert-base-germandpr-question_encoder",
    passage_embedding_model="deepset/gbert-base-germandpr-ctx_encoder",
)
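
After initializing the retriever, you would typically write passage embeddings into the document store and then query it; a short follow-up sketch, again assuming the Haystack 1.x API and a hypothetical German question:

# Index the passage embeddings, then retrieve the top passages for a question.
document_store.update_embeddings(retriever)
results = retriever.retrieve(query="Wie hoch ist die Zugspitze?", top_k=10)  # hypothetical query
for doc in results:
    print(doc.content[:100])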


Authors

  • Timo Möller: timo.moeller [at] deepset.ai
  • Julian Risch: julian.risch [at] deepset.ai
  • Malte Pietsch: malte.pietsch [at] deepset.ai


About us

We bring NLP to the industry via open source!
Our focus: industry-specific language models & large-scale QA systems.
Some of our work:

  • German BERT (aka “bert-base-german-cased”)
  • GermanQuAD and GermanDPR datasets and models (aka “gelectra-base-germanquad”, “gbert-base-germandpr”)
  • FARM
  • Haystack

Get in touch:
Twitter | LinkedIn | Website
By the way: we’re hiring!
