SciNCL
SciNCL is a pre-trained BERT language model for generating document-level embeddings of research papers.
It uses the citation graph neighborhood to generate samples for contrastive learning.
Prior to contrastive training, the model is initialized with weights from scibert-scivocab-uncased.
The underlying citation embeddings are trained on the S2ORC citation graph.
Paper: Neighborhood Contrastive Learning for Scientific Document Representations with Citation Embeddings (EMNLP 2022).
Code: https://github.com/malteos/scincl
PubMedNCL: Working with biomedical papers? Try PubMedNCL.
How to use the pretrained model
from transformers import AutoTokenizer, AutoModel

# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('malteos/scincl')
model = AutoModel.from_pretrained('malteos/scincl')

papers = [
    {'title': 'BERT', 'abstract': 'We introduce a new language representation model called BERT'},
    {'title': 'Attention is all you need', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks'},
]

# concatenate title and abstract with the [SEP] token
title_abs = [d['title'] + tokenizer.sep_token + (d.get('abstract') or '') for d in papers]

# preprocess the input
inputs = tokenizer(title_abs, padding=True, truncation=True, return_tensors="pt", max_length=512)

# inference
result = model(**inputs)

# take the first token ([CLS]) of each sequence as the document embedding
embeddings = result.last_hidden_state[:, 0, :]
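A common next step is to score paper pairs by cosine similarity of their embeddings. A minimal sketch (the random tensor below stands in for the `embeddings` produced by the snippet above; shape and hidden size are assumptions):

```python
import torch
import torch.nn.functional as F

# stand-in for the model output above: batch of 2 papers, hidden size 768
embeddings = torch.randn(2, 768)

# L2-normalize so the dot product equals cosine similarity
normalized = F.normalize(embeddings, p=2, dim=1)

# pairwise cosine similarity matrix between all papers in the batch
similarity = normalized @ normalized.T

print(similarity.shape)  # torch.Size([2, 2]); diagonal entries are ~1.0
```

Normalizing once and using a matrix product scales to ranking a query against many candidate papers in a single pass.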
Triplet Mining Parameters
Setting | Value
---|---
seed | 4
triples_per_query | 5
easy_positives_count | 5
easy_positives_strategy | knn
easy_positives_k | 20-25
easy_negatives_count | 3
easy_negatives_strategy | random_without_knn
hard_negatives_count | 2
hard_negatives_strategy | knn
hard_negatives_k | 3998-4000
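The table's strategies can be illustrated with a small sketch. This is not the authors' implementation (see the linked repository for that); the toy embeddings, function names, and the size of the kNN list used for `random_without_knn` are assumptions. Easy positives come from a near kNN band (k=20-25), hard negatives from a distant kNN band (k=3998-4000), and easy negatives from random papers outside the query's kNN list:

```python
import numpy as np

rng = np.random.default_rng(4)  # seed from the table

# toy citation embeddings: 5000 papers, 32-dim
# (the real ones are trained on the S2ORC citation graph)
emb = rng.standard_normal((5000, 32))

def knn_order(query_idx, embeddings):
    """All paper indices sorted by distance to the query, nearest first, query excluded."""
    dists = np.linalg.norm(embeddings - embeddings[query_idx], axis=1)
    order = np.argsort(dists)
    return order[order != query_idx]

def mine_samples(query_idx, embeddings):
    order = knn_order(query_idx, embeddings)
    # easy positives: 5 samples from the k=20-25 nearest-neighbor band
    positives = rng.choice(order[19:25], size=5, replace=False)
    # hard negatives: 2 samples from the distant k=3998-4000 band
    hard_negatives = rng.choice(order[3997:4000], size=2, replace=False)
    # easy negatives: 3 random papers outside the kNN list (here: top 4000)
    knn_set = set(order[:4000])
    pool = [i for i in range(len(embeddings)) if i != query_idx and i not in knn_set]
    easy_negatives = rng.choice(pool, size=3, replace=False)
    return positives, hard_negatives, easy_negatives

positives, hard_negatives, easy_negatives = mine_samples(0, emb)
```

With triples_per_query = 5, these sampled positives and negatives would then be paired with the query into five (query, positive, negative) training triples.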