Motivation

This model is based on anferico/bert-for-patents, a BERT-Large model (see the next section for details). By default, the pre-trained model outputs embeddings of size 768 (base models) or 1024 (large models). However, storing millions of embeddings can require a lot of memory/storage, so the embedding dimension has been reduced to 64, i.e. 1/16th of 1024, using Principal Component Analysis (PCA), and it still gives comparable performance. Yes, PCA gives better performance than NMF. Note: this process improves neither the runtime nor the memory requirement for running the model. It only reduces the space needed to store embeddings, for example for semantic search using vector databases.
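
The sketch below illustrates the kind of reduction described above; it is not the author's exact script. The pooling setup (loading the base model through sentence-transformers) and the fitting corpus are assumptions; a real fit would use a large, representative set of patent texts.

```python
# Minimal sketch: project 1024-d patent embeddings down to 64-d with PCA.
# Assumptions (not from the model card): sentence-transformers mean pooling
# over anferico/bert-for-patents, and a placeholder fitting corpus.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

# Base model named in the card (BERT-Large, 1024-d output embeddings).
model = SentenceTransformer("anferico/bert-for-patents")

# Placeholder corpus; PCA needs at least as many samples as target components.
corpus = [f"Patent abstract {i}: a method and apparatus for ..." for i in range(256)]
embeddings = model.encode(corpus)          # shape: (256, 1024)

pca = PCA(n_components=64)                 # 1/16th of 1024, as in the card
reduced = pca.fit_transform(embeddings)    # shape: (256, 64)
print(embeddings.shape, "->", reduced.shape)
```

At float32 precision this cuts per-vector storage from roughly 4 KB (1024 × 4 bytes) to 256 bytes (64 × 4 bytes), a 16× saving that becomes significant at millions of stored vectors.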


BERT for Patents

BERT for Patents is a model trained by Google on 100M+ patents (not just US patents).
If you want to learn more about the model, check out the blog post, white paper and GitHub page containing the original TensorFlow checkpoint.



Projects using this model (or variants of it):

  • Patents4IPPC (carried out by Pi School and commissioned by the Joint Research Centre (JRC) of the European Commission)
