# IndoBERT Base Model (phase2 – uncased)
IndoBERT is a state-of-the-art language model for Indonesian based on the BERT architecture. The model was pretrained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives.
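As a quick illustration of the MLM objective, the checkpoint can be queried through the Hugging Face `fill-mask` pipeline. This is a minimal sketch, assuming the `transformers` library is installed and the weights can be downloaded from the Hub; the example sentence is an arbitrary choice:

```python
from transformers import pipeline

# Load indobert-base-p2 with a masked-LM head; the pipeline predicts
# the token hidden behind [MASK].
fill_mask = pipeline("fill-mask", model="indobenchmark/indobert-base-p2")

# "Budi is [MASK]-ing at the library." A well-trained Indonesian MLM
# should rank plausible verbs such as "membaca" (reading) highly.
for prediction in fill_mask("Budi sedang [MASK] di perpustakaan."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```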
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|---|---|---|---|
| indobenchmark/indobert-base-p1 | 124.5M | Base | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-base-p2 | 124.5M | Base | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-large-p1 | 335.2M | Large | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-large-p2 | 335.2M | Large | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-lite-base-p1 | 11.7M | Base | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-lite-base-p2 | 11.7M | Base | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-lite-large-p1 | 17.7M | Large | Indo4B (23.43 GB of text) |
| indobenchmark/indobert-lite-large-p2 | 17.7M | Large | Indo4B (23.43 GB of text) |
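Any of the checkpoints above can be loaded by name for feature extraction. The sketch below, assuming `torch` and `transformers` are available, pulls contextual embeddings from indobert-base-p2; swapping in another model name from the table should work the same way:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "indobenchmark/indobert-base-p2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode a sentence and run the encoder without tracking gradients.
inputs = tokenizer("aku suka makan nasi goreng", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, seq_len, 768) for the Base arch.
print(outputs.last_hidden_state.shape)
```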