Fine-tuned DistilRoBERTa-base for NSFW Classification


Model Description

This model is a fine-tuned transformer for text classification. I fine-tuned DistilRoBERTa-base on Reddit posts to classify not safe for work (NSFW) content, that is, text considered inappropriate or unprofessional. The model predicts one of two classes: NSFW or safe for work (SFW).
It was fine-tuned on 14,317 Reddit posts pulled via the Reddit API (PRAW, https://praw.readthedocs.io/en/stable/).
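For context, a data pull of this kind with PRAW typically looks like the sketch below. This is illustrative only: the model card does not specify which subreddits were scraped or how posts were labeled, so the credentials, the subreddit choice, and the use of the over_18 flag as a labeling proxy are all assumptions.

import praw

# Assumed placeholder credentials; register an app at reddit.com/prefs/apps to obtain real ones.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="nsfw-dataset-script",
)

rows = []
for submission in reddit.subreddit("all").hot(limit=100):
    # Combine title and body text into one training example.
    text = f"{submission.title} {submission.selftext}".strip()
    # Assumed labeling proxy: Reddit's own over_18 flag.
    label = "NSFW" if submission.over_18 else "SFW"
    rows.append({"text": text, "label": label})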


How to Use

from transformers import pipeline

# Load the fine-tuned classifier through the sentiment-analysis pipeline;
# it returns the predicted label (NSFW or SFW) with a confidence score.
classifier = pipeline("sentiment-analysis", model="michellejieli/NSFW_text_classifier")
classifier("I see you’ve set aside this special time to humiliate yourself in public.")

Output:
[{'label': 'NSFW', 'score': 0.998853325843811}]
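The pipeline also accepts a list of texts, which is convenient for screening a batch of posts or comments at once. A minimal sketch, reusing the classifier defined above; the example texts are illustrative and not from the model card:

texts = [
    "Thanks for the detailed code review, merging now.",
    "I see you’ve set aside this special time to humiliate yourself in public.",
]
for text, result in zip(texts, classifier(texts)):
    # Each result is a dict like {'label': 'NSFW', 'score': 0.99}.
    is_nsfw = result["label"] == "NSFW"
    print(f"{is_nsfw}\t{result['score']:.3f}\t{text}")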


Contact

Please reach out to michelle.li851@duke.edu if you have any questions or feedback.

