qandos0/SentimentArEng

Tags: text-classification · transformers · safetensors · xlm-roberta · ar · en

SentimentArEng

This model is a fine-tuned version of cardiffnlp/twitter-xlm-roberta-base-sentiment for Arabic and English sentiment classification. It achieves the following results on the evaluation set:

  • Loss: 0.502831
  • Accuracy: 0.798512

Inference with pipeline

from transformers import pipeline

# Load the fine-tuned checkpoint and its tokenizer into a sentiment-analysis pipeline.
model_path = "Noor0/SentimentArEng"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)

# Arabic example: "The staff's treatment was below expectations."
sentiment_task("تعامل الموظفين كان أقل من المتوقع")

  • Output: [{'label': 'negative', 'score': 0.9905518293380737}]
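
The same prediction can be made without the pipeline helper when you need direct access to the tokenizer and class probabilities. The following is a minimal sketch assuming the same checkpoint; label names are read from the model config (negative/neutral/positive in the cardiffnlp base model).

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_path = "Noor0/SentimentArEng"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

# Tokenize the same Arabic example and score it directly.
inputs = tokenizer("تعامل الموظفين كان أقل من المتوقع", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and map the argmax back to a label name.
probs = torch.softmax(logits, dim=-1)[0]
pred = int(probs.argmax())
print(model.config.id2label[pred], float(probs[pred]))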

Training and evaluation data

  • Training set: 114,885 records
  • Evaluation set: 12,765 records

Training procedure

Training Loss | Epoch | Validation Loss | Accuracy
0.4511        | 2.0   | 0.502831        | 0.7985
0.3655        | 3.0   | 0.576118        | 0.7954
0.3019        | 4.0   | 0.625391        | 0.7985
0.2466        | 5.0   | 0.835689        | 0.7979

Training hyperparameters

  • The following hyperparameters were used during training (a fine-tuning sketch using these settings follows the list):
    • learning_rate=2e-5
    • num_train_epochs=20
    • weight_decay=0.01
    • batch_size=16
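
As a rough guide, these settings map onto Hugging Face TrainingArguments as sketched below. The toy dataset, the per-device interpretation of batch_size, and the per-epoch evaluation strategy are assumptions; the card does not publish the training data or the full Trainer configuration.

from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base)

# Toy stand-in for the real (unpublished) training/evaluation data.
toy = Dataset.from_dict(
    {"text": ["الخدمة ممتازة", "Terrible experience"], "label": [2, 0]}
).map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="SentimentArEng",
    learning_rate=2e-5,
    num_train_epochs=20,
    weight_decay=0.01,
    per_device_train_batch_size=16,  # assumption: batch_size=16 interpreted per device
    per_device_eval_batch_size=16,
    evaluation_strategy="epoch",     # assumption: per-epoch evaluation, as in the table above
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=toy,
    eval_dataset=toy,
    tokenizer=tokenizer,
)
trainer.train()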

Framework versions

  • Transformers 4.35.0
  • Pytorch 2.0.0
  • Datasets 2.11.0
  • Tokenizers 0.14.1