
cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual

This model is a fine-tuned version of cardiffnlp/twitter-xlm-roberta-base on the cardiffnlp/tweet_sentiment_multilingual (all) dataset via tweetnlp. The model was trained on the train split, and hyperparameters were tuned on the validation split.

The following metrics are achieved on the test split:

  • F1 (micro): 0.6931034482758621
  • F1 (macro): 0.692628774202147
  • Accuracy: 0.6931034482758621
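Note that for single-label multiclass classification, micro-averaged F1 is identical to accuracy, which is why the two figures above match. As a minimal sketch of how such scores are computed with scikit-learn (the label lists below are hypothetical, not the actual test data):

from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold and predicted labels (e.g. 0: negative, 1: neutral, 2: positive).
y_true = [0, 2, 1, 2, 0, 1]
y_pred = [0, 2, 2, 2, 0, 1]

print("F1 (micro):", f1_score(y_true, y_pred, average="micro"))
print("F1 (macro):", f1_score(y_true, y_pred, average="macro"))
print("Accuracy:  ", accuracy_score(y_true, y_pred))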

Usage

Install tweetnlp via pip.

pip install tweetnlp

Load the model in Python and classify an example tweet.

import tweetnlp

# Load the classifier; max_length caps tokenized inputs at 128 tokens.
model = tweetnlp.Classifier("cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual", max_length=128)
# User handles and URLs are masked with the {@handle@} / {{URL}} placeholders used in training.
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
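Since the checkpoint is a standard XLM-RoBERTa sequence-classification model, it should also be loadable directly through the Hugging Face transformers pipeline; a minimal sketch (the example text is arbitrary):

from transformers import pipeline

# Load the same checkpoint via transformers instead of tweetnlp.
sentiment = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual",
)
print(sentiment("I love this record!"))  # e.g. [{'label': 'positive', 'score': ...}]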

Reference

@inproceedings{camacho-collados-etal-2022-tweetnlp,
    title = "{T}weet{NLP}: Cutting-Edge Natural Language Processing for Social Media",
    author = "Camacho-collados, Jose  and
      Rezaee, Kiamehr  and
      Riahi, Talayeh  and
      Ushio, Asahi  and
      Loureiro, Daniel  and
      Antypas, Dimosthenis  and
      Boisson, Joanne  and
      Espinosa Anke, Luis  and
      Liu, Fangyu  and
      Mart{\'\i}nez C{\'a}mara, Eugenio  and others",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.5",
    pages = "38--49"
}
