IndoLEM is a comprehensive Indonesian NLU benchmark comprising three pillars of NLP tasks: morpho-syntax, semantics, and discourse. Presented at COLING 2020.
Fajri Koto, Afshin Rahimi, Jey Han Lau, and Timothy Baldwin. IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP. In Proceedings of the 28th COLING, December 2020.
IndoBERT is the Indonesian version of the BERT model. We trained the model on over 220M words, aggregated from three main sources:
We trained the model for 2.4M steps (180 epochs), reaching a final perplexity of 3.97 on the development set (similar to English BERT-base).
Load the model and tokenizer (tested with transformers==3.5.1):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("indolem/indobert-base-uncased")
model = AutoModel.from_pretrained("indolem/indobert-base-uncased")
```
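Once loaded, the model can be used to produce contextual embeddings in the usual BERT fashion. The sketch below, with an arbitrary Indonesian example sentence, assumes a standard BERT-base configuration (768-dimensional hidden states); indexing the output with `[0]` retrieves the last hidden state in both older and newer transformers versions:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("indolem/indobert-base-uncased")
model = AutoModel.from_pretrained("indolem/indobert-base-uncased")

# Encode an example sentence and run a forward pass without gradients
inputs = tokenizer("selamat pagi dunia", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs[0] is the last hidden state: (batch_size, seq_len, hidden_size)
print(outputs[0].shape)
```

The `[CLS]` token embedding (`outputs[0][:, 0]`) is the conventional choice for sentence-level classification tasks such as sentiment analysis.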
IndoLEM (“Indonesian Language Evaluation Montage”) is a comprehensive benchmark that comprises seven tasks for the Indonesian language, categorized into three pillars of NLP tasks: morpho-syntax, semantics, and discourse.
We provide a README file for each task. For further information on a given task, please visit its repository.
Experimental results on IndoLEM using mBERT, MalayBERT, and our IndoBERT:
| Task | Metric | Bi-LSTM | mBERT | MalayBERT | IndoBERT |
|---|---|---|---|---|---|
| POS Tagging | Acc | 95.4 | 96.8 | 96.8 | 96.8 |
| NER UGM | F1 | 70.9 | 71.6 | 73.2 | 74.9 |
| NER UI | F1 | 82.2 | 82.2 | 87.4 | 90.1 |
| Dep. Parsing (GSD) | UAS/LAS | 85.25/80.35 | 86.85/81.78 | 86.99/81.87 | 87.12/82.32 |
| Dep. Parsing (PUD) | UAS/LAS | 84.04/79.01 | 90.58/85.44 | 88.91/83.56 | 89.23/83.95 |
| Sentiment Analysis | F1 | 71.62 | 76.58 | 82.02 | 84.13 |
| IndoSum | R1/RL | 67.96/67.24 | 68.40/67.67 | 68.44/67.71 | 69.93/69.21 |
| Liputan6 (Sum) | R1/RL | 36.10/33.56 | 39.81/37.02 | --/-- | 41.08/38.01 |
| Next Tweet Prediction | Acc | 73.6 | 92.4 | 93.1 | 93.7 |
| Tweet Ordering | Corr (ρ) | 0.45 | 0.53 | 0.51 | 0.59 |