
Is ChatGPT A Good Translator?

We conduct a preliminary evaluation of ChatGPT/GPT-4 for machine translation. [V1] [arXiv]

This repository presents the main findings and releases the evaluated test sets and translation outputs so that the study can be replicated.

ChatGPT for Machine Translation

Test Data

Please kindly cite the papers of the data sources if you use any of them.

  • Flores: The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation
  • WMT19 Biomedical: Findings of the WMT 2019 Biomedical Translation Shared Task: Evaluation for Medline Abstracts and Biomedical Terminologies
  • WMT20 Robustness: Findings of the WMT 2020 Shared Task on Machine Translation Robustness

Translation Prompts

We ask ChatGPT itself for advice on prompts that can trigger its translation ability:


Figure 1: Prompts advised by ChatGPT for machine translation (Date: 2022.12.16).

Summarized prompts:

  • Tp1: Translate these sentences from [SRC] to [TGT]:
  • Tp2: Answer with no quotes. What do these sentences mean in [TGT]?
  • Tp3: Please provide the [TGT] translation for these sentences: ✅ (adopted)

Table 1: Comparison of different prompts for ChatGPT to perform Chinese-to-English (Zh⇒En) translation.
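
The original evaluation was conducted through the ChatGPT web interface; purely as an illustration, the sketch below shows how the adopted Tp3 prompt could be issued programmatically with the OpenAI Python SDK. The model name, batching, and temperature setting are assumptions for the sketch, not the study's setup.

```python
# Illustrative sketch only: issuing the Tp3 prompt via the OpenAI Python SDK.
# The model name, batching, and temperature are assumptions; the study itself
# queried the ChatGPT web interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate_tp3(sentences, tgt_lang="English", model="gpt-4"):
    """Translate a small batch of sentences with the Tp3 prompt."""
    prompt = (
        f"Please provide the {tgt_lang} translation for these sentences:\n"
        + "\n".join(sentences)
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variance (see Limitations)
    )
    return response.choices[0].message.content

print(translate_tp3(["这是一个测试句子。"], tgt_lang="English"))
```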

Multilingual Translation

We evaluate translations among four languages, namely German, English, Romanian, and Chinese, considering the effects of both resource availability and language family.

  • ChatGPT performs competitively with commercial translation products (e.g., Google Translate) on high-resource European languages but lags behind significantly on low-resource languages.
  • The gap between ChatGPT and the commercial systems is larger for distant language pairs than for close ones.

Table 2: Performance of ChatGPT for multilingual translation.
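
Scores like those in Table 2 can be recomputed from the released translation outputs with sacreBLEU; a minimal sketch follows, where the file names are hypothetical placeholders for the released data.

```python
# Sketch: corpus-level BLEU of a released output file against a reference file.
# The file names are hypothetical placeholders.
import sacrebleu

with open("flores101.zh-en.ref", encoding="utf-8") as f:
    refs = [line.strip() for line in f]
with open("flores101.zh-en.chatgpt.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(f"BLEU = {bleu.score:.2f}")
```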

Translation Robustness

We evaluate the translation robustness of ChatGPT on biomedical abstracts, Reddit comments, and crowdsourced speech.

  • ChatGPT does not perform as well as the commercial systems on biomedical abstracts or Reddit comments but exhibits good results on spoken language.

Table 3: Performance of ChatGPT for translation robustness.

Improving ChatGPT for MT

Pivot Prompting

For distant languages, we explore a strategy named Pivot Prompting, which asks ChatGPT to translate the source sentence into a high-resource pivot language (e.g., English) first and then into the target language. Accordingly, we adjust the Tp3 prompt as below:

  • Tp3-pivot: Please provide the [PIV] translation first and then the [TGT] translation for these sentences one by one:

Figure 2: Translation results by ChatGPT with pivot prompting (Date: 2023.01.31).


Table 4: Performance of ChatGPT with pivot prompting. New results are obtained from the updated ChatGPT version on 2023.01.31. LR: length ratio.
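
A minimal sketch of how the Tp3-pivot template can be instantiated, mirroring the earlier OpenAI SDK sketch; the query helper, model name, and language defaults are illustrative assumptions rather than the study's setup.

```python
# Sketch: instantiating the Tp3-pivot prompt, which asks for a high-resource
# pivot translation (e.g., English) before the target translation.
# The SDK usage and defaults are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def translate_pivot(sentences, piv_lang="English", tgt_lang="German", model="gpt-4"):
    prompt = (
        f"Please provide the {piv_lang} translation first and then the "
        f"{tgt_lang} translation for these sentences one by one:\n"
        + "\n".join(sentences)
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # The reply interleaves pivot and target translations, so the target-language
    # lines still need to be extracted before scoring.
    return response.choices[0].message.content
```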

GPT-4 as the Engine

We update the translation performance with GPT-4 as the engine, which shows substantial improvements over ChatGPT. Refer to [ParroT] for the COMET metric results.


Table 5: Translation performance of GPT-4 (Date: 2023.03.15).
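
The COMET results themselves are reported in the ParroT repository; as a rough sketch of how COMET scores can be computed locally with the unbabel-comet package (the checkpoint name and toy data below are assumptions):

```python
# Sketch: computing COMET scores with the unbabel-comet package.
# The checkpoint name and toy triplet are illustrative assumptions.
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

data = [
    {"src": "这是一个测试句子。", "mt": "This is a test sentence.", "ref": "This is a test sentence."},
]
prediction = model.predict(data, batch_size=8, gpus=0)
print(prediction.system_score)  # corpus-level score
print(prediction.scores)        # per-segment scores
```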

Extensive Analysis

Automatic Analysis

We analyze the translation outputs with compare-mt at both word level and sentence level.

  • ChatGPT performs the worst on low-frequency words, a weakness that is largely fixed by GPT-4.
  • ChatGPT performs the worst on short sentences, which we attribute to the observation that ChatGPT tends to translate well-known terminology into full names rather than the abbreviations used in the references.

Tables 6-7: Automatic analysis: (a) F-measure of target word prediction w.r.t. word frequency; (b) BLEU score w.r.t. the length bucket of target sentences.
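
The bucketed analysis above comes from compare-mt; the snippet below is only a hand-rolled approximation of the sentence-level part (mean sentence-BLEU by reference-length bucket, via sacreBLEU), not the tool's actual implementation.

```python
# Rough approximation of the sentence-level bucket analysis: mean sentence-BLEU
# grouped by reference length. compare-mt performs this (plus the word-frequency
# F-measure analysis) automatically; this sketch is for illustration only.
from collections import defaultdict
import sacrebleu

def bleu_by_length_bucket(hyps, refs, edges=(10, 20, 30, 40, 50)):
    buckets = defaultdict(list)
    for hyp, ref in zip(hyps, refs):
        length = len(ref.split())
        label = next((f"<{e}" for e in edges if length < e), f">={edges[-1]}")
        buckets[label].append(sacrebleu.sentence_bleu(hyp, [ref]).score)
    return {label: sum(scores) / len(scores) for label, scores in buckets.items()}
```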

Human Analysis

We ask three annotators to identify the errors in the translation outputs, including under-translation, over-translation, and mis-translation. Based on the translation errors, the annotators rank the translation outputs of Google, ChatGPT and GPT-4 accordingly, with 1 as the best system and 3 as the worst.

  • ChatGPT makes more over-translation and mis-translation errors than Google Translate and tends to generate hallucinations.
  • GPT-4 makes the fewest errors and is ranked first, even though its BLEU score is lower than that of Google Translate.

Tables 8-9: Human analysis: (a) Number of translation errors annotated by humans; (b) Human rankings of the translation outputs.

Case Study

A few translation outputs:

  1. ChatGPT hallucinates at the first few tokens and also mis-translates "过量降水" (excessive precipitation).
  2. Both ChatGPT and GPT-4 translate "广泛耐药结核病" (extensively drug-resistant tuberculosis) into the full name, while the reference and Google Translate do not.
  3. GPT-4 can translate the terminology "美国公共广播公司" (Public Broadcasting Service, PBS) into the abbreviation, as in the reference.
  4. GPT-4 translates the terminology "狼孩" (feral child, literally "wolf child") more appropriately based on the context, while Google Translate and ChatGPT cannot.

Table 10: Examples from Flores Zh⇒En test set.

Limitations

We acknowledge that this report is far from complete; several aspects could be improved to make it more reliable in the future:

  • Coverage of Test Data: Currently, we randomly sample 50 examples from each test set for evaluation due to the response delay of ChatGPT. While some projects on GitHub try to automate the access process, they are vulnerable to browser refreshes or network issues. An official API from OpenAI may be a better choice in the future; let's wait and see.
  • Reproducibility Issue: By querying ChatGPT multiple times, we find that results for the same query can vary across trials, which introduces randomness into the evaluation. For more reliable results, it is best to repeat the translation multiple times for each test set and report the averaged result (see the sketch after this list).
  • Translation Abilities: We only focus on multilingual translation and translation robustness in this report. However, there are some other translation abilities that can be further evaluated, e.g., constrained machine translation and document-level machine translation.
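
A minimal sketch of the repeat-and-average protocol suggested above; `translate_fn` stands for any translation helper (e.g., the hypothetical Tp3 sketch earlier) and is an assumption of this sketch.

```python
# Sketch: repeat the same translation request several times and report the
# mean and standard deviation of corpus BLEU, to average out response randomness.
import statistics
import sacrebleu

def averaged_bleu(translate_fn, sources, refs, n_trials=3):
    """translate_fn maps a list of source sentences to a list of translations."""
    scores = []
    for _ in range(n_trials):
        hyps = translate_fn(sources)
        scores.append(sacrebleu.corpus_bleu(hyps, [refs]).score)
    return statistics.mean(scores), statistics.stdev(scores)
```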

Public Impact

Star History Chart

Community

Citation

Please kindly cite our report if you find it helpful:

@article{jiao2023ischatgpt,
  title   = {Is ChatGPT A Good Translator? A Preliminary Study},
  author  = {Wenxiang Jiao and Wenxuan Wang and Jen-tse Huang and Xing Wang and Shuming Shi and Zhaopeng Tu},
  journal = {arXiv},
  year    = {2023}
}