In this tutorial, I explain how I compute the BLEU score for Machine Translation output using Python.
BLEU is simply a measure for evaluating the quality of your Machine Translation system. It does not really matter whether your MT output comes from a high-level framework like OpenNMT or Marian, or from a lower-level one like TensorFlow or PyTorch. Nor does it matter whether it is a Neural Machine Translation system or a Statistical Machine Translation tool like Moses.
So let’s see the steps I follow to calculate the BLEU score.
Files Required to Compute BLEU
To measure BLEU, you need to have two files:
1- Reference: It is the human translation (target) file of your test dataset.
2- System: It is the MTed translation, generated by the machine translation model for the source side of the same test dataset used for “Reference”.
Detokenization & BLEU Calculation
Code of MT BLEU Calculator
File Names as Arguments
In the above script, file names are hardcoded. You can easily pass the file names as arguments instead. To let the Python script understand the arguments, you first need to import sys and then create two variables: one for the test dataset, e.g. target_test, with the value sys.argv[1] for the test file argument, and one for the MT output, e.g. target_pred, with the value sys.argv[2] for the MTed file argument. Finally, instead of hardcoding the test dataset name and the MTed file name, you can use these two variables.
As you can see in the Python script below, I used sys.argv, which is a list including the arguments given in the command line; the first item, sys.argv[0], is reserved for the Python script file name. So to run this script, you can use a command line like this in your CMD or Terminal:
python3 bleu-script.py test.txt mt.txt
Here is the BLEU script, but now with arguments.
Sentence BLEU Calculator
The previous code computes BLEU for the whole test dataset, and this is the common practice. Still, you might want to calculate BLEU segment by segment. The following code uses the function sentence_bleu() from the sacreBLEU library to achieve this with a for loop. Finally, it saves the output, i.e. the BLEU score for each sentence on a new line, into a file called “bleu.txt”.
As we did with the corpus BLEU script, here is the sentence BLEU script, but now with arguments.
One of the popular scripts to calculate BLEU is multi-bleu.perl. It works very similarly to sacreBLEU. The results might be slightly different, though; for example, in one of my tests, the score reported by sacreBLEU was 48.23 while the BLEU score reported by multi-bleu.perl was 48.57.
To use multi-bleu.perl, you can simply run this command in your Terminal:
perl multi-bleu.perl human-translation.txt < mt-pred.txt
Final Note: Is BLEU Accurate?
Well, BLEU simply compares the human translation to the machine translation. It does not take into consideration synonyms or accepted word order changes.
Here is an example of the original translation in the corpus:
FR: Notre ONU peut jouer un rôle déterminant dans la lutte contre les menaces qui se présentent à nous, et elle le jouera.
EN: Our United Nations can and will make a difference in the fight against the threats before us.
… and here is the machine translation by two of my NMT models:
EN: Our United Nations can play a decisive role in combating the threats we face, and it will do so.
EN: Our United Nations can play a decisive role in combating the threats we face, and it will play it.
As you can see, the MT translations are very acceptable; yet if you calculate BLEU against the original sentence, you will get a BLEU score of only ≈ 15.7!
So BLEU, just like any other automatic metric, can be used as a reference until a pre-agreed score is reached, and you can expect a better translation from a model with an overall higher BLEU score. Still, some companies would finally run a human evaluation, which we might talk about in another article.