How can I see approx_bleu on validation set? #650
Comments
I am not sure what your main question (problem) is.
@martinpopel
Yes.
I used to train a translation model with T2T (version 1.0.14). When it began to evaluate on the validation set, the log would output information like this: But when I use T2T (version 1.5.5), the information output during evaluation looks like this: Where is approx_bleu now?
I can confirm that
@stefan-it It's a bug and I have fixed it now.
Hello, my problem is also enzh and I use transformer_base_single_gpu. However, I use my own dataset, which has about 6 million (600W) sentence pairs. My issue is that approx_bleu_score is about 20.x, but when I run t2t-bleu, the bleu-uncased and bleu-cased scores are both 5.x. I don't understand why there is such a huge difference between approx_bleu_score and bleu-uncased/bleu-cased. Thank you~
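Part of the gap comes from what approx_bleu measures: it is computed on subword token IDs under teacher forcing, while t2t-bleu scores the actual decoded text at the word level. A minimal sketch of the tokenization effect alone (the segmentation of "sits" into pieces below is hypothetical, purely for illustration):

```python
from collections import Counter

def ngram_precision(hyp, ref, n):
    """Modified n-gram precision: clipped matches / total hypothesis n-grams."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    matches = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    return matches / max(sum(hyp_ngrams.values()), 1)

ref_words = "the cat sat on the mat".split()
hyp_words = "the cat sits on the mat".split()

# Word-level unigram precision: 5 of 6 hypothesis words match the reference.
p_word = ngram_precision(hyp_words, ref_words, 1)  # 5/6

# A subword segmentation might split "sits" into "sit" + "s"
# (hypothetical pieces), changing both match counts and lengths.
ref_sub = ["the", "cat", "sat", "on", "the", "mat"]
hyp_sub = ["the", "cat", "sit", "s", "on", "the", "mat"]
p_sub = ngram_precision(hyp_sub, ref_sub, 1)  # 5/7

print(p_word, p_sub)
```

On real data the effect compounds across n-gram orders, and teacher forcing (conditioning each prediction on the gold prefix rather than the model's own output) typically inflates approx_bleu well above decoded BLEU, so the two numbers are not directly comparable.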
|
Simple question: what script should we run to get the approx_bleu for a given checkpoint? (I'm asking because I want to compare the quality of a single checkpoint against an averaged checkpoint.)
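One possible workflow (a sketch, not an official recipe; all paths and the checkpoint step number are placeholders, and in 1.5.x the flag may be spelled --problems): decode the validation source with t2t-decoder pinned to a specific checkpoint via --checkpoint_path, then score the output with t2t-bleu. An averaged checkpoint produced by tensor2tensor's checkpoint-averaging utility can be scored the same way for comparison.

```shell
# Decode the validation set with one specific checkpoint.
t2t-decoder \
  --data_dir=$DATA_DIR \
  --problem=translate_enzh_wmt32k \
  --model=transformer \
  --hparams_set=transformer_base_single_gpu \
  --output_dir=$TRAIN_DIR \
  --checkpoint_path=$TRAIN_DIR/model.ckpt-250000 \
  --decode_from_file=dev.en \
  --decode_to_file=dev.translated.zh

# Score the decoded output against the reference translation.
t2t-bleu --translation=dev.translated.zh --reference=dev.zh
```

Note this gives real decoded BLEU rather than approx_bleu, which is arguably the better number for comparing checkpoints anyway.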
Hello, I use T2T (version 1.5.5) for a translation task.
The settings I use are as follows:
PROBLEM=translate_enzh_wmt32k
MODEL=transformer
HPARAMS=transformer_base_single_gpu
I used t2t-trainer.py to train a model. When it evaluated on the validation set, it output the following:
"loss = 8.52209, metrics-translate_enzh_wmt32k/neg_log_perplexity = -9.75649"
When I rerun t2t-trainer.py, it outputs information about loss and accuracy (or other metrics) when evaluating on the validation set. Why?
Does it output the metrics randomly? How can I see approx_bleu on the validation set during evaluation?
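For reference, the PROBLEM/MODEL/HPARAMS settings above correspond to a t2t-trainer invocation along these lines (a sketch: paths, step counts, and the exact flag spellings are placeholders; in 1.5.x the problem flag may be the plural --problems):

```shell
t2t-trainer \
  --data_dir=$DATA_DIR \
  --problems=translate_enzh_wmt32k \
  --model=transformer \
  --hparams_set=transformer_base_single_gpu \
  --output_dir=$TRAIN_DIR \
  --train_steps=250000 \
  --eval_steps=100
```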