Conversation
PiperOrigin-RevId: 177371794
…PU training. Modify transformer to keep the packed-together examples from attending to one another. PiperOrigin-RevId: 177481956
…int compatibility bug. PiperOrigin-RevId: 177487398
PiperOrigin-RevId: 177487419
PiperOrigin-RevId: 177505082
…mprovements/fixes PiperOrigin-RevId: 177538074
PiperOrigin-RevId: 177547599
PiperOrigin-RevId: 177554962
PiperOrigin-RevId: 177635374
PiperOrigin-RevId: 177641254
We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for the commit author(s). If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
@@ -24,7 +24,6 @@
     'tensor2tensor/bin/t2t-datagen',
     'tensor2tensor/bin/t2t-decoder',
     'tensor2tensor/bin/t2t-make-tf-configs',
-    'tensor2tensor/bin/t2t-bleu',
Is the deletion of t2t-bleu intentional?
(I can understand it, if yes.)
Thanks for noticing that. No, it was not intentional and likely due to a bad internal merge of the PR.
Sorry guys, it was me, and it was halfway intentional. There was a Python 2 vs. 3 problem, and I'm not sure we want to maintain another binary now that SacreBLEU exists. But we should certainly add the new BLEU as a metric to TensorBoard. In any case: can we get this in and redo the BLEU binary in another PR? Would that be OK, Martin? Sorry for the issue!
Yes, we can redo the BLEU binary in another PR.
Let's do that. The problem with Python 2 is that Google still uses it internally, so if the code isn't compatible, all our internal tests start failing. So yes, let's get this in and then redo the BLEU binary. Could we have your BLEU score added to the metrics, or maybe replace the current approx-BLEU entirely? It's much more useful, since it's much closer to the real BLEU than the current approximation. What do you think?
"my" BLEU actually uses "your" |
What about the failing Travis build for this PR?
Yes, that Algorithmic Algebra test is flaky. You can see that master builds fine (there is no change from this PR). We'll likely end up removing those problems, as nobody is using them.