upsample_bilinear2d_backward lowering on GPU
🚀 Feature
upsample_bilinear2d_backward lowering on GPU

Motivation
We are running torch-xla with the CUDA backend. While upsample_bilinear2d is lowered on GPU, upsample_bilinear2d_backward is not:
https://github.com/pytorch/xla/blob/master/torch_xla/csrc/aten_xla_type.cpp#L3615

Pitch
Support upsample_bilinear2d_backward lowering on GPU in addition to TPU. GPU support was previously added for upsample_bilinear2d in #3990; it would be great if the same could be done for upsample_bilinear2d_backward.

Alternatives
N/A

Additional context
N/A
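For concreteness, here is a minimal sketch of the kind of workload that hits this op, assuming a CUDA-enabled torch-xla build; the tensor shapes and scale factor are illustrative, not taken from our actual model:

```python
import torch
import torch.nn.functional as F
import torch_xla.core.xla_model as xm

device = xm.xla_device()

# Bilinear upsample forward pass on the XLA device.
x = torch.randn(2, 3, 16, 16, device=device, requires_grad=True)
y = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)

# The backward pass is what dispatches aten::upsample_bilinear2d_backward.
y.sum().backward()

# Materialize the pending graph.
xm.mark_step()
```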
Thank you for filing this issue. Could you confirm that we are falling back by printing metrics.executed_fallback_ops()?
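A short sketch of how that check could look, assuming torch_xla.debug.metrics is imported as met (the same alias used in the reply below):

```python
import torch_xla.debug.metrics as met

# After running the forward/backward pass:
print(met.metrics_report())         # full counter/metric dump
print(met.executed_fallback_ops())  # ops executed via the aten fallback path
```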
met.metrics_report():

Counter: aten::_local_scalar_dense
  Value: 12
Counter: aten::upsample_bilinear2d_backward
  Value: 90

metrics.executed_fallback_ops():

['aten::upsample_bilinear2d_backward', 'aten::_local_scalar_dense']
Thank you for checking. I will take a look at it.