Half Tensor Dispatch compatibility? #15
This makes AT_DISPATCH_ALL_TYPES_AND_HALF valid outside of the at namespace. See pytorch/extension-cpp#15
Thanks for the rapid answer! I commented on your PR. In the meantime, I applied your recommendations and it works, thanks! However, I initially thought that your "2." comment was a general case, like "replace all ..."
Summary: This makes AT_DISPATCH_ALL_TYPES_AND_HALF valid outside of the at namespace. See pytorch/extension-cpp#15 Pull Request resolved: pytorch/pytorch#9848 Differential Revision: D9006921 Pulled By: colesbury fbshipit-source-id: a6e4f097a9d6fb85c921e1c9b9ea25d0f2db06dc
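Presumably the reason the macro was not valid outside of at is that a macro expands textually at its use site, so any unqualified names inside it are looked up in the caller's scope rather than where the macro was defined. A toy illustration of that lookup rule (not the actual PR diff, and with a fake Half type standing in for the real at::Half):

```cpp
#include <cstdio>

namespace at { struct Half { float x; }; }  // fake stand-in, not the real at::Half

// Macros expand textually at the use site, so unqualified names inside them
// are resolved in the caller's scope, not where the macro was written.
#define MAKE_HALF_UNQUALIFIED() Half{1.0f}
#define MAKE_HALF_QUALIFIED() at::Half{1.0f}

namespace at {
// Inside namespace at, the unqualified name resolves fine.
inline Half inside() { return MAKE_HALF_UNQUALIFIED(); }
}

int main() {
  // at::Half a = MAKE_HALF_UNQUALIFIED();  // error: 'Half' was not declared in this scope
  at::Half b = MAKE_HALF_QUALIFIED();       // OK from any namespace
  std::printf("%f %f\n", b.x, at::inside().x);
  return 0;
}
```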
@ClementPinard someone else reported issues with fmax being interpreted as ...
It's just a matter of operator overloading and implicit conversions. C++ allows an argument to undergo one implicit conversion to satisfy an overload, but not two. Half has an implicit conversion to float. float has an implicit conversion to double. But there's no implicit conversion from Half to double directly. You can see the list of overloads for fmax here: https://en.cppreference.com/w/cpp/numeric/math/fmax
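A self-contained toy version of that resolution dance, with a hand-rolled ToyHalf and two fmax-like overloads standing in for the real at::Half and the math-library fmax (whose exact overload set and diagnostics may differ):

```cpp
#include <cstdio>

// Toy stand-in for at::Half: implicitly convertible to float, not to double.
struct ToyHalf {
  float value;
  operator float() const { return value; }
};

float  fmax_like(float a, float b)   { return a > b ? a : b; }
double fmax_like(double a, double b) { return a > b ? a : b; }

int main() {
  ToyHalf h{2.0f};

  // fmax_like(h, 0.0);
  // ^ double literal: the float overload is better for `h` (ToyHalf -> float only),
  //   but the double overload is better for `0.0` (exact match), so no overload
  //   is best and the call fails to compile.

  float r = fmax_like(h, 0.0f);  // float literal: the float overload is an exact
                                 // match for 0.0f and only needs ToyHalf -> float
                                 // for h, so it is selected unambiguously.
  std::printf("%f\n", r);
  return 0;
}
```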
There might be a performance penalty to using doubles here, I'm not sure. You could probably change all the 0.0 and 1.0 literals to 0.0f and 1.0f.
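Applied to a kernel helper, that suggestion would look roughly like this; d_relu_like is a made-up example helper for a .cu file, not code from this repo:

```cpp
// Before: double literals, so with scalar_t = at::Half the values get pulled
// through double in comparisons and math calls.
// template <typename scalar_t>
// __device__ __forceinline__ scalar_t d_relu_like(scalar_t z) {
//   return z < 0.0 ? 0.0 : 1.0;
// }

// After: float literals (the 0.0 -> 0.0f / 1.0 -> 1.0f substitution suggested
// above), so at::Half only ever needs its Half <-> float conversions.
template <typename scalar_t>
__device__ __forceinline__ scalar_t d_relu_like(scalar_t z) {
  return static_cast<float>(z) < 0.0f ? scalar_t(0.0f) : scalar_t(1.0f);
}
```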
Hi, been tweaking the repo a bit, and wanted to try Half tensor compatibility.

So in the CUDA code, instead of AT_DISPATCH_FLOATING_TYPES here and here, I just changed the dispatch function to AT_DISPATCH_FLOATING_TYPES_AND_HALF, naively hoping that everything would work without changing anything else. Unfortunately, I got this error (while dispatching only floating types works):

Is there something I forgot to do? Apparently Half is not recognized by the compiler the way float or double are, so maybe I need to include a header? I tried #include <cuda_fp16.h>, #include <ATen/Half.h> and #include <ATen/Type.h>, but it didn't work.

Thanks!
Clément
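For reference, the change described above is just a swap of the macro at the dispatch call site. A rough sketch of such a dispatch site (kernel and function names are placeholders, not the repo's actual lltm code; older PyTorch versions pass input.type() and use .data<scalar_t>() instead of input.scalar_type() and .data_ptr<scalar_t>()):

```cpp
#include <ATen/ATen.h>
#include <ATen/Dispatch.h>

// Placeholder kernel (just copies input to output), not the repo's lltm kernel.
template <typename scalar_t>
__global__ void my_forward_kernel(const scalar_t* in, scalar_t* out, int64_t n) {
  const int64_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    out[i] = in[i];
  }
}

void my_forward_cuda(const at::Tensor& input, at::Tensor& output) {
  const int threads = 1024;
  const int blocks = static_cast<int>((input.numel() + threads - 1) / threads);

  // The change from the issue: AT_DISPATCH_FLOATING_TYPES only instantiates the
  // lambda for float and double; the _AND_HALF variant also instantiates it
  // with scalar_t = at::Half.
  AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "my_forward_cuda", ([&] {
    my_forward_kernel<scalar_t><<<blocks, threads>>>(
        input.data_ptr<scalar_t>(),
        output.data_ptr<scalar_t>(),
        input.numel());
  }));
}
```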