[feature request] use torch namespace instead of at #31
As suggested by the title, should we use `torch::` instead of `at::` in the cpp and cuda modules? It's suggested here, so maybe this could be updated?

I can do a PR, just checking if there's a reason not to do it.

Comments

Yes we should use `torch::`.

Which cpp and cuda modules are you talking about?

https://github.com/pytorch/extension-cpp/tree/master/cpp

Also, I would like to make this example more up to date in general, to help people develop cuda modules easily. Besides, as suggested in #12, I'd like to add usage of … Finally, I'd like to update the tutorial code here to reflect those changes, along with maybe a mention of … How does that sound to you?

Unfortunately, I'm stuck at the moment because of #27, but hopefully I'll find a solution or a working setup where the code compiles.