[feature request] use torch namespace instead of at #31

Closed
ClementPinard opened this issue Apr 11, 2019 · 4 comments

@ClementPinard
Contributor

As suggested by the title, should we use torch:: instead of at:: in the cpp and cuda modules?

It's suggested here, so maybe this could be updated?

I can do a PR; I'm just checking whether there's a reason not to do it.

@soumith
Member

soumith commented Apr 11, 2019

Checking with @gchanan and @yf225: is this fine to do?

@yf225

yf225 commented Apr 11, 2019

Yes, we should use torch:: in the cpp and cuda modules, as we are deprecating the at:: namespace in the public API. @ClementPinard We would love to have a PR contribution on it. Thanks!

@gchanan

gchanan commented Apr 11, 2019

Which cpp and cuda modules are you talking about?

@ClementPinard
Contributor Author

https://github.com/pytorch/extension-cpp/tree/master/cpp
https://github.com/pytorch/extension-cpp/tree/master/cuda

Also, I would like to make this example more up to date in general, to help people develop cuda modules more easily.
More specifically, I'd like to solve the fmax issue that seems to have bothered many people since the beginning, and which seems to be solvable by a scalar_t cast (see #27; this does not seem to be an environment error).
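To make the cast idea concrete, here is a hedged sketch (the helper name is hypothetical, and in the real CUDA kernel the call would be the device-side fmax rather than std::fmax): a mixed call such as fmax(0.0, z) pairs a double literal with a scalar_t argument, which forces promotion and, for narrower types, can fail overload resolution; casting the literal to scalar_t keeps both arguments the same type.

```cpp
#include <cmath>

// Hypothetical helper, not repo code. Writing std::fmax(0.0, z) mixes a
// double literal with scalar_t, promoting the result (and for types like
// Half there may be no matching overload at all). Casting the literal to
// scalar_t makes the call unambiguous for every instantiated type.
template <typename scalar_t>
scalar_t clamp_min_zero(scalar_t z) {
  return std::fmax(static_cast<scalar_t>(0.0), z);
}
```

The same `static_cast<scalar_t>(...)` pattern applies to any numeric literal used inside an AT_DISPATCH-style templated kernel.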

Besides, as suggested in #12, I'd like to add usage of .packed_accessor in the cuda kernel.

Finally, I'd like to update the tutorial code here to reflect those changes, along with perhaps a mention of packed_accessor in the tensor basics doc here.

How does that sound to you? Unfortunately, I'm stuck at the moment because of #27, but hopefully I'll find a solution or a working setup where the code compiles.
