Add dtype keyword to to_device #647
This is somewhat tangentially related to the discussions at #645. Right now to_device() doesn't have a dtype parameter, but it could be useful for it to have one. The torch.to function does have one. I'm not completely clear about cupy so @leofang would have to comment.

One reason is that certain devices might not support the existing array dtype. We also need to specify what happens in this case (likely it should be an error).

This code in scikit-learn is also an example of where this would be useful: https://github.com/scikit-learn/scikit-learn/pull/26315/files/42524bd42900d8ea5f4a334780387a72c6f9580d#diff-86c94a3ca33490c6190f488f5d40b01bf0fd29be36da0b4497ef0da1fda4148a. That code moves an array to a device and converts it to float32 at the same time. It would presumably be more efficient to do this in one step instead of two.
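For concreteness, a minimal sketch of the difference, assuming a conforming array API namespace; the `dtype=` keyword on `to_device` is hypothetical and is exactly what this issue asks for:

```python
import array_api_strict as xp  # illustrative; any conforming namespace works

x = xp.asarray([1.0, 2.0], dtype=xp.float64)

# Today: the dtype change and the device move are two separate steps
y = xp.astype(x, xp.float32)
y = y.to_device(x.device)

# Proposed (hypothetical signature, not in the current standard):
# y = x.to_device(x.device, dtype=xp.float32)
```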
Comments
My 2c. CuPy has …

One of the things to come out of the … Other than that, I do agree that it seems that …
I guess it depends on your perspective. PyTorch's `to` …
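For reference, PyTorch's `Tensor.to` already combines the two conversions in one call:

```python
import torch

x = torch.ones(3)                     # float32, on the CPU
y = x.to("cpu", dtype=torch.float64)  # device and dtype change in a single call
```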
Yes, I agree that the input to … The above scikit-learn code basically does:

```python
def func(x):
    xp = array_namespace(x)
    y = np.random(...)  # because random is not part of the array API
    y = xp.asarray(y)  # convert y to an xp array
    if x.dtype == xp.float32:
        y = xp.astype(y, xp.float32)  # np.random only generates float64 arrays
    y = xp.to_device(y, x.device)  # the array being moved must be passed too
    ...
```

Maybe it's just me, but it feels like more function calls than should be necessary to do this. You can't combine the `astype` and `to_device` calls. It feels like you'd like to be able to just do the dtype conversion and the device move in a single call.
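Continuing that snippet, a sketch of what the single call could look like today via `asarray`, which the standard specifies with both keywords (library support for `device=` has been uneven, so this is illustrative rather than portable):

```python
# One call instead of asarray + astype + to_device, assuming the namespace
# honors both keywords on asarray
y = xp.asarray(y, dtype=x.dtype, device=x.device)
```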
I understand; what I am saying is that I'd think nowhere in the API should we allow illegal type casts, be it in `astype` or in `to_device`.
Why? We allow downcasts along the type ladder, right? Or are you referring to device-specific type support?
I think the biggest usability problem is on the NumPy side. Both CuPy and PyTorch allow generating random numbers with a specified dtype, but NumPy does not. It's been very annoying to me too, but unfortunately since we are not standardizing RNGs, we can't force NumPy to change via the array API standard 😢 (FYI, for generating random complex arrays all libraries are bad.) This is the root cause for us considering escape hatches and API combos like `to_device(..., dtype=...)`.
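To illustrate the NumPy pain point: the legacy `np.random` functions take no dtype argument, so a float32 result always costs an extra conversion pass:

```python
import numpy as np

y = np.random.rand(3)       # legacy API: always float64, no dtype argument
y32 = y.astype(np.float32)  # a second pass over the data is required
```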
My interpretation of the type promotion rules is that downcasts are not allowed, because a downcast is not a type promotion.
Aaron, you're right. A downcast would be problematic for values outside the representable range. I lost my mind today... OK, so no downcast is allowed in principle. Are we considering exceptions?
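A quick illustration of the problem, in NumPy terms:

```python
import numpy as np

x = np.asarray([1e39], dtype=np.float64)
print(x.astype(np.float32))  # [inf] -- 1e39 exceeds float32's max (~3.4e38)
```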
Alternatively, `astype` could get a `device` keyword.

I think I kinda like that idea.

gh-665 implements the `device` keyword for `astype`.
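A sketch of that alternative, assuming `astype` grows a `device` keyword as gh-665 proposes (exact availability depends on the standard version and the library):

```python
# dtype conversion and device placement handled by a single call
y = xp.astype(y, xp.float32, device=x.device)
```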