replace unsound use of transmute with u32::from_be_bytes #1470
Conversation
@@ -57,7 +53,10 @@ impl ToColor for u32 {
impl ToColor for isize {
thanks for fixing this up. however, this seems... very weird to me. when would this be used?
consider instead:
- removing it, as it looks like nothing relies on it, but please confirm
- making this conversion always succeed: just reinterpret the 4-byte value (see the sketch below)
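A minimal sketch of that reinterpret option, using a hypothetical free function for illustration (the actual ToColor method signature is not shown in this hunk):

```rust
// Hypothetical free function; the real ToColor method is not shown here.
fn reinterpret(i: isize) -> u32 {
    // never fails: keeps the low 32 bits of the two's-complement value
    // (a pure bit reinterpretation on 32-bit targets, truncation on 64-bit)
    i as u32
}
```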
The comment on the impl block suggests that users could write hex colors as integers. I kinda get why it panics if the integer is not a valid color, but u32 could have been used instead to let users know at compile time.
I chose to just fix the UB/bug and not change the intended behavior, as a non-breaking change.
Re "it looks like nothing relies on it": it is public API, users could rely on it.
I think making a breaking change here is ok:
u32::try_from could fail if the i32 is negative, and with two's complement the bit pattern of a negative i32 is a valid color (it should not fail in these cases). what if I want a lot of red (most significant bit)? this impl just doesn't make any sense.
I strongly suggest its removal, or doing a reinterpret cast i32 -> u32.
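For illustration (not from the PR), the two's-complement point in runnable form:

```rust
fn main() {
    // i32::MIN sets only the most significant bit; as a 0xrrggbbaa color
    // that is 0x80000000, i.e. a half-intensity red channel
    assert_eq!(i32::MIN as u32, 0x8000_0000);
    // yet u32::try_from rejects every negative value outright
    assert!(u32::try_from(-1_i32).is_err());
}
```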
Oh right, I did not think about isize being too small when it is only 4 bytes (32-bit systems). I assumed that u32 values always fit in isize. This should not depend on the pointer size in the first place.
Should it just be removed? I am thinking about replacing it with an impl for u32 that does not need to do any checking and thus can't fail at runtime.
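A rough sketch of such an impl; the Color type and the exact ToColor signature are assumptions here, not the crate's actual definitions:

```rust
struct Color([u8; 4]);

trait ToColor {
    fn to_color(self) -> Color;
}

// Infallible: every u32 is a valid 0xrrggbbaa color, so there is
// nothing to check and nothing to panic on.
impl ToColor for u32 {
    fn to_color(self) -> Color {
        Color(self.to_be_bytes())
    }
}
```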
Sure. Either way, please create an MR - can move the discussion there.
also, why be bytes? in the next MR, consider le bytes, since that is the dominant byte order on non-network devices.
or better yet, just do to_ne_bytes. it will be consistent within the program and does not break backwards compatibility with what was already there before this MR.
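For reference, how the three standard library byte orders behave on the same value (illustrative, not from the PR):

```rust
fn main() {
    let c: u32 = 0x1122_3344;
    assert_eq!(c.to_be_bytes(), [0x11, 0x22, 0x33, 0x44]);
    assert_eq!(c.to_le_bytes(), [0x44, 0x33, 0x22, 0x11]);
    // to_ne_bytes matches whichever of the two the target uses,
    // so its result differs between little- and big-endian machines
    let ne = c.to_ne_bytes();
    assert!(ne == c.to_be_bytes() || ne == c.to_le_bytes());
}
```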
opened #1473
It should be to_be_bytes if this is to allow users to write 0xrrggbbaa colors, which should be converted to (0xrr, 0xgg, 0xbb, 0xaa) by as_rgba(), independent of the target byte order.
What was here before this change was not any of to_{n,l,b}e_bytes: while it acted like to_ne_bytes in practice, the compiler was free to shuffle the bytes in any way it liked.
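A small demonstration of that argument (the as_rgba name comes from the comment above; the code itself is illustrative):

```rust
fn main() {
    // a 0xrrggbbaa literal: red 0xAA, green 0xBB, blue 0xCC, alpha 0xDD
    let color: u32 = 0xAABB_CCDD;
    // to_be_bytes yields the channels in written order on every target
    let [r, g, b, a] = color.to_be_bytes();
    assert_eq!((r, g, b, a), (0xAA, 0xBB, 0xCC, 0xDD));
    // to_ne_bytes would reverse the channels on little-endian targets,
    // and the old transmute guaranteed no order at all
}
```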
When I tried it, it acted like to_ne_bytes, but the author of the linked issue observed something different.
fixes #1304