Bottom line: you should not worry about this test, as float32 precision is enough most of the time. If you really need float64 precision, be it to pass grad_check or anything else, you will need to change the code in the cuda/lltm_cuda_kernel.cu file. Be aware that this might trigger compilation errors with gcc 7.
See the solution with argument casting in the aforementioned issue in case that happens.
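As an aside on why float32 tends to fail a numerical gradient check while float64 passes: with a small finite-difference step, float32 rounding error near the evaluation point can be of the same order as the step itself. A minimal self-contained sketch (plain Python, no PyTorch; the helper names `f32` and `numerical_grad` are my own, chosen for illustration) comparing a central finite difference of `exp` against the exact derivative at both precisions:

```python
import math
import struct

def f32(x):
    """Round a Python float (float64) to float32 precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

def numerical_grad(f, x, eps, cast=lambda v: v):
    """Central finite difference; `cast` optionally forces every
    intermediate value through float32 rounding."""
    fp = cast(f(cast(x + eps)))
    fm = cast(f(cast(x - eps)))
    return cast((fp - fm) / (2 * eps))

x, eps = 1.0, 1e-6
true_grad = math.exp(x)  # d/dx exp(x) = exp(x)

err64 = abs(numerical_grad(math.exp, x, eps) - true_grad)
err32 = abs(numerical_grad(math.exp, x, eps, cast=f32) - true_grad)

# In float64 the estimate is accurate; in float32 the step eps is
# smaller than the spacing between representable numbers near x = 1,
# so the estimate is off by orders of magnitude more.
print(err64 < 1e-6, err32 > 1e-3)  # prints: True True
```

This is the same effect that makes grad_check fail against a kernel computing in float32: the check's finite-difference tolerance assumes float64 arithmetic end to end.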
I am running it on Google Colab, and `python grad_check.py cuda` is not passing successfully; the others (py, cpp) pass with no issues.