I failed to compile the CUDA code with python setup.py install, and I'm rather surprised this issue hasn't been brought up before. Here's the error message:
/usr/local/cuda/bin/nvcc -I/home/maxjiang/software/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/maxjiang/software/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/maxjiang/software/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/maxjiang/software/anaconda3/include/python3.6m -c lltm_cuda_kernel.cu -o build/temp.linux-x86_64-3.6/lltm_cuda_kernel.o -DTORCH_EXTENSION_NAME=lltm_cuda -D_GLIBCXX_USE_CXX11_ABI=0 --compiler-options '-fPIC' -std=c++11
lltm_cuda_kernel.cu(54): error: calling a __host__ function("std::fmax<double, float> ") from a __global__ function("_NV_ANON_NAMESPACE::lltm_cuda_forward_kernel<float> ") is not allowed
lltm_cuda_kernel.cu(54): error: identifier "std::fmax<double, float> " is undefined in device code
2 errors detected in the compilation of "/tmp/tmpxft_00003be3_00000000-6_lltm_cuda_kernel.cpp1.ii".
Most of this is probably irrelevant except for gcc version:
OS: Ubuntu 16.04
PyTorch version: 0.4.1
How you installed PyTorch (conda, pip, source): conda
Python version: 3.6.5
CUDA/cuDNN version: 9.2
GPU models and configuration: Titan X (Pascal)
GCC version (if compiling from source): 7.1.0
Here's my hacky fix that worked: simply wrapping the double literals in scalar_t casts. I'm not sure this is the most elegant solution.
Oh sorry, I didn't notice #14. Though I'm not sure what the suggested fix in #14 is; it seems the OP's suggestion was to recompile PyTorch from scratch? I didn't go through recompiling PyTorch; my fix above, explicitly casting the values to scalar_t, seems easier :)
The fix is in lltm_cuda_kernel.cu, lines 26-29.