
ModuleNotFoundError: No module named 'demo' #4

Closed
forhonourlx opened this issue Apr 2, 2019 · 3 comments

@forhonourlx

simon:~/Desktop/pytorch-lightning/demo$ python fully_featured_trainer.py
Traceback (most recent call last):
File "fully_featured_trainer.py", line 20, in
from demo.example_model import ExampleModel
ModuleNotFoundError: No module named 'demo'

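For anyone who hits the same error before pulling the fix: a ModuleNotFoundError like this usually means the script is being run from inside demo/, so the repository root is not on sys.path and the package import demo.example_model cannot be resolved. A minimal workaround sketch (generic Python, assuming the layout pytorch-lightning/demo/example_model.py from the traceback above; this is not necessarily the fix that was committed):

```python
# workaround_sketch.py -- hypothetical helper, not part of the repo.
# Either run the demo from the repository root:
#   cd ~/Desktop/pytorch-lightning && python demo/fully_featured_trainer.py
# or, if running from inside demo/, put the repo root on sys.path first:
import os
import sys

# Two dirname() calls climb from demo/<this file> up to the repository root.
repo_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, repo_root)

from demo.example_model import ExampleModel  # now resolvable as a package import
```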
@williamFalcon
Contributor

fixed! try again

@williamFalcon
Contributor

wait... fixing again

williamFalcon added a commit that referenced this issue Apr 3, 2019
@forhonourlx
Author

Hi William @williamFalcon,
After changing the path, one more error comes up when training on CUDA...

simon@simon-X299:~/Desktop/pytorch-lightning/demo$ python fully_featured_trainer.py
RUNNING MULTI GPU. GPU ids: ['0']
loading model...
running on gpu...
model built
gpu available: True, used: False
Name Type Params
0 c_d1 Linear 392500
1 c_d1_bn BatchNorm1d 1000
2 c_d1_drop Dropout 0
3 c_d2 Linear 5010
validating...
Caught exception in worker thread expected type torch.FloatTensor but got torch.cuda.FloatTensor
Traceback (most recent call last):
File "/home/simon/anaconda3/lib/python3.6/site-packages/test_tube/argparse_hopt.py", line 29, in optimize_parallel_gpu_private
results = train_function(trial_params)
File "fully_featured_trainer.py", line 36, in main_local
main(hparams, None, None)
File "fully_featured_trainer.py", line 105, in main
trainer.fit(model)
File "/home/simon/anaconda3/lib/python3.6/site-packages/pytorch_lightning/models/trainer.py", line 223, in fit
_ = self.validate(model, self.val_dataloader, max_batches=self.nb_sanity_val_steps)
File "/home/simon/anaconda3/lib/python3.6/site-packages/pytorch_lightning/models/trainer.py", line 154, in validate
for i, data_batch in enumerate(dataloader):
File "/home/simon/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 615, in next
batch = self.collate_fn([self.dataset[i] for i in indices])
File "/home/simon/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 615, in
batch = self.collate_fn([self.dataset[i] for i in indices])
File "/home/simon/anaconda3/lib/python3.6/site-packages/torchvision/datasets/mnist.py", line 95, in getitem
img = self.transform(img)
File "/home/simon/anaconda3/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 60, in call
img = t(img)
File "/home/simon/anaconda3/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 163, in call
return F.normalize(tensor, self.mean, self.std, self.inplace)
File "/home/simon/anaconda3/lib/python3.6/site-packages/torchvision/transforms/functional.py", line 208, in normalize
tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
RuntimeError: expected type torch.FloatTensor but got torch.cuda.FloatTensor

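A note on this second traceback: the RuntimeError comes from transforms.Normalize being handed tensors on mismatched devices (one CPU, one CUDA). The usual pattern is to keep the dataset's transform pipeline entirely on the CPU and move each batch to the GPU in the training/validation step, not inside the dataset or the transforms. A minimal sketch of that pattern (generic PyTorch/torchvision, not the exact code in this repo):

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Keep the transform pipeline on the CPU: ToTensor() and Normalize() both
# operate on the CPU tensors produced by the dataset itself.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),  # standard MNIST mean/std
])

dataset = datasets.MNIST(root="data", train=False, download=True, transform=transform)
loader = DataLoader(dataset, batch_size=32, num_workers=2)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

for images, labels in loader:
    # Move the already-normalized batch to the GPU here, not in the dataset.
    images, labels = images.to(device), labels.to(device)
    # ... forward pass on `images` ...
    break
```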
luiscape pushed a commit to luiscape/pytorch-lightning that referenced this issue Jan 17, 2020
Borda added a commit that referenced this issue May 29, 2020
Borda added a commit that referenced this issue Jun 3, 2020
williamFalcon added a commit that referenced this issue Jun 4, 2020
* fixed new amp bugs

* fixed new amp bugs

* fixed new amp bugs

* try exit

* larger dataset

* full mnist

* full mnist

* trainer

* assert

* .05

* .10, #4

* #5

* #5

* #5

* refactor

* abs diff

* speed

* speed

* speed

* speed

Co-authored-by: J. Borovec <[email protected]>
Co-authored-by: Jirka <[email protected]>
justusschock pushed a commit that referenced this issue Jun 29, 2020