📚 Documentation
The documentation on DDP currently says:
> Using DDP this way has a few disadvantages over `torch.multiprocessing.spawn()`:
> - All processes (including the main process) participate in training and have the updated state of the model and Trainer state.
> - No multiprocessing pickle errors
> - Easily scales to multi-node training
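For context, here is a minimal sketch (not from the docs) of how the two launch modes being compared are selected through the Lightning Trainer API, assuming the `strategy="ddp"` / `strategy="ddp_spawn"` flags:

```python
# A minimal sketch, assuming the Lightning Trainer strategy flags.
import pytorch_lightning as pl

# "ddp": the launcher re-runs the training script once per device, so every
# process (including the main one) participates in training and holds the
# updated model/Trainer state afterwards.
trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="ddp")

# "ddp_spawn": worker processes are created via torch.multiprocessing.spawn(),
# which requires pickling and can therefore hit multiprocessing pickle errors.
trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="ddp_spawn")
```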
Are these meant to be advantages instead of disadvantages?
cc @lantiga @Borda