IPU Integration #7735


Closed · wants to merge 34 commits into from

Conversation

SeanNaren (Contributor)

What does this PR do?

Fixes #<issue_number>

Before submitting

  • Was this discussed/approved via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or internal minor changes/refactorings)

PR review

Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet list:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

Did you have fun?

Make sure you had fun coding 🙃

    return acc

def validation_epoch_end(self, outputs) -> None:
    self.log('val_acc', torch.stack(outputs).mean(), prog_bar=True)
SeanNaren (Contributor, Author)

Need to clarify why this lives here rather than in the step itself: the step functions are jitted, and their outputs are collated from all devices, so reductions such as mean averaging cannot happen inside the step functions.
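
For context, a minimal runnable sketch of the pattern above (the module and layer names are illustrative, not from this PR): the per-batch metric is computed inside the jitted step, and the cross-device reduction is deferred to validation_epoch_end.

import torch
from pytorch_lightning import LightningModule


class IPUExampleModule(LightningModule):  # hypothetical example module
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        # Fine inside the jitted step: a per-batch metric on this device.
        acc = (self(x).argmax(dim=-1) == y).float().mean()
        return acc

    def validation_epoch_end(self, outputs) -> None:
        # Outputs from all devices have been collated by now, so the
        # cross-device mean can only happen here, not inside the step.
        self.log('val_acc', torch.stack(outputs).mean(), prog_bar=True)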

codecov bot commented May 27, 2021

Codecov Report

Merging #7735 (61d2014) into master (41be61c) will decrease coverage by 6%.
The diff coverage is 48%.

❗ Current head 61d2014 differs from pull request most recent head d76f491. Consider uploading reports for the commit d76f491 to get more accurate results.

@@           Coverage Diff           @@
##           master   #7735    +/-   ##
=======================================
- Coverage      93%     87%    -6%     
=======================================
  Files         202     205     +3     
  Lines       13121   13372   +251     
=======================================
- Hits        12154   11623   -531     
- Misses        967    1749   +782     

Comment on lines +157 to +167
def on_reset_train_dataloader(self, dataloader) -> Union[Iterable, DataLoader]:
    return self.process_dataloader(dataloader)

def on_reset_val_dataloader(self, dataloader) -> Union[Iterable, DataLoader]:
    return self.process_dataloader(dataloader)

def on_reset_test_dataloader(self, dataloader) -> Union[Iterable, DataLoader]:
    return self.process_dataloader(dataloader)

def on_reset_predict_dataloader(self, dataloader) -> Union[Iterable, DataLoader]:
    return self.process_dataloader(dataloader)
SeanNaren (Contributor, Author)

These need to be pulled into the base plugin in a separate PR if we're comfortable adding these hooks. The reason they were introduced: process_dataloader assumes the dataset size doesn't change, and if the size does change, the progress bar total is wrong.

These hooks run early enough that the progress bar is correct. I also looked into moving process_dataloader, but that is finicky.
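
To illustrate the motivation, a minimal sketch assuming a plugin whose process_dataloader rewraps the loader and changes its length (the class name, device_iterations attribute, and rewrapping body are illustrative, not the PR's actual code):

from typing import Iterable, Union
from torch.utils.data import DataLoader


class ExampleIPUPlugin:  # hypothetical stand-in for the real plugin
    device_iterations = 4  # illustrative: batches consumed per device step

    def process_dataloader(self, dataloader: DataLoader) -> Union[Iterable, DataLoader]:
        # Rewrapping for the device can change len(dataloader), so it must
        # run before the progress bar queries the dataloader's length.
        return DataLoader(
            dataloader.dataset,
            batch_size=dataloader.batch_size * self.device_iterations,
        )

    def on_reset_train_dataloader(self, dataloader: DataLoader) -> Union[Iterable, DataLoader]:
        # This hook runs early enough that the progress bar picks up the
        # rewrapped loader's new length.
        return self.process_dataloader(dataloader)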

Contributor

Instead of the multiple hooks, the method process_dataloader could simply be the one that goes into the base class.

SeanNaren (Contributor, Author)

There was some confusion here; process_dataloader already exists in the TrainingTypePlugin.
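
For reference, the existing base-class hook is roughly a pass-through; a sketch of the default behavior, not the IPU override:

from typing import Iterable, Union
from torch.utils.data import DataLoader


class TrainingTypePlugin:
    def process_dataloader(self, dataloader: Union[Iterable, DataLoader]) -> Union[Iterable, DataLoader]:
        """Wraps the dataloader if necessary; by default a pass-through."""
        return dataloader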

from pytorch_lightning.plugins.precision.precision_plugin import PrecisionPlugin


class IPUPrecisionPlugin(PrecisionPlugin):
SeanNaren (Contributor, Author)

@awaelchli said that before the next boilerplate precision plugin we should refactor to have the PrecisionPlugin inside the TrainingTypePlugin, but maybe we can allow one more if that doesn't happen in time :P
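
For reference, such a boilerplate precision plugin is typically only a few lines; a sketch under the assumption that it just records the precision value (the real plugin in this PR may do more):

from pytorch_lightning.plugins.precision.precision_plugin import PrecisionPlugin


class IPUPrecisionPlugin(PrecisionPlugin):
    def __init__(self, precision: int) -> None:
        super().__init__()
        # Only records the target precision; the training-type plugin
        # reads it when preparing the model for the IPU.
        self.precision = precision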

precision = self.lightning_module.trainer.accelerator.precision_plugin.precision
precision = 16 if self.half else precision

model = LightningIPUModule(self.lightning_module, precision)
Member

We wrap the model in so many different places (for rerouting forward to the steps in DDP, for example). Should we maybe do a wrapping like this all the time, so that we always reroute forward and are more consistent in our internals? (Not in this PR, just a general consideration.)

cc @awaelchli @ananthsub
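
To make the consideration concrete, a hypothetical sketch of the rerouting pattern (ForwardRedirectWrapper and its stage attribute are invented names, not Lightning API): engines that only ever call forward still end up executing the right *_step.

import torch


class ForwardRedirectWrapper(torch.nn.Module):  # hypothetical illustration
    def __init__(self, pl_module: torch.nn.Module, stage: str = "train"):
        super().__init__()
        self.module = pl_module
        self.stage = stage  # "train" | "val" | "test" | "predict"

    def forward(self, *args, **kwargs):
        # Dispatch forward() to the wrapped module's current step method,
        # the same trick the DDP wrappers and LightningIPUModule rely on.
        step = {
            "train": self.module.training_step,
            "val": self.module.validation_step,
            "test": self.module.test_step,
            "predict": self.module.predict_step,
        }[self.stage]
        return step(*args, **kwargs)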

Lightning-AI deleted a comment from pep8speaks on Jun 3, 2021
SeanNaren changed the title from "[WIP] Acc" to "IPU Integration" on Jun 4, 2021
SeanNaren added 3 commits June 7, 2021 11:40
# Conflicts:
#	.azure-pipelines/ipu-tests.yml
#	pytorch_lightning/plugins/training_type/training_type_plugin.py
#	pytorch_lightning/trainer/data_loading.py
#	pytorch_lightning/trainer/trainer.py
#	pytorch_lightning/trainer/training_loop.py
# Conflicts:
#	pytorch_lightning/accelerators/accelerator.py
#	pytorch_lightning/plugins/training_type/training_type_plugin.py
SeanNaren (Contributor, Author) commented Jun 7, 2021

EDIT: closed and opened a new PR since a lot has changed :)

SeanNaren closed this Jun 7, 2021