Add support for Habana accelerator (HPU) #11808
Merged
kaushikb11 merged 183 commits into Lightning-AI:master from jerome-habana:hpu_accelerator on Mar 25, 2022

Changes shown are from 17 of the 183 commits.

Commits:
f7175c4 Add hpu accelerator support (jerome-habana)
7fb871b Update strategy for optimizer usage (jerome-habana)
a1a1ca9 Add checkpointing support (jerome-habana)
9a6da43 Fix distributed support with hpu (jerome-habana)
3e76db9 Enable usage of static_graph with hpu (jerome-habana)
b43d226 Add HPU tests (jerome-habana)
992093d Add basic hpu_stats monitor (jerome-habana)
943be49 Code cleanup (jerome-habana)
3015972 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
257d644 Update tests (jerome-habana)
f1867cd [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
c61d68b Add configurable params for tests (jerome-habana)
f74a898 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
963cd1e Enable inference test (jerome-habana)
53a5416 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
2de04e8 Resolve issue with hmp params type and load hpu (jerome-habana)
0197b9c [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
b412638 Move hmp_params to HPUPrecision plugin (jerome-habana)
e549434 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
1cc0a37 Update habana distributed with ddp subclass (jerome-habana)
aeda681 Add hpu backend, datatype checks (jerome-habana)
fe32865 Merge branch 'master' into hpu_accelerator (jerome-habana)
f9b0c5f Merge branch 'master' into hpu_accelerator (jerome-habana)
123112d [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
ede68eb Remove unused param for 'on_train_batch_end' in hpu test (jerome-habana)
262343a Merge branch 'master' into hpu_accelerator (jerome-habana)
3a029c1 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
0a959f0 Addres review comments (jerome-habana)
1434299 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
400ea77 Address review comments (jerome-habana)
4146bab [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
f5cb696 remove deprecated logging (jerome-habana)
d3cd6b1 Merge branch 'master' into hpu_accelerator (jerome-habana)
448ed77 Fix imports for failing CI (kaushikb11)
10b190f fix str to_device section in converting.rst (#12243) (awaelchli)
c17c62b Disable tuner with distributed strategies (#12179) (rohitgr7)
28bc4f0 Add callout items to the Docs landing page (#12196) (kaushikb11)
97e1d28 Integrate global step with progress tracking (#11805) (carmocca)
5aecf65 Deprecate `LightningDataModule.on_save/load_checkpoint` (#11893) (jjenniferdai)
0949599 add Azure HPU agent (#12258) (Borda)
4bd5034 Add `LightningCLI(auto_registry)` (#12108) (carmocca)
bd76456 Drop PyTorch 1.7 testing from the CI (#12191) (krshrimali)
80b8d01 Have the outputs match the loops format (#12182) (carmocca)
c168db5 Address review comments (jerome-habana)
831a672 Review comment :Make use of Boring model (jerome-habana)
328329e Update stats example trainer params (jerome-habana)
c8e331e Correct flake8 errors (jerome-habana)
9a71bdc Remove docstring examples (jerome-habana)
8efed0b Update hpu-tests.yml (raoakarsha)
90409a2 prune (Borda)
5bbc6dc Update hpu-tests.yml (Borda)
85f535b Apply suggestions from code review (Borda)
75227d9 hwinfo (Borda)
711bbf3 Override mypy warnings (jerome-habana)
bc174f6 Update test and requirements file (jerome-habana)
b28c0ce Remove hpu stats monitor and deprecated API's (jerome-habana)
3c08bf5 Update non-hpu tests (jerome-habana)
f857721 Add hpu-tests.yml and run_hpu_tests.py to support HPU Testing (Borda)
a2b2cb1 Merge branch 'master' into hpu_accelerator (jerome-habana)
7cb34bc [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
f6baf69 Add exception for non-hpu tests (jerome-habana)
21fc9a4 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
3665ffc Throw exception when accelerator is not present (jerome-habana)
e0b4611 Resolve mypy and error message (jerome-habana)
545ab6a [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
96ed1cd Disable hpu pl examples on CPU (jerome-habana)
c44b017 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
410875c Address review comments (jerome-habana)
8efe56f [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
073b170 Add documentation for habana gaudi accelerator (HPU) (jerome-habana)
7bdcaf6 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
da1037a Update test code syntax (jerome-habana)
5e7af01 Mitigate duplicate label error (jerome-habana)
70d6993 Add hpu to toctree (jerome-habana)
5061d71 Update pytorch_lightning/plugins/precision/hpu_precision.py (kaushikb11)
f6c36ce [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
798f137 Update _broadvast_object_list (kaushikb11)
5e098cb Update broadcast for HPUParallelStrategy (kaushikb11)
093056c [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
0563310 Update reference links (kaushikb11)
65886ba Update Strategies (kaushikb11)
d837ef3 Address reviews (kaushikb11)
37e0000 Address reviews (kaushikb11)
07c60b4 Address reviews (jerome-habana)
394d9e2 Merge branch 'master' into hpu_accelerator (jerome-habana)
12dc3ca [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
3064544 Remove too many sections from sidebar (akihironitta)
7c7721d Fix invalid formatting and links (akihironitta)
cc71c7a Merge branch 'master' into hpu_accelerator (kaushikb11)
e6eaa9f Address reviews for HPUCHeckpointIO (kaushikb11)
33beabd Address reviews for HPU + AcceleratorConnector (kaushikb11)
759804e Fix tests (kaushikb11)
bda7e36 Address reviews (kaushikb11)
bdc19be Remove setting hpu accelerator by just strategy (kaushikb11)
2d34cc5 Remove unnecessary properties for HPU (kaushikb11)
c32601a Fix HPU tests (kaushikb11)
f43750e Move tests (kaushikb11)
4e09286 Improve docs (kaushikb11)
ab2f595 Improve tests (kaushikb11)
549d784 Update Changelog (kaushikb11)
ec929df Fix test for the rigth device type (kaushikb11)
c55a82f Fix tests (kaushikb11)
05dcc1c Fix tests (kaushikb11)
150e667 Merge branch 'master' into hpu_accelerator (kaushikb11)
f5a333b Address reviews (kaushikb11)
57b9c24 Update plugins (kaushikb11)
3dd763c Update docs/source/accelerators/hpu.rst (kaushikb11)
773a7a0 Update HPU mnist example (kaushikb11)
9378c87 Update strategy (kaushikb11)
9aefcd2 Address reviews (jerome-habana)
1f0b187 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
1d30ef9 Add precision tests to azure pipeline (jerome-habana)
fd9488f [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
a4f79fb Add comments (kaushikb11)
a6a336d Fix argparse (kaushikb11)
dca30ee Remove unnecessary use of PL_TORCH_DISTRIBUTED_BACKEND env variable (kaushikb11)
bb8984f Update pytorch_lightning/strategies/hpu_parallel.py (kaushikb11)
4ab35db Update pytorch_lightning/utilities/distributed.py (kaushikb11)
e65a3fb [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
a517942 Address review (jerome-habana)
d89815d Address reviews (kaushikb11)
0238b45 Update document (jerome-habana)
4f44ea9 Improve Habana doc (kaushikb11)
f332e1c Improve Habana doc (kaushikb11)
81202c6 Improve Habana doc (kaushikb11)
503df4e Update pytorch_lightning/trainer/connectors/accelerator_connector.py (kaushikb11)
e6af417 Update links (kaushikb11)
2bd4a66 Merge branch 'hpu_accelerator' of https://github.com/jerome-habana/py… (kaushikb11)
67e710e Update precision sections (kaushikb11)
1df801b Update doc (kaushikb11)
9152114 Add defaults to hmp_params for Precision Plugin (kaushikb11)
9846b6a Update .azure-pipelines/run_hpu_tests.py (kaushikb11)
e86becf [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
d165c44 Apply suggestions from code review (kaushikb11)
c76b95f Update docs/source/accelerators/hpu.rst (kaushikb11)
bafcb8d [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
2d6c6dd Apply suggestions from code review (kaushikb11)
75728b6 Apply suggestions from code review (kaushikb11)
68c5281 Update docs/source/accelerators/hpu.rst (kaushikb11)
600e1bd Address reviews (kaushikb11)
b03d079 Apply suggestions from code review (kaushikb11)
6e4474e Update API references (kaushikb11)
efd9f65 Address reviews regarding precision (kaushikb11)
22827f0 Address reviews regarding docs and precision (kaushikb11)
e82544c Update docs/source/accelerators/hpu.rst (kaushikb11)
4500a7e [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
98ba21f Apply suggestions from code review (kaushikb11)
3c10359 Address reviews & update tests (kaushikb11)
6c0dd88 Merge branch 'hpu_accelerator' of https://github.com/jerome-habana/py… (kaushikb11)
e137f19 Update testing pipeline & conftest (kaushikb11)
a62cfa1 Fix ci (kaushikb11)
1078a69 Add device parsing logic for HPUs (kaushikb11)
a9dfcf3 Fix device parsing (kaushikb11)
4665101 Use the CLI in the example
2ee4bbf Docs
e9ae312 Merge branch 'master' into hpu_accelerator (kaushikb11)
dc3eca7 Update docs/source/accelerators/hpu.rst (kaushikb11)
6952125 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
91cced3 Update hmp_params (kaushikb11)
0671d2c Support passing amp_level to HPUPrecision (kaushikb11)
522106e Update HPUAccelerator (kaushikb11)
c8b89ea Update tests (kaushikb11)
7d028b1 Fix precision tests (kaushikb11)
3c86aff Update device parsing logic (kaushikb11)
3c8e321 Fix tests & address reviews (kaushikb11)
dcda0ac Update run_hpu_tests (kaushikb11)
e254cd0 Update CLI test (jerome-habana)
c452bd2 Fix typing (kaushikb11)
4c51b33 Merge branch 'hpu_accelerator' of https://github.com/jerome-habana/py… (kaushikb11)
b66c867 Merge branch 'master' into hpu_accelerator (jerome-habana)
dca6b0f [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
98e901d Enable example test in pipeline (jerome-habana)
2860a4e export path of modules (jerome-habana)
a297593 Fix test (kaushikb11)
9c1fff7 Merge branch 'hpu_accelerator' of https://github.com/jerome-habana/py… (kaushikb11)
65f1fb9 Update torch distributed (kaushikb11)
2380887 Update strategy (kaushikb11)
59ef6fd Update example (kaushikb11)
c02c1ed Apply suggestions from code review (kaushikb11)
beda30c Address reviews (kaushikb11)
eb99e52 Merge branch 'hpu_accelerator' of https://github.com/jerome-habana/py… (kaushikb11)
c465a06 Update backend env variable for strategy (kaushikb11)
60f2da4 Update backend env variable for strategy (kaushikb11)
@@ -0,0 +1,62 @@
import os
import sys

import habana_frameworks.torch.core as htcore
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST

import pytorch_lightning as pl
from pytorch_lightning.callbacks import HPUStatsMonitor


class MNISTModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def training_step(self, batch, batch_nb):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.02)


# Init our model
mnist_model = MNISTModel()

# Init DataLoader from MNIST Dataset
train_ds = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
train_loader = DataLoader(train_ds, batch_size=32)

# TBD: import these keys from hmp
hmp_keys = ["level", "verbose", "bf16_ops", "fp32_ops"]
hmp_params = dict.fromkeys(hmp_keys)
hmp_params["level"] = "O1"
hmp_params["verbose"] = False
hmp_params["bf16_ops"] = "./pl_examples/hpu_examples/simple_mnist/ops_bf16_mnist.txt"
hmp_params["fp32_ops"] = "./pl_examples/hpu_examples/simple_mnist/ops_fp32_mnist.txt"

hpu_stats = HPUStatsMonitor(log_save_dir="habana_ptl_log", exp_name="mnist")

# Initialize a trainer
trainer = pl.Trainer(
    devices=1,
    callbacks=[hpu_stats],
    max_epochs=1,
    precision=32,
    hmp_params=hmp_params,
    default_root_dir="/tmp/",
    accelerator="hpu",
)

# Train the model ⚡
trainer.fit(mnist_model, train_loader)
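A note for readers following the commit history: passing hmp_params straight to the Trainer only exists at this point in the PR; commits b412638 ("Move hmp_params to HPUPrecision plugin"), 9152114, and 0671d2c later move mixed-precision configuration into the HPU precision plugin. A minimal sketch of the shape the API converges on; the argument names here are a best-effort assumption from the commit titles, not authoritative:

# Sketch only: approximates the HPUPrecisionPlugin configuration this PR converges on.
# Argument names are assumed; check pytorch_lightning/plugins/precision/hpu_precision.py.
import pytorch_lightning as pl
from pytorch_lightning.plugins import HPUPrecisionPlugin

trainer = pl.Trainer(
    accelerator="hpu",
    devices=1,
    max_epochs=1,
    plugins=[
        HPUPrecisionPlugin(
            precision=16,  # bf16 mixed precision on Gaudi
            opt_level="O1",
            bf16_file_path="./pl_examples/hpu_examples/simple_mnist/ops_bf16_mnist.txt",
            fp32_file_path="./pl_examples/hpu_examples/simple_mnist/ops_fp32_mnist.txt",
        )
    ],
)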
@@ -0,0 +1,2 @@
linear
relu
@@ -0,0 +1 @@
cross_entropy
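The two text files above are the op lists that drive Habana's mixed-precision helper (hmp): ops named in the bf16 list run in bfloat16, while ops in the fp32 list are kept in float32. Outside of Lightning, the same lists can be handed to hmp directly; a minimal sketch, assuming a Habana (SynapseAI) PyTorch install and the hmp.convert keyword names used in Habana's published examples, which may differ across releases:

# Sketch only: requires an HPU machine with habana_frameworks installed.
# Keyword names follow Habana's examples and are an assumption here.
import torch
from habana_frameworks.torch.hpex import hmp

hmp.convert(
    opt_level="O1",  # mode that uses the user-supplied op lists below
    bf16_file_path="ops_bf16_mnist.txt",  # linear, relu -> bfloat16
    fp32_file_path="ops_fp32_mnist.txt",  # cross_entropy -> float32
    isVerbose=False,
)

# After convert(), listed ops on HPU tensors run in bf16 automatically.
model = torch.nn.Linear(28 * 28, 10).to("hpu")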
@@ -0,0 +1,33 @@
# Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Any, Dict, Union

import torch

from pytorch_lightning.accelerators.accelerator import Accelerator


class HPUAccelerator(Accelerator):
    """Accelerator for HPU devices."""

    def get_device_stats(self, device: Union[str, torch.device]) -> Dict[str, Any]:
        """HPU device stats aren't supported yet."""
        return {}

    @staticmethod
    def auto_device_count() -> int:
        """Get the devices when set to auto."""
        # TBD: make this configurable
        return 8
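auto_device_count is what the accelerator connector falls back on when no explicit device count is given; the hard-coded 8 corresponds to a fully populated Gaudi server, and the TBD comment flags making it configurable. A short sketch of how this surfaces to users, assuming an HPU-equipped host:

# Sketch only: assumes an HPU machine; "auto" resolves through auto_device_count().
import pytorch_lightning as pl
from pytorch_lightning.accelerators import HPUAccelerator

print(HPUAccelerator.auto_device_count())  # 8 at this commit

# devices="auto" asks the accelerator for its count, so this would
# train on all 8 Gaudi devices of a standard node.
trainer = pl.Trainer(accelerator="hpu", devices="auto")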
@@ -0,0 +1,80 @@
# Copyright (C) 2021 Habana Labs, Ltd. an Intel Company
# All Rights Reserved.
#
# Unauthorized copying of this file or any element(s) within it, via any medium
# is strictly prohibited.
# This file contains Habana Labs, Ltd. proprietary and confidential information
# and is subject to the confidentiality and license agreements under which it
# was provided.
#

# Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
HPU Stats Monitor
=================

Monitors and logs HPU stats during training.

"""
from typing import Any, Dict, List, Optional, Tuple

import torch

import pytorch_lightning as pl
from pytorch_lightning.callbacks.base import Callback
from pytorch_lightning.utilities import rank_zero_only


class HPUStatsMonitor(Callback):
    """Automatically monitors and logs HPU stats during the training stage.

    Args:
        log_save_dir: directory to save the logs.
        exp_name: name of the experiment.

    Example::

        >>> from pytorch_lightning import Trainer
        >>> from pytorch_lightning.callbacks import HPUStatsMonitor
        >>> hpu_stats = HPUStatsMonitor()
        >>> trainer = Trainer(hpus=1, callbacks=[hpu_stats])

    You can also optionally provide log_save_dir and exp_name to HPUStatsMonitor.
    No need to provide a logger to the Trainer.
    """

    def __init__(self, log_save_dir: str = "habana_ptl_logs", exp_name: str = "default"):
        super().__init__()
        self.log_save_dir = log_save_dir
        self.exp_name = exp_name

    def on_init_end(self, trainer: "pl.Trainer") -> None:
        from pytorch_lightning import loggers as pl_logger

        self.tb_logger = pl_logger.TensorBoardLogger(save_dir=self.log_save_dir, name=self.exp_name)
        trainer.logger = self.tb_logger

    def on_before_backward(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule", loss: torch.Tensor) -> None:
        pl_module.log("Model_Loss", loss, on_step=True, on_epoch=True, enable_graph=False, logger=True)

    def on_train_epoch_end(
        self, trainer: "pl.Trainer", pl_module: "pl.LightningModule", unused: Optional = None
    ) -> None:
        tensor_board = trainer.logger.experiment
        # Walk the module's direct submodules and log weight/bias histograms.
        modules = vars(pl_module)["_modules"]
        for module_name in modules:
            tensor_board.add_histogram(module_name + ".weight", modules[module_name].weight, pl_module.current_epoch)
            tensor_board.add_histogram(module_name + ".bias", modules[module_name].bias, pl_module.current_epoch)
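The epoch-end hook above reaches into _modules via vars() and assumes every direct child has both a .weight and a .bias, which holds for the single-Linear example but not in general. A more defensive variant, shown as a sketch rather than anything this PR ships, iterates named_parameters() so nested modules and bias-free layers are handled too:

# Sketch only: a defensive alternative to the vars(pl_module)["_modules"] walk.
import pytorch_lightning as pl


def log_param_histograms(trainer: "pl.Trainer", pl_module: "pl.LightningModule") -> None:
    tensor_board = trainer.logger.experiment
    # named_parameters() yields ("l1.weight", tensor), ("l1.bias", tensor), ...,
    # including nested submodules, and simply omits parameters that don't exist.
    for name, param in pl_module.named_parameters():
        tensor_board.add_histogram(name, param, pl_module.current_epoch)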