VLM: Model Tracing Guide #1030
Merged
Changes from all commits (369 commits)
3830696
preliminary data pipeline
kylesayrs 1ecaa39
WIP
kylesayrs 9aa9679
delete unnecessary files
kylesayrs 7e6fe17
Merge remote-tracking branch 'origin' into kylesayrs/gptq-partition
kylesayrs 034c0b1
Merge branch 'kylesayrs/gptq-hooks' into kylesayrs/gptq-partition
kylesayrs a62617c
clean up CustomDataset
kylesayrs 57b5e02
chchchchanges
kylesayrs fa317fd
wip: use rename to processor, going through tests
kylesayrs f3f5875
remove labels from calibration dataset rather than assuming that all …
kylesayrs 58c3afe
cleanup
kylesayrs 72aecfc
cleanup, etc
kylesayrs 77217fb
Merge remote-tracking branch 'origin' into kylesayrs/cleanup-custom-d…
kylesayrs 4461a3e
fix typehinting
kylesayrs fb33001
add typechecking imports
kylesayrs bf4744a
remove sparseml utilities
kylesayrs 62ae31d
Merge branch 'kylesayrs/remove-sparseml-utilities' into kylesayrs/cle…
kylesayrs 7e516c1
use in model_load
kylesayrs d69106e
Merge branch 'main' into kylesayrs/calculate_offload_default_gpus
kylesayrs 9e33641
remove use of RECIPE FILE NAME
kylesayrs 58c0fba
rename to RECIPE_FILE_NAME, avoid circular import
kylesayrs b28aaae
Merge branch 'kylesayrs/remove-sparseml-utilities' into kylesayrs/cle…
kylesayrs 8d13013
image dataset collation
kylesayrs 17cf9f3
Merge branch 'kylesayrs/cleanup-custom-dataset' into kylesayrs/gptq-p…
kylesayrs 163ee8f
cleanup, do not handle case where processor is None
kylesayrs 1180b34
remove qa ignore
kylesayrs ad20ae7
Merge branch 'kylesayrs/remove-sparseml-utilities' into kylesayrs/cle…
kylesayrs c431958
add documentation
kylesayrs b48d55d
add data collator arg
kylesayrs 2d201e0
Merge branch 'kylesayrs/cleanup-custom-dataset' into kylesayrs/gptq-p…
kylesayrs 0ed5c2c
use default factor
kylesayrs ca61e90
Merge branch 'kylesayrs/cleanup-custom-dataset' into kylesayrs/gptq-p…
kylesayrs 41dd463
wip mllama
kylesayrs 8527e0e
cleanup
kylesayrs 0a8a03f
merge-implement hessian offloading
kylesayrs fc044e2
better concrete arg handling
kylesayrs 4576712
validate flickr
kylesayrs 5276c58
discover bug, tests and multimodal working
kylesayrs dffcbc3
dataset split fallbacks
kylesayrs b3cb229
Merge branch 'kylesayrs/cleanup-custom-dataset' into kylesayrs/gptq-p…
kylesayrs 779c9a2
Merge branch 'kylesayrs/dataset-split-fallbacks' into kylesayrs/clean…
kylesayrs 85e3f59
Merge branch 'kylesayrs/cleanup-custom-dataset' into kylesayrs/gptq-p…
kylesayrs e9f150d
move typing
kylesayrs d061567
cleanup, depreciate remove_columns argument
kylesayrs 55a31ca
silently assign tokenizer to processor
kylesayrs c14e40e
Merge branch 'kylesayrs/cleanup-custom-dataset' into kylesayrs/gptq-p…
kylesayrs 1aba16d
replace tokenizer with processor
kylesayrs 135e459
Merge branch 'kylesayrs/processor-replaces-tokenizer' into kylesayrs/…
kylesayrs dde2fa7
Merge branch 'kylesayrs/cleanup-custom-dataset' into kylesayrs/gptq-p…
kylesayrs 89bda30
defer data collator changes
kylesayrs 0fa4102
reduce warnings
kylesayrs bc505bf
typehinting, add not-implemented error
kylesayrs c91ba77
remove todos
kylesayrs e916936
Delete mllama.py
kylesayrs 0a573a1
update dataset manager api in tests
kylesayrs 853c0a8
typehinting, add not-implemented error
kylesayrs 234ef79
remove todos
kylesayrs 8972dd5
update dataset manager api in tests
kylesayrs acb1a18
Delete examples/multimodal_vision/qwen_vl2.py
kylesayrs 56b5d12
Delete examples/multimodal_vision/mllama.py
kylesayrs 57c293e
WIP: add pixtral
kylesayrs 537c5ab
pixtral working
kylesayrs 15b3508
move to data pipeline
kylesayrs 42b5fc0
disable_hf_hook context
kylesayrs bc33e8e
woof
kylesayrs ca72bbb
change desc
kylesayrs 293640a
fix docstring
kylesayrs 17b3a70
rely on compressed tensors, support offloading
kylesayrs 5e185f2
sequential targets
kylesayrs 4d82180
support match_layers_params
kylesayrs 6a1b2c2
make _update_size private and inferred
kylesayrs f9ab6fc
make a module
kylesayrs 0dc74dd
fallback
kylesayrs 9e07188
implement basic pipeline
kylesayrs ed099ef
balance between gpus
kylesayrs 4bbbc49
add proper ignore list
kylesayrs ae74f45
treat offloaded modules as leaves, treat ignore as sequential target
kylesayrs 31eeb8c
redisable piecewise for vision datasets
kylesayrs 1b24090
implement pipeline fallback
kylesayrs d97ef2b
Merge remote-tracking branch 'origin' into kylesayrs/processor-replac…
kylesayrs e87e019
remove subbatch event
kylesayrs d5c08fb
input device inference
kylesayrs 39ed8ca
do not disable hf hook during tracing
kylesayrs 47ca742
Merge remote-tracking branch 'origin' into kylesayrs/gptq-partition
kylesayrs c1f5cb2
Merge remote-tracking branch 'origin' into kylesayrs/cleanup-custom-d…
kylesayrs 4711e9f
remove import
kylesayrs e468197
use find_nodes
kylesayrs f8591ca
rename piecewise to sequential
kylesayrs cea02d2
add docstring
kylesayrs f1f6c0f
begin sequential pipeline testing
kylesayrs 3b0b49f
remove todos, add tests for sequential pipeline
kylesayrs 2c035b3
move function placement
kylesayrs b93868d
slight partition algorithm change
kylesayrs 146e4be
revert llama3 example
kylesayrs 0e4d8f3
Merge branch 'main' into kylesayrs/dataset-split-fallbacks
kylesayrs b8e867d
Merge branch 'main' into kylesayrs/processor-replaces-tokenizer
kylesayrs ccb007f
remove test, fix default in order to fix tests
kylesayrs e1055b0
bump memory requirements
kylesayrs 70421ed
fix memory and offloading issues
kylesayrs b102bf5
add missing cache file
kylesayrs 229d3ae
make mllama tracable
kylesayrs 4e0b118
write using comprehesion
kylesayrs 7dc4d2a
fix hessian requirements
kylesayrs 377b2a4
implement offloading for tuple
kylesayrs adb1627
add save
kylesayrs ab3fc81
change num samples
kylesayrs 1bf683e
implement intermediates offloading for dataclasses
kylesayrs 8918917
Merge branch 'main' into kylesayrs/processor-replaces-tokenizer
kylesayrs b75fe15
wrap ignore but do not treat as sequential target
kylesayrs aa4a23d
tracable pixtral/mistral
kylesayrs aa532b5
remove double saving
kylesayrs 19e4f97
revert dampening frac
kylesayrs f95b77f
do not cache model outputs to save memory
kylesayrs 2d890db
fix dataclass case, add tests
kylesayrs 7e69b9d
Merge remote-tracking branch 'origin' into kylesayrs/gptq-partition
kylesayrs 4a22032
Remove docstring
kylesayrs 8d72269
Merge branch 'main' into kylesayrs/processor-replaces-tokenizer
kylesayrs a71352a
move IntermediatesCache location
kylesayrs 2d249a2
add fake_sequential
kylesayrs 995cb2d
rename fake_sequential to layer_sequential
kylesayrs e4bca34
pipeline inference
kylesayrs 4a046a5
update docstrings
kylesayrs f24a2af
fix last layer bug
kylesayrs 691bac4
better inference
kylesayrs 1e15d3e
even better inference
kylesayrs a4744d9
do now throw warning for calibration with training
kylesayrs 9617e53
add information about how to silence warning
kylesayrs 3b4cac1
nice
kylesayrs f53a3dd
remove unnecessary warning silencing
kylesayrs f45d0fa
Merge branch 'kylesayrs/processor-replaces-tokenizer', remote-trackin…
kylesayrs 70a2811
Merge branch 'kylesayrs/dataset-split-fallbacks' into kylesayrs/gptq-…
kylesayrs fd151e4
add unmerged thing
kylesayrs d1d42de
fix deleted columns
kylesayrs 92151a1
handle dataset dict case
kylesayrs 4c049db
support torch.nn.Conv2d, silently ignore embeddings
kylesayrs 7667998
handle columns better
kylesayrs f0eb640
fix tokenizer args
kylesayrs af86f45
filter_tokenizer_args
kylesayrs 5567a90
Merge remote-tracking branch 'origin' into kylesayrs/gptq-partition
kylesayrs 0438e17
Merge remote-tracking branch 'origin' into kylesayrs/cleanup-custom-d…
kylesayrs 9b61145
update docstring
kylesayrs 2f65d01
remove unused util
kylesayrs 338d1cb
remove debug
kylesayrs f4fa9c3
more tests
kylesayrs 6bd1721
Merge remote-tracking branch 'origin' into kylesayrs/cleanup-custom-d…
kylesayrs e757e61
remove duplicate file
kylesayrs bdfa3d4
better help texts
kylesayrs cd9dd21
Merge branch 'kylesayrs/cleanup-custom-dataset' into kylesayrs/gptq-p…
kylesayrs f674579
Merge branch 'kylesayrs/calculate_offload_default_gpus' into kylesayr…
kylesayrs f1e1335
remove future notes, todos
kylesayrs e59c2e7
remove skipping patching
kylesayrs 4932ec5
remove skipping for none args
kylesayrs 6b7c11f
revert data split fallbacks
kylesayrs 601cb0e
rvert data split fallbacks
kylesayrs 4123636
propagate oom errors, separate data collators
kylesayrs c1e66e8
apply style, ignore visual on qwen
kylesayrs dc14e95
remove qwen while unsupported
kylesayrs 47249c5
remove smoothquant while unsupported
kylesayrs de40a84
clean up examples
kylesayrs 56ca97c
Merge remote-tracking branch 'origin' into kylesayrs/gptq-partition
kylesayrs 7f6e8cd
handle non-fast tokenizers
kylesayrs 1c8afe4
handle non-fast tokenizers
kylesayrs 3a9816c
address nits, add logging
kylesayrs 7be0c88
add back copyrights
kylesayrs bedbf8c
correctly update helptext
kylesayrs 7c54bed
Merge remote-tracking branch 'origin' into kylesayrs/cleanup-custom-d…
kylesayrs d27dad3
Merge branch 'main' into kylesayrs/cleanup-custom-dataset
dsikka 42f7892
do not remove prompt key
kylesayrs 4139628
add no copyright to hf files
kylesayrs 15fa27d
remove prompt key
kylesayrs ae16da3
do not process tokenized datasets, including adding labels
kylesayrs 9a08725
Merge branch 'kylesayrs/cleanup-custom-dataset' into kylesayrs/gptq-p…
kylesayrs 1eb7f83
Merge branch 'main' into kylesayrs/cleanup-custom-dataset
dsikka c3a663a
rename classes so the saved config is the original class
kylesayrs 0d484bf
Merge branch 'main' into kylesayrs/cleanup-custom-dataset
dsikka ddb6fc3
Merge remote-tracking branch 'origin/kylesayrs/cleanup-custom-dataset…
kylesayrs e71f4e5
remove default chat template
kylesayrs 966b96b
Merge branch 'kylesayrs/cleanup-custom-dataset' into kylesayrs/gptq-p…
kylesayrs 0195fab
support llava-1.5 via installing metadata
kylesayrs 148e617
account for models which improperly do not override the abstract methods
kylesayrs 5ae2300
Merge branch 'kylesayrs/patch-mal-models' into kylesayrs/gptq-partition
kylesayrs e5dd582
add ChatGLMForConditionalGeneration
kylesayrs 5303df2
list of unfixable errors
kylesayrs aa16223
Merge branch 'main' into kylesayrs/gptq-partition
dsikka 5124e24
Merge remote-tracking branch 'origin' into kylesayrs/gptq-partition
kylesayrs 14cbc97
add glm license, style
kylesayrs 4ac9018
Merge branch 'main' into kylesayrs/gptq-partition
dsikka ff470b3
add suggestion to use offload_hessians
kylesayrs c1c3eaa
update names and comments
kylesayrs e5af728
change tqdm description, add comment
kylesayrs 8fd93a7
add no vllm copyright to glm
kylesayrs 8e5f693
update comments, remove unnecessary default values
kylesayrs 0499bb1
Merge branch 'main' into kylesayrs/gptq-partition
dsikka c12c9f0
use text kwarg
kylesayrs ba360d7
WIP: provide tracing script
kylesayrs 7ba6f60
rename examples to have _example suffix
kylesayrs c582b00
remove hardcoded value
kylesayrs 435cf0d
update all list
kylesayrs 0d25307
update examples to use w4a16
kylesayrs 9abdea8
llava: clarify changes, undo style changes
kylesayrs 3dca7b3
glm comments, fix isort
kylesayrs f416674
correct typo 'tracable'
kylesayrs 71faee7
mllama: remove unnecessary definitions
kylesayrs 557467b
add keyboard interrupts to list of unfixable errors
kylesayrs e158b9b
mistral: remove unnecessary definitions
kylesayrs dfadc11
remove propagate_error argument
kylesayrs d146771
pipeline docstrings
kylesayrs bb77a44
add gptq lifecycle docstring
kylesayrs 14f5d88
layer sequential helpers docstrings
kylesayrs fde309a
update comments
kylesayrs e6a8fa8
sequential helpers docstrings
kylesayrs 954cd4e
more docstrings
kylesayrs 00309e9
IntermediatesCache docstrings
kylesayrs 57e8f21
free hessians on finalize
kylesayrs 378afb3
remove unnecessary examples
kylesayrs 83b81be
make diff closer to original implementation
kylesayrs b6c0a50
Merge branch 'main' into kylesayrs/gptq-partition
kylesayrs 5363d40
use original mask padding function
kylesayrs ae89688
reduce diff
kylesayrs 1af401f
Merge branch 'kylesayrs/gptq-partition' into kylesayrs/traceability-r…
kylesayrs 8913155
Merge remote-tracking branch 'origin' into kylesayrs/traceability-readme
kylesayrs d1f9352
merge dreggs
kylesayrs 3230f88
fix link
kylesayrs f62dadd
sequential targets and ignore
kylesayrs 5a92be0
guide roadmapping
kylesayrs d906bc5
Defining your own Traceable Model Definitions
kylesayrs eedfc5a
fix links
kylesayrs 7040bdf
WIP
kylesayrs 0161feb
WIP
kylesayrs ea46517
add argparse
kylesayrs 76e6078
WIP: more progress
kylesayrs adadbec
add attempt_trace entrypoint
kylesayrs eef15b4
general readability, typos
kylesayrs 407f325
first draft readme
kylesayrs 50301b7
fix link
kylesayrs 0e3e8bd
Merge branch 'main' into kylesayrs/traceability-readme
kylesayrs feeb67e
partial derivatives are not alphanumeric
kylesayrs d6441f5
rename attempt_trace to trace
kylesayrs 3bd3ca7
Merge remote-tracking branch 'origin' into kylesayrs/traceability-readme
kylesayrs bb7ca2e
Merge branch 'main' into kylesayrs/traceability-readme
kylesayrs 08fad5d
rename to guide, link to guide in warning
kylesayrs 5f23f52
typos
kylesayrs 6c71263
typo
kylesayrs 08f9f79
add summary
kylesayrs 9e6ceb8
Update src/llmcompressor/pipelines/sequential/README.md
kylesayrs 7536b7d
Merge branch 'main' into kylesayrs/traceability-readme
kylesayrs 32dd0e3
Merge branch 'main' into kylesayrs/traceability-readme
dsikka 68586cb
Merge branch 'main' into kylesayrs/traceability-readme
dsikka f3d9162
use modality kwarg
kylesayrs 5547e98
add image descriptions, fix typos
kylesayrs c8659ef
remove mention of sgpt until those changes land
kylesayrs File filter
Filter by extension
Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
There are no files selected for viewing
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
@@ -0,0 +1,6 @@
# Sequential Pipeline #
The sequential pipeline is a data pipeline, primarily used for compressing models with the
[GPTQModifier](/src/llmcompressor/modifiers/quantization/gptq/base.py).

If, when using this pipeline, you encounter a `torch.fx.proxy.TraceError`, see the
[Model Tracing Guide](/src/llmcompressor/transformers/tracing/GUIDE.md).
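The layer-by-layer idea behind a sequential data pipeline can be sketched without any framework: process every cached input through one stage before moving to the next, so only one stage's activations are live at a time. This is a toy illustration of the scheduling pattern only, not the GPTQModifier implementation:

```python
# Toy sketch of sequential (stage-by-stage) calibration scheduling:
# run all cached inputs through a stage, then advance to the next stage.
# A real pipeline would calibrate/compress each stage (e.g. with GPTQ)
# before propagating its outputs.

def make_stage(scale):
    # stand-in for a model layer; here just a scalar multiply
    return lambda x: x * scale

stages = [make_stage(2), make_stage(3), make_stage(5)]
batch_inputs = [1.0, 2.0]

intermediates = list(batch_inputs)
for stage in stages:
    # (calibration of `stage` would happen here, using `intermediates`)
    intermediates = [stage(x) for x in intermediates]

print(intermediates)  # each input multiplied by 2 * 3 * 5
```

The point of this ordering is memory: caching only one stage's intermediates avoids holding the whole model's activations at once.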
5,319 changes: 5,319 additions & 0 deletions: src/llmcompressor/transformers/tracing/assets/Llama_3.2-Vision.svg
@@ -0,0 +1,136 @@
import argparse
from typing import Dict, List, Optional, Type, Union

import torch
import transformers
from transformers import AutoProcessor, PreTrainedModel

from llmcompressor.pipelines.sequential.helpers import trace_subgraphs
from llmcompressor.transformers import (
    DataTrainingArguments,
    TextGenerationDataset,
    tracing,
)
from llmcompressor.utils.pytorch.module import get_no_split_params


def parse_args():
    parser = argparse.ArgumentParser(description="Trace a model into subgraphs")
    parser.add_argument("--model_id", type=str, required=True, help="The stub of the model to load")  # noqa: E501
    parser.add_argument("--model_class", type=str, required=True, help="The class name of the model")  # noqa: E501
    parser.add_argument("--sequential_targets", type=str, nargs="*", default=None, metavar="TARGET", help="List of targets for sequential tracing")  # noqa: E501
    parser.add_argument("--ignore", type=str, nargs="*", default=[], metavar="PATTERN", help="List of patterns to ignore during tracing")  # noqa: E501
    parser.add_argument("--modality", type=str, default="text", help="Modality of calibration dataset, defaults to text")  # noqa: E501
    return parser.parse_args()
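For reference, the `nargs="*"` flags above collect zero or more values, which is how `--ignore` accepts several patterns in one invocation. A minimal standalone replica of the parser (mirroring the definitions above rather than importing the script) shows the resulting namespace:

```python
import argparse

# Minimal replica of the parser above, to show how nargs="*" flags parse
parser = argparse.ArgumentParser(description="Trace a model into subgraphs")
parser.add_argument("--model_id", type=str, required=True)
parser.add_argument("--model_class", type=str, required=True)
parser.add_argument("--sequential_targets", type=str, nargs="*", default=None)
parser.add_argument("--ignore", type=str, nargs="*", default=[])
parser.add_argument("--modality", type=str, default="text")

args = parser.parse_args([
    "--model_id", "Qwen/Qwen2-VL-2B-Instruct",
    "--model_class", "Qwen2VLForConditionalGeneration",
    "--ignore", "lm_head", "re:visual.*",
])
print(args.ignore)               # both patterns land in a single list
print(args.sequential_targets)   # omitted flag keeps its default (None)
```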


def trace(
    model_id: str,
    model_class: Type[PreTrainedModel],
    sequential_targets: Optional[Union[List[str], str]] = None,
    ignore: Union[List[str], str] = [],
    modality: str = "text",
):
    """
    Debug traceability by tracing a pre-trained model into subgraphs

    :param model_id: stub of the model to load
    :param model_class: class constructor of the pre-trained model. Can use either
        HF transformers classes or `Traceable` classes defined by LLM Compressor
    :param sequential_targets: targets for sequential tracing, defaults to automatic
        inference
    :param ignore: patterns to ignore during tracing
    :param modality: data modality for dummy tracing data, defaults to 'text'

    Example usage from CLI
    llmcompressor.trace \
        --model_id Qwen/Qwen2-VL-2B-Instruct \
        --model_class Qwen2VLForConditionalGeneration \
        --sequential_targets Qwen2VLDecoderLayer \
        --ignore "lm_head" "re:visual.*" \
        --modality text
    """
    # Load model
    model = model_class.from_pretrained(
        model_id,
        device_map="auto",
        torch_dtype="auto",
    )
    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
    print("Loaded model")

    # Prepare sample data
    data_args = DataTrainingArguments(**get_dataset_kwargs(modality))
    dataset = TextGenerationDataset.load_from_registry(
        data_args.dataset,
        data_args=data_args,
        split=data_args.splits["calibration"],
        processor=processor,
    )(add_labels=False)
    sample_input = next(iter(dataset))
    sample_input = {k: torch.tensor(v) for k, v in sample_input.items()}
    print("Loaded sample data")

    # Infer sequential targets
    if sequential_targets is None:
        sequential_targets = get_no_split_params(model)
    if isinstance(sequential_targets, str):
        sequential_targets = [sequential_targets]

    # Infer ignore
    if isinstance(ignore, str):
        ignore = [ignore]

    # Attempt trace
    print(
        "\nAttempting trace\n"
        f"    model_id={model_id}\n"
        f"    model_class={model_class.__name__}\n"
        f"    dataset={data_args.dataset}\n"
        f"    split={dataset.split}\n"
        f"    inputs={sample_input.keys()}\n"
        f"    sequential_targets={sequential_targets}\n"
        f"    ignore={ignore}\n"
    )
    subgraphs = trace_subgraphs(model, sample_input, sequential_targets, ignore)
    print(f"Successfully traced model into {len(subgraphs)} subgraphs!\n")
def get_model_class(model_class: str) -> Type[PreTrainedModel]:
    model_cls = getattr(tracing, model_class, getattr(transformers, model_class, None))
    if model_cls is None:
        raise ValueError(f"Could not import model class {model_class}")

    return model_cls
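The nested `getattr` above resolves the class name against the `tracing` module first (which holds patched traceable definitions), then falls back to stock `transformers`. The same resolution pattern can be sketched with stand-in namespaces; all names in this snippet are illustrative, not real library classes:

```python
from types import SimpleNamespace

# Stand-ins for the `tracing` and `transformers` modules (illustrative only)
tracing = SimpleNamespace(
    TraceableLlavaForConditionalGeneration=type("TraceableLlava", (), {})
)
transformers = SimpleNamespace(
    LlamaForCausalLM=type("LlamaForCausalLM", (), {})
)

def resolve_class(name):
    # Prefer a patched traceable definition; fall back to the stock class
    cls = getattr(tracing, name, getattr(transformers, name, None))
    if cls is None:
        raise ValueError(f"Could not import model class {name}")
    return cls

print(resolve_class("LlamaForCausalLM").__name__)  # found via the fallback
```

One subtlety of this pattern: the fallback `getattr(transformers, name, None)` is evaluated eagerly, before the first lookup, which is harmless here but worth knowing if the fallback were expensive.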


def get_dataset_kwargs(modality: str) -> Dict[str, str]:
    dataset_kwargs = {
        "text": {
            "dataset": "ultrachat-200k",
            "splits": {"calibration": "test_sft[:1]"},
        },
        "vision": {
            "dataset": "flickr",
            "splits": {"calibration": "test[:1]"},
        },
    }

    if modality not in dataset_kwargs:
        raise ValueError(f"Modality must be one of {list(dataset_kwargs.keys())}")

    return dataset_kwargs[modality]


def main():
    args = parse_args()

    trace(
        model_id=args.model_id,
        model_class=get_model_class(args.model_class),
        sequential_targets=args.sequential_targets,
        ignore=args.ignore,
        modality=args.modality,
    )


if __name__ == "__main__":
    main()
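The docstring's `llmcompressor.trace` command implies this script's `main()` is registered as a console entry point (the commits "add attempt_trace entrypoint" and "rename attempt_trace to trace" track this). In a setuptools project, that registration would look roughly like the fragment below; the module path and project-file layout here are assumptions, not taken from this PR:

```toml
# Hypothetical entry-point registration in pyproject.toml
# (the module path "llmcompressor.transformers.tracing.debug" is an assumption)
[project.scripts]
"llmcompressor.trace" = "llmcompressor.transformers.tracing.debug:main"
```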