
Commit 87bdc84

Merge branch 'main' into patch-2
2 parents adde868 + 9633e5f commit 87bdc84

File tree: 3 files changed, +25 −15 lines

Diff for: .github/scripts/docathon-label-sync.py (+3)

@@ -14,6 +14,9 @@ def main():
     repo = g.get_repo(f'{repo_owner}/{repo_name}')
     pull_request = repo.get_pull(pull_request_number)
     pull_request_body = pull_request.body
+    # PR without description
+    if pull_request_body is None:
+        return
 
     # get issue number from the PR body
     if not re.search(r'#\d{1,5}', pull_request_body):
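The guard added above matters because a PR created without a description has a `None` body, and passing `None` to `re.search` raises a `TypeError`; returning early avoids the crash. A minimal standalone sketch of the same pattern (the helper name and return convention here are illustrative, not part of the actual script):

```python
import re

def extract_issue_number(pull_request_body):
    """Return the first '#NNNN' issue number referenced in a PR body.

    Mirrors the guarded pattern above: a PR without a description has a
    None body, and re.search would raise TypeError on None, so bail out
    before matching.
    """
    # PR without description
    if pull_request_body is None:
        return None
    match = re.search(r'#(\d{1,5})', pull_request_body)
    return int(match.group(1)) if match else None
```

For example, `extract_issue_number(None)` returns `None` instead of raising, while `extract_issue_number("Fixes #2350")` returns `2350`.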
@@ -0,0 +1,10 @@
+Finetuning Torchvision Models
+=============================
+
+This tutorial has been moved to https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html
+
+It will redirect in 3 seconds.
+
+.. raw:: html
+
+   <meta http-equiv="Refresh" content="3; url='https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html'" />
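The new file is a small reStructuredText stub whose only job is a client-side redirect: a title, a note pointing at the new URL, and a raw-HTML meta refresh. A hedged sketch of a generator for such stubs (`make_redirect_stub` is a hypothetical helper, not part of this commit):

```python
def make_redirect_stub(title, url, delay=3):
    """Build an rST redirect stub shaped like the one added in this
    commit: title, underline, moved-notice, and a raw-HTML meta refresh
    that redirects after `delay` seconds."""
    underline = "=" * len(title)
    return (
        f"{title}\n{underline}\n\n"
        f"This tutorial has been moved to {url}\n\n"
        f"It will redirect in {delay} seconds.\n\n"
        f".. raw:: html\n\n"
        f"   <meta http-equiv=\"Refresh\" content=\"{delay}; url='{url}'\" />\n"
    )
```

The underline is sized to the title, as rST requires, and the meta tag is emitted under a `.. raw:: html` directive so docutils passes it through verbatim.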

Diff for: prototype_source/fx_graph_mode_quant_guide.rst (+12, −15)
@@ -4,7 +4,7 @@
 **Author**: `Jerry Zhang <https://github.com/jerryzh168>`_
 
 FX Graph Mode Quantization requires a symbolically traceable model.
-We use the FX framework (TODO: link) to convert a symbolically traceable nn.Module instance to IR,
+We use the FX framework to convert a symbolically traceable nn.Module instance to IR,
 and we operate on the IR to execute the quantization passes.
 Please post your question about symbolically tracing your model in `PyTorch Discussion Forum <https://discuss.pytorch.org/c/quantization/17>`_
 

@@ -22,16 +22,19 @@ You can use any combination of these options:
 b. Write your own observed and quantized submodule
 
 
-####################################################################
 If the code that is not symbolically traceable does not need to be quantized, we have the following two options
 to run FX Graph Mode Quantization:
-1.a. Symbolically trace only the code that needs to be quantized
+
+
+Symbolically trace only the code that needs to be quantized
 -----------------------------------------------------------------
 When the whole model is not symbolically traceable but the submodule we want to quantize is
 symbolically traceable, we can run quantization only on that submodule.
+
 before:
 
 .. code:: python
+
     class M(nn.Module):
         def forward(self, x):
             x = non_traceable_code_1(x)
@@ -42,6 +45,7 @@ before:
 after:
 
 .. code:: python
+
     class FP32Traceable(nn.Module):
         def forward(self, x):
             x = traceable_code(x)
@@ -69,8 +73,7 @@ Note if original model needs to be preserved, you will have to
 copy it yourself before calling the quantization APIs.
 
 
-#####################################################
-1.b. Skip symbolically trace the non-traceable code
+Skip symbolically trace the non-traceable code
 ---------------------------------------------------
 When we have some non-traceable code in the module, and this part of code doesn’t need to be quantized,
 we can factor out this part of the code into a submodule and skip symbolically trace that submodule.
@@ -134,8 +137,7 @@ quantization code:
 
 If the code that is not symbolically traceable needs to be quantized, we have the following two options:
 
-##########################################################
-2.a Refactor your code to make it symbolically traceable
+Refactor your code to make it symbolically traceable
 --------------------------------------------------------
 If it is easy to refactor the code and make the code symbolically traceable,
 we can refactor the code and remove the use of non-traceable constructs in python.
@@ -167,15 +169,10 @@ after:
         return x.permute(0, 2, 1, 3)
 
 
-quantization code:
-
 This can be combined with other approaches and the quantization code
 depends on the model.
 
-
-
-#######################################################
-2.b. Write your own observed and quantized submodule
+Write your own observed and quantized submodule
 -----------------------------------------------------
 
 If the non-traceable code can’t be refactored to be symbolically traceable,
@@ -207,8 +204,8 @@ non-traceable logic, wrapped in a module
     class FP32NonTraceable:
         ...
 
-
-2. Define observed version of FP32NonTraceable
+2. Define observed version of
+FP32NonTraceable
 
 .. code:: python
 
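The guide's first option, "Symbolically trace only the code that needs to be quantized," factors the traceable code into its own submodule so that only that part is traced and quantized. A torch-free structural sketch of that refactor, matching the guide's before/after snippets (plain Python stand-ins replace `nn.Module`; the real workflow would hand the submodule to the `torch.fx` / FX quantization APIs):

```python
# Structural sketch: factor the traceable code into its own submodule.
# The helper functions below are stand-ins for the guide's placeholders.

def non_traceable_code_1(x):   # e.g. data-dependent Python control flow
    return x + 1

def traceable_code(x):         # the part we actually want to quantize
    return x * 2

def non_traceable_code_2(x):
    return x - 1

class FP32Traceable:
    """Traceable part, factored out as in the guide's 'after' snippet."""
    def forward(self, x):
        return traceable_code(x)

class M:
    def __init__(self):
        # only this submodule would be symbolically traced and quantized
        self.traceable_submodule = FP32Traceable()

    def forward(self, x):
        x = non_traceable_code_1(x)
        x = self.traceable_submodule.forward(x)
        x = non_traceable_code_2(x)
        return x
```

The behavior of `M.forward` is unchanged by the refactor; the point is that `traceable_submodule` is now an isolated unit that quantization can target while the non-traceable code stays outside the traced graph.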
