
Commit 7584f2f

Authored by Ashish-Soni08, with svekars and sekyondaMeta

Add download instructions for pretrained model in dynamic quantization tutorial (#3379)

Fixes #3254. Adds a wget command to download word_language_model_quantize.pth, improving readability.

Co-authored-by: Svetlana Karslioglu <[email protected]>
Co-authored-by: sekyondaMeta <[email protected]>

1 parent 0353137 · commit 7584f2f

File tree

1 file changed: +12 −4 lines

advanced_source/dynamic_quantization_tutorial.py (12 additions, 4 deletions)

@@ -134,10 +134,18 @@ def tokenize(self, path):
 # -----------------------------
 #
 # This is a tutorial on dynamic quantization, a quantization technique
-# that is applied after a model has been trained. Therefore, we'll simply load some
-# pretrained weights into this model architecture; these weights were obtained
-# by training for five epochs using the default settings in the word language model
-# example.
+# that is applied after a model has been trained. Therefore, we'll simply
+# load some pretrained weights into this model architecture; these
+# weights were obtained by training for five epochs using the default
+# settings in the word language model example.
+#
+# Before running this tutorial, download the required pre-trained model:
+#
+# .. code-block:: bash
+#
+#    wget https://s3.amazonaws.com/pytorch-tutorial-assets/word_language_model_quantize.pth
+#
+# Place the downloaded file in the data directory or update the model_data_filepath accordingly.

 ntokens = len(corpus.dictionary)
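The wget step in the diff can also be done from Python, which keeps the tutorial self-contained for readers without wget installed. A minimal sketch using only the standard library; the `ensure_checkpoint` helper name and the `data/` default directory are illustrative assumptions, not part of the committed tutorial:

```python
import os
import urllib.request

URL = "https://s3.amazonaws.com/pytorch-tutorial-assets/word_language_model_quantize.pth"

def ensure_checkpoint(model_data_filepath="data/"):
    """Download the pretrained checkpoint if not already present.

    Returns the local path to the .pth file, mirroring where the
    tutorial's model_data_filepath expects it.
    """
    dest = os.path.join(model_data_filepath, os.path.basename(URL))
    if not os.path.exists(dest):
        os.makedirs(model_data_filepath, exist_ok=True)
        # Network access required on first run; skipped thereafter.
        urllib.request.urlretrieve(URL, dest)
    return dest
```

The path returned here can then be passed to `torch.load` exactly as the tutorial does for its `model_data_filepath`.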

0 commit comments
