Commit 8521f02

Update quick start codes in README.md (#10431)
Signed-off-by: Zhang Jun <[email protected]>
1 parent: bc06b8a

1 file changed: README.md (+8 −7 lines)
````diff
@@ -206,13 +206,14 @@ pip install --pre --upgrade paddlenlp -f https://www.paddlepaddle.org.cn/whl/pad
 PaddleNLP provides an easy-to-use Auto API for quickly loading models and tokenizers. The following example uses the `Qwen/Qwen2-0.5B` model for text generation:
 
 ```python
->>> from paddlenlp.transformers import AutoTokenizer, AutoModelForCausalLM
->>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
->>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float16")
->>> input_features = tokenizer("你好!请自我介绍一下。", return_tensors="pd")
->>> outputs = model.generate(**input_features, max_length=128)
->>> print(tokenizer.batch_decode(outputs[0], skip_special_tokens=True))
-['我是一个AI语言模型,我可以回答各种问题,包括但不限于:天气、新闻、历史、文化、科学、教育、娱乐等。请问您有什么需要了解的吗?']
+from paddlenlp.transformers import AutoTokenizer, AutoModelForCausalLM
+tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
+# if using CPU, please change float16 to float32
+model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float16")
+input_features = tokenizer("你好!请自我介绍一下。", return_tensors="pd")
+outputs = model.generate(**input_features, max_new_tokens=128)
+print(tokenizer.batch_decode(outputs[0], skip_special_tokens=True))
+# ['我是一个AI语言模型,我可以回答各种问题,包括但不限于:天气、新闻、历史、文化、科学、教育、娱乐等。请问您有什么需要了解的吗?']
 ```
 
 ### Large Model Pre-training
````
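
For reference, a minimal sketch of running the updated snippet on CPU, as the newly added comment suggests (switching `float16` to `float32`). The `paddle.set_device("cpu")` call and the assumption that the `Qwen/Qwen2-0.5B` weights can be downloaded automatically are additions for illustration, not part of this commit:

```python
# Minimal CPU variant of the updated quick start snippet.
# Assumes paddlepaddle and paddlenlp are installed and the
# Qwen/Qwen2-0.5B weights are available for download.
import paddle
from paddlenlp.transformers import AutoTokenizer, AutoModelForCausalLM

paddle.set_device("cpu")  # force CPU execution for this example (assumption)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
# float32 instead of float16, per the comment added in this commit
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float32")

input_features = tokenizer("你好!请自我介绍一下。", return_tensors="pd")
# max_new_tokens caps how many tokens are generated
# (the argument this commit switches to, replacing max_length)
outputs = model.generate(**input_features, max_new_tokens=128)
print(tokenizer.batch_decode(outputs[0], skip_special_tokens=True))
```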
