1 file changed: +8 −7 lines changed

@@ -206,13 +206,14 @@ pip install --pre --upgrade paddlenlp -f https://www.paddlepaddle.org.cn/whl/paddlenlp.html
 PaddleNLP provides an easy-to-use Auto API for quickly loading models and tokenizers. The following example uses the `Qwen/Qwen2-0.5B` model for text generation:
 
 ```python
->>> from paddlenlp.transformers import AutoTokenizer, AutoModelForCausalLM
->>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
->>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float16")
->>> input_features = tokenizer("你好!请自我介绍一下。", return_tensors="pd")
->>> outputs = model.generate(**input_features, max_length=128)
->>> print(tokenizer.batch_decode(outputs[0], skip_special_tokens=True))
-['我是一个AI语言模型,我可以回答各种问题,包括但不限于:天气、新闻、历史、文化、科学、教育、娱乐等。请问您有什么需要了解的吗?']
+from paddlenlp.transformers import AutoTokenizer, AutoModelForCausalLM
+tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
+# if using CPU, please change float16 to float32
+model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float16")
+input_features = tokenizer("你好!请自我介绍一下。", return_tensors="pd")
+outputs = model.generate(**input_features, max_new_tokens=128)
+print(tokenizer.batch_decode(outputs[0], skip_special_tokens=True))
+# ['我是一个AI语言模型,我可以回答各种问题,包括但不限于:天气、新闻、历史、文化、科学、教育、娱乐等。请问您有什么需要了解的吗?']
 ```
 
 ### Large Model Pretraining
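The change from `max_length` to `max_new_tokens` alters what the limit counts: `max_length` conventionally caps the total sequence length (prompt tokens plus generated tokens), while `max_new_tokens` caps only the newly generated tokens, so a long prompt no longer eats into the generation budget. A minimal toy sketch of that distinction (not the PaddleNLP implementation; the loop and `next_token` callback are illustrative stand-ins):

```python
def generate(prompt_ids, next_token, max_length=None, max_new_tokens=None):
    """Toy greedy generation loop showing how the two stopping limits differ."""
    ids = list(prompt_ids)
    new = 0
    while True:
        # max_length counts the prompt tokens too
        if max_length is not None and len(ids) >= max_length:
            break
        # max_new_tokens counts only tokens produced by this loop
        if max_new_tokens is not None and new >= max_new_tokens:
            break
        ids.append(next_token(ids))
        new += 1
    return ids

prompt = [101, 102, 103]          # 3 prompt tokens
step = lambda ids: ids[-1] + 1    # dummy "model": always emits previous id + 1

# max_length=5 leaves room for only 5 - 3 = 2 new tokens
print(len(generate(prompt, step, max_length=5)))        # 5
# max_new_tokens=5 always yields 5 new tokens, regardless of prompt length
print(len(generate(prompt, step, max_new_tokens=5)))    # 8
```

With `max_length=128` and a prompt near 128 tokens the original example could generate almost nothing; `max_new_tokens=128` guarantees the same generation budget for any prompt.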