
Commit 91a9d6e

refine code & docs
1 parent 28a4707 commit 91a9d6e

3 files changed (+54 -69 lines)


deploy/python_serving/pipeline_rpc_client.py

+1 -58
@@ -18,65 +18,8 @@
 from paddle_serving_server.pipeline import PipelineClient
 
 import argparse
-import base64
-import os
-import os.path as osp
 
-import cv2
-import numpy as np
-
-
-def numpy_to_base64(array: np.ndarray) -> str:
-    """numpy_to_base64
-
-    Args:
-        array (np.ndarray): input ndarray.
-
-    Returns:
-        bytes object: encoded str.
-    """
-    return base64.b64encode(array).decode('utf8')
-
-
-def video_to_numpy(file_path: str) -> np.ndarray:
-    """decode video with cv2 and return stacked frames
-    as numpy.
-
-    Args:
-        file_path (str): video file path.
-
-    Returns:
-        np.ndarray: [T,H,W,C] in uint8.
-    """
-    cap = cv2.VideoCapture(file_path)
-    videolen = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
-    decoded_frames = []
-    for i in range(videolen):
-        ret, frame = cap.read()
-        # maybe first frame is empty
-        if ret is False:
-            continue
-        img = frame[:, :, ::-1]
-        decoded_frames.append(img)
-    decoded_frames = np.stack(decoded_frames, axis=0)
-    return decoded_frames
-
-
-def parse_file_paths(input_path: str) -> list:
-    assert osp.exists(input_path), \
-        f"{input_path} did not exists!"
-    if osp.isfile(input_path):
-        files = [
-            input_path,
-        ]
-    else:
-        files = os.listdir(input_path)
-        files = [
-            file for file in files
-            if (file.endswith(".avi") or file.endswith(".mp4"))
-        ]
-        files = [osp.join(input_path, file) for file in files]
-    return files
+from utils import numpy_to_base64, parse_file_paths, video_to_numpy
 
 
 def parse_args():
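
The `utils.py` module the new import targets is not included in this diff. A minimal sketch of what it presumably contains, reconstructed from the helpers deleted above (same names and signatures, with the logic lightly tidied):

```python
import base64
import os
import os.path as osp

import cv2
import numpy as np


def numpy_to_base64(array: np.ndarray) -> str:
    """Encode an ndarray's raw bytes as a base64 string."""
    return base64.b64encode(array).decode('utf8')


def video_to_numpy(file_path: str) -> np.ndarray:
    """Decode a video with cv2 and return the frames stacked as a
    [T, H, W, C] uint8 ndarray, converted from BGR to RGB."""
    cap = cv2.VideoCapture(file_path)
    videolen = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    decoded_frames = []
    for _ in range(videolen):
        ret, frame = cap.read()
        # some frames (often the first) may fail to decode
        if not ret:
            continue
        decoded_frames.append(frame[:, :, ::-1])
    return np.stack(decoded_frames, axis=0)


def parse_file_paths(input_path: str) -> list:
    """Return [input_path] for a single file, or every .avi/.mp4
    file inside a directory."""
    assert osp.exists(input_path), f"{input_path} does not exist!"
    if osp.isfile(input_path):
        return [input_path]
    files = [f for f in os.listdir(input_path) if f.endswith((".avi", ".mp4"))]
    return [osp.join(input_path, f) for f in files]
```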

deploy/python_serving/readme.md

+5 -4
@@ -123,18 +123,19 @@ fetch_var {
 
 ```
 ### Service deployment and requests
-The paddleserving directory contains the code for starting the pipeline service, the C++ serving service (TODO), and sending prediction requests, including:
+The `python_serving` directory contains the code for starting the pipeline service, the C++ serving service (TODO), and sending prediction requests, including:
 ```bash
 __init__.py
 configs/xxx.yaml # configuration file for starting the pipeline service
 pipeline_http_client.py # python script for sending pipeline prediction requests via http
 pipeline_rpc_client.py # python script for sending pipeline prediction requests via rpc
 recognition_web_service.py # python script that starts the pipeline server
+utils.py # common functions used during inference, such as parse_file_paths, numpy_to_base64, video_to_numpy
 ```
 #### Python Serving
 - Go to the working directory:
 ```bash
-cd deploy/paddleserving
+cd deploy/python_serving
 ```
 
 - Start the service:
@@ -148,10 +149,10 @@ python3.7 recognition_web_service.py -n PPTSM -c configs/PP-TSM.yaml &>log.txt &
 - Send requests:
 ```bash
 # send a prediction request via http and receive the result
-python3.7 pipeline_http_client.py
+python3.7 pipeline_http_client.py -i ../../data/example.avi
 
 # send a prediction request via rpc and receive the result
-python3.7 pipeline_rpc_client.py
+python3.7 pipeline_rpc_client.py -i ../../data/example.avi
 ```
 After a successful run, the model's prediction results are printed in the cmd window, as follows:
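
As context for the request commands above, the core of the http client can be sketched as follows. The pipeline web service exposes `http://<host>:<http_port>/<name>/prediction`; the port `9999` and service name `video` below are placeholder assumptions that in practice come from the yaml config, and the `-i` argument parsing is omitted:

```python
import json

import requests

from utils import numpy_to_base64, video_to_numpy

# Placeholder endpoint: <http_port> and <name> come from configs/PP-TSM.yaml.
url = "http://127.0.0.1:9999/video/prediction"

# Decode the example video and encode the frame array as base64.
frames = video_to_numpy("../../data/example.avi")
data = {"key": ["video"], "value": [numpy_to_base64(frames)]}

# The pipeline server answers with err_no/err_msg plus key/value lists.
response = requests.post(url=url, data=json.dumps(data))
print(response.json())
```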

deploy/python_serving/readme_en.md

+48 -7
@@ -123,22 +123,63 @@ fetch_var {
 
 ```
 ### Service deployment and requests
-The paddleserving directory contains the code for starting the pipeline service, C++ serving service (TODO) and sending prediction requests, including:
+The `python_serving` directory contains the code for starting the pipeline service, the C++ serving service (TODO), and sending prediction requests, including:
 ```bash
 __init__.py
-configs/xxx.yaml # start the configuration file of the pipeline service
-pipeline_http_client.py # python script for sending pipeline prediction request via http
-pipeline_rpc_client.py # python script for sending pipeline prediction request in rpc mode
-recognition_web_service.py # python script that starts the pipeline server
+configs/xxx.yaml             # configuration file for starting the pipeline service
+pipeline_http_client.py      # python script for sending pipeline prediction requests via http
+pipeline_rpc_client.py       # python script for sending pipeline prediction requests via rpc
+recognition_web_service.py   # python script that starts the pipeline server
+utils.py                     # common functions used during inference, such as parse_file_paths, numpy_to_base64, video_to_numpy
 ```
 #### Python Serving
 - Go to the working directory:
 ```bash
-cd deploy/paddleserving
+cd deploy/python_serving
 ```
 
 - Start the service:
 ```bash
 # Start in the current command line window and stay in the foreground
 python3.7 recognition_web_service.py -n PPTSM -c configs/PP-TSM.yaml
-# Start in the background, the logs printed during the process will be redirected and saved to lo
+# Start in the background; logs printed during the run are redirected and saved to log.txt
+python3.7 recognition_web_service.py -n PPTSM -c configs/PP-TSM.yaml &>log.txt &
+```
+
+- Send requests:
+```bash
+# Send a prediction request via http and receive the result
+python3.7 pipeline_http_client.py -i ../../data/example.avi
+
+# Send a prediction request via rpc and receive the result
+python3.7 pipeline_rpc_client.py -i ../../data/example.avi
+```
+After a successful run, the model's prediction results are printed in the cmd window, as follows:
+
+```bash
+# result printed in http mode
+{'err_no': 0, 'err_msg': '', 'key': ['label', 'prob'], 'value': ["['archery']", '[0.9907388687133789]'], 'tensors ': []}
+
+# result printed in rpc mode
+PipelineClient::predict pack_data time:1645631086.764019
+PipelineClient::predict before time:1645631086.8485317
+key: "label"
+key: "prob"
+value: "[\'archery\']"
+value: "[0.9907388687133789]"
+```
+
+## FAQ
+**Q1**: No result is returned after a request is sent, or an output decoding error is reported.
+
+**A1**: Do not set a proxy when starting the service or sending requests. Unset the proxy first:
+```
+unset https_proxy
+unset http_proxy
+```
+
+**Q2**: The server does not respond after startup and stays stuck at `start proxy service`.
+
+**A2**: A problem was most likely encountered during startup; check the detailed error message in the `./deploy/python_serving/PipelineServingLogs/pipeline.log` log file.
+
+For more service deployment types, such as the `RPC prediction service`, refer to Serving's [official examples on GitHub](https://github.com/PaddlePaddle/Serving/tree/v0.7.0/examples).
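
For comparison with the http path, the rpc client presumably reduces to `PipelineClient` usage along these lines; the endpoint port `9993` and the `video` feed key are assumptions that depend on the pipeline yaml, not values shown in this diff:

```python
from paddle_serving_server.pipeline import PipelineClient

from utils import numpy_to_base64, video_to_numpy

client = PipelineClient()
# Placeholder rpc endpoint; the real rpc_port comes from configs/PP-TSM.yaml.
client.connect(['127.0.0.1:9993'])

# Send the base64-encoded frames and fetch the recognition result.
frames = video_to_numpy("../../data/example.avi")
ret = client.predict(feed_dict={"video": numpy_to_base64(frames)},
                     fetch=["label", "prob"])
print(ret)  # prints key/value pairs like the rpc output shown above
```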
