
Commit 8645221

[Refactor] Refactor Deploy Code (#2881)
* Refactor deploy code
* Update RTFormer and FCN_UHRNetW18_Small TIPC configs
* Add 'add_rule' interface
* Polish README
* Refactor infer API
1 parent: fb37c0c · commit: 8645221

20 files changed: +247 −270 lines

README_EN.md

+20 −20
@@ -6,7 +6,7 @@ English | [简体中文](README_CN.md)
 <img src="./docs/images/paddleseg_logo.png" align="middle" width = "500" />
 </p>
 
-**A High-Efficient Development Toolkit for Image Segmentation based on [PaddlePaddle](https://github.com/paddlepaddle/paddle).**
+**A High-Efficient Development Toolkit for Image Segmentation Based on [PaddlePaddle](https://github.com/paddlepaddle/paddle).**
 
 [![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)
 [![Version](https://img.shields.io/github/release/PaddlePaddle/PaddleSeg.svg)](https://github.com/PaddlePaddle/PaddleSeg/releases)
@@ -23,22 +23,22 @@ English | [简体中文](README_CN.md)
 
 ## <img src="./docs/images/seg_news_icon.png" width="20"/> News
 <ul class="nobull">
-<li>[2022-11-30] :fire: PaddleSeg v2.7 is released! More details in <a href="https://github.com/PaddlePaddle/PaddleSeg/releases">Release Notes</a>.</li>
+<li>[2022-11-30] :fire: PaddleSeg v2.7 is released! Check more details in <a href="https://github.com/PaddlePaddle/PaddleSeg/releases">Release Notes</a>.</li>
 <ul>
 <li>Release <a href="./Matting/">PP-MattingV2</a>, a real-time human matting model with SOTA performance. Compared to previous models, the mean error is reduced by 17.91%, the inference speed is improved by 44.6% on GPU. </li>
-<li>Release <a href="./contrib/MedicalSeg/">MedicalSegV2</a>, a superior 3D medical image segmentation solution, including an intelligent annotation toolkit called EISeg-Med3D, several state-of-the-art models and an optimized nnUNet-D with high performance.</li>
-<li>Release <a href="./configs/rtformer/">RTFormer</a>, a real-time semantic segmentation model accepted by NeurIPS 2022. RTFormer combines the advantages of CNN and Transformer modules, and it achieves SOTA trade-off between performance and efficiency on several datasets.</li>
+<li>Release <a href="./contrib/MedicalSeg/">MedicalSegV2</a>, a superior 3D medical image segmentation solution, including an intelligent annotation toolkit called EISeg-Med3D, several state-of-the-art models, and an optimized nnUNet-D with high performance.</li>
+<li>Release <a href="./configs/rtformer/">RTFormer</a>, a real-time semantic segmentation model (the paper has been accepted by NeurIPS 2022). RTFormer combines the advantages of CNN and Transformer modules, and it achieves SOTA trade-off between performance and efficiency on several datasets.</li>
 </ul>
-<li>[2022-07-20] PaddleSeg v2.6 released a real-time human segmentation SOTA solution <a href="./contrib/PP-HumanSeg">PP-HumanSegV2</a>, a stable-version semi-automatic segmentation annotation <a href="./EISeg">EISeg v1.0</a>, a pseudo label pre-training method PSSL and the source code of PP-MattingV1. </li>
+<li>[2022-07-20] PaddleSeg v2.6 released a real-time human segmentation SOTA solution <a href="./contrib/PP-HumanSeg">PP-HumanSegV2</a>, a stable-version semi-automatic segmentation annotation tool <a href="./EISeg">EISeg v1.0</a>, a pseudo label pre-training method PSSL, and the source code of PP-MattingV1. </li>
 <li>[2022-04-20] PaddleSeg v2.5 released a real-time semantic segmentation model <a href="./configs/pp_liteseg">PP-LiteSeg</a>, a trimap-free image matting model PP-MattingV1, and an easy-to-use solution for 3D medical image segmentation MedicalSegV1.</li>
-<li>[2022-01-20] We release PaddleSeg v2.4 with EISeg v0.4, and PP-HumanSegV1 including open-sourced dataset <a href="./contrib/PP-HumanSeg/paper.md#pp-humanseg14k-a-large-scale-teleconferencing-video-dataset">PP-HumanSeg14K</a>. </li>
+<li>[2022-01-20] We release PaddleSeg v2.4 with EISeg v0.4, and PP-HumanSegV1 including an open-sourced dataset <a href="./contrib/PP-HumanSeg/paper.md#pp-humanseg14k-a-large-scale-teleconferencing-video-dataset">PP-HumanSeg14K</a>. </li>
 
 </ul>
 
 
 ## <img src="https://user-images.githubusercontent.com/48054808/157795569-9fc77c85-732f-4870-9be0-99a7fe2cff27.png" width="20"/> Introduction
 
-PaddleSeg is an end-to-end high-efficent development toolkit for image segmentation based on PaddlePaddle, which helps both developers and researchers in the whole process of designing segmentation models, training models, optimizing performance and inference speed, and deploying models. A lot of well-trained models and various real-world applications in both industry and academia help users conveniently build hands-on experiences in image segmentation.
+PaddleSeg is an end-to-end high-efficent development toolkit for image segmentation based on PaddlePaddle, which helps both developers and researchers in the whole process of designing segmentation models, training models, optimizing performance and inference speed, and deploying models. A lot of well-trained models and various real-world applications in both industry and academia help users conveniently build hands-on experiences in image segmentation.
 
 <div align="center">
 <img src="https://github.com/shiyutang/files/raw/main/teasor_new.gif" width = "800" />
@@ -47,22 +47,22 @@ PaddleSeg is an end-to-end high-efficent development toolkit for image segmentat
 
 ## <img src="./docs/images/feature.png" width="20"/> Features
 
-* **High-Performance Model**: Following the state of the art segmentation methods and use the high-performance backbone, we provide 40+ models and 140+ high-quality pre-training models, which are better than other open-source implementations.
+* **High-Performance Model**: Following the state of the art segmentation methods and using high-performance backbone networks, we provide 40+ models and 140+ high-quality pre-training models, which are better than other open-source implementations.
 
-* **High Efficiency**: PaddleSeg provides multi-process asynchronous I/O, multi-card parallel training, evaluation, and other acceleration strategies, combined with the memory optimization function of the PaddlePaddle, which can greatly reduce the training overhead of the segmentation model, all this allowing developers to lower cost and more efficiently train image segmentation model.
+* **High Efficiency**: PaddleSeg provides multi-process asynchronous I/O, multi-card parallel training, evaluation, and other acceleration strategies, combined with the memory optimization function of the PaddlePaddle, which can greatly reduce the training overhead of the segmentation model, all these allowing developers to train image segmentation models more efficiently and at a lower cost.
 
-* **Modular Design**: We desigin PaddleSeg with the modular design philosophy. Therefore, based on actual application scenarios, developers can assemble diversified training configurations with *data enhancement strategies*, *segmentation models*, *backbone networks*, *loss functions* and other different components to meet different performance and accuracy requirements.
+* **Modular Design**: We build PaddleSeg with the modular design philosophy. Therefore, based on actual application scenarios, developers can assemble diversified training configurations with *data augmentation strategies*, *segmentation models*, *backbone networks*, *loss functions*, and other different components to meet different performance and accuracy requirements.
 
-* **Complete Flow**: PaddleSeg support image labeling, model designing, model training, model compression and model deployment. With the help of PaddleSeg, developers can easily finish all taskes.
+* **Complete Flow**: PaddleSeg supports image labeling, model designing, model training, model compression, and model deployment. With the help of PaddleSeg, developers can easily finish all tasks in the entire workflow.
 
 <div align="center">
 <img src="https://user-images.githubusercontent.com/14087480/176402154-390e5815-1a87-41be-9374-9139c632eb66.png" width = "800" />
 </div>
 
 ## <img src="./docs/images/chat.png" width="20"/> Community
 
-* If you have any questions, suggestions and feature requests, please create an issues in [GitHub Issues](https://github.com/PaddlePaddle/PaddleSeg/issues).
-* Welcome to scan the following QR code and join paddleseg wechat group to communicate with us.
+* If you have any questions, suggestions or feature requests, please do not hesitate to create an issue in [GitHub Issues](https://github.com/PaddlePaddle/PaddleSeg/issues).
+* Please scan the following QR code to join PaddleSeg WeChat group to communicate with us:
 <div align="center">
 <img src="https://user-images.githubusercontent.com/48433081/174770518-e6b5319b-336f-45d9-9817-da12b1961fb1.jpg" width = "200" />
 </div>
@@ -321,7 +321,7 @@ PaddleSeg is an end-to-end high-efficent development toolkit for image segmentat
 | CCNet | ResNet101_OS8 | 80.95 | 3.24 | [yml](./configs/ccnet/) |
 
 Note that:
-* Test the inference speed on Nvidia GPU V100: use PaddleInference Python API, enable TensorRT, the data type is FP32, the dimension of input is 1x3x1024x2048.
+* We test the inference speed on Nvidia GPU V100. We use PaddleInference Python API with TensorRT enabled. The data type is FP32, and the shape of input tensor is 1x3x1024x2048.
 
 </details>
 
@@ -344,8 +344,8 @@ Note that:
 | SFNet | ResNet18_OS8 | 78.72 | *10.72* | - | [yml](./configs/sfnet/) |
 
 Note that:
-* Test the inference speed on Nvidia GPU V100: use PaddleInference Python API, enable TensorRT, the data type is FP32, the dimension of input is 1x3x1024x2048.
-* Test the inference speed on Snapdragon 855: use PaddleLite CPP API, 1 thread, the dimension of input is 1x3x256x256.
+* We test the inference speed on Nvidia GPU V100. We use PaddleInference Python API with TensorRT enabled. The data type is FP32, and the shape of input tensor is 1x3x1024x2048.
+* We test the inference speed on Snapdragon 855. We use PaddleLite CPP API with 1 thread, and the shape of input tensor is 1x3x256x256.
 
 </details>
 
@@ -364,8 +364,8 @@ Note that:
 | MobileSeg | GhostNet_x1_0 | 71.88 | *35.58* | 38.74 | [yml](./configs/mobileseg/) |
 
 Note that:
-* Test the inference speed on Nvidia GPU V100: use PaddleInference Python API, enable TensorRT, the data type is FP32, the dimension of input is 1x3x1024x2048.
-* Test the inference speed on Snapdragon 855: use PaddleLite CPP API, 1 thread, the dimension of input is 1x3x256x256.
+* We test the inference speed on Nvidia GPU V100. We use PaddleInference Python API with TensorRT enabled. The data type is FP32, and the shape of input tensor is 1x3x1024x2048.
+* We test the inference speed on Snapdragon 855. We use PaddleLite CPP API with 1 thread, and the shape of input tensor is 1x3x256x256.
 
 </details>
 
@@ -394,7 +394,7 @@ Note that:
 * [Export Inference Model](./docs/model_export.md)
 * [Export ONNX Model](./docs/model_export_onnx.md)
 
-* Model Deploy
+* Model Deployment
 * [Paddle Inference (Python)](./docs/deployment/inference/python_inference.md)
 * [Paddle Inference (C++)](./docs/deployment/inference/cpp_inference.md)
 * [Paddle Lite](./docs/deployment/lite/lite.md)
@@ -409,7 +409,7 @@ Note that:
 * Model Compression
 * [Quantization](./docs/deployment/slim/quant/quant.md)
 * [Distillation](./docs/deployment/slim/distill/distill.md)
-* [Prune](./docs/deployment/slim/prune/prune.md)
+* [Pruning](./docs/deployment/slim/prune/prune.md)
 
 * [FAQ](./docs/faq/faq/faq.md)
 
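The reworded speed-test notes above say GPU latency is measured with the Paddle Inference Python API, TensorRT enabled, FP32, and a 1x3x1024x2048 input. As a rough illustration (not part of this commit), a minimal benchmark setup along those lines could look like the sketch below; the model/params paths are assumed placeholders for an exported PaddleSeg inference model.

```python
import numpy as np
from paddle.inference import Config, PrecisionType, create_predictor

# Assumed paths: an exported PaddleSeg inference model provides these two files.
config = Config("output/model.pdmodel", "output/model.pdiparams")
config.enable_use_gpu(500, 0)  # 500 MB initial GPU memory pool, device 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=PrecisionType.Float32,  # FP32, as stated in the notes
    use_static=False,
    use_calib_mode=False)

predictor = create_predictor(config)
input_handle = predictor.get_input_handle(predictor.get_input_names()[0])

# Fixed input shape quoted in the README benchmarks: 1x3x1024x2048.
data = np.random.rand(1, 3, 1024, 2048).astype("float32")
input_handle.reshape(data.shape)
input_handle.copy_from_cpu(data)
predictor.run()

output_handle = predictor.get_output_handle(predictor.get_output_names()[0])
result = output_handle.copy_to_cpu()
```

In an actual benchmark, the timed section would typically wrap `predictor.run()` after a few warm-up iterations.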
deploy/python/collect_dynamic_shape.py

+2 −8
@@ -13,20 +13,14 @@
 # limitations under the License.
 
 import argparse
-import codecs
 import os
-import sys
 
-import yaml
 import numpy as np
-from paddle.inference import create_predictor, PrecisionType
+from paddle.inference import create_predictor
 from paddle.inference import Config as PredictConfig
 
-LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
-sys.path.append(os.path.join(LOCAL_PATH, '..', '..'))
-
 from paddleseg.utils import logger, get_image_list, progbar
-from infer import DeployConfig
+from paddleseg.deploy.infer import DeployConfig
 """
 Load images and run the model, it collects and saves dynamic shapes,
 which are used in deployment with TRT.