
Commit 5433ae8

feat(lite): add FaceParsingBiSeNet model ORT/MNN C++ (#332)
1 parent a6428c2 commit 5433ae8

17 files changed: +130 −34 lines

README.md (+42 −5)
````diff
@@ -263,8 +263,8 @@ static void test_default()
 | [SubPixelCNN](https://github.com/niazwazir/SUB_PIXEL_CNN) | 234K | *resolution* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_subpixel_cnn.cpp) ||| / ||| ✔️ | ✔️ ||
 | [SubPixelCNN](https://github.com/niazwazir/SUB_PIXEL_CNN) | 234K | *resolution* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_subpixel_cnn.cpp) ||| / ||| ✔️ | ✔️ ||
 | [InsectDet](https://github.com/quarrying/quarrying-insect-id) | 27M | *detection* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_insectdet.cpp) ||| / ||| ✔️ | ✔️ ||
-| [InsectID](https://github.com/quarrying/quarrying-insect-id) | 22M | *classification* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_insectid.cpp) |||||| | ✔️ | ✔️ ||
-| [PlantID](https://github.com/quarrying/quarrying-plant-id) | 30M | *classification* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_plantid.cpp) |||||| | ✔️ | ✔️ ||
+| [InsectID](https://github.com/quarrying/quarrying-insect-id) | 22M | *classification* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_insectid.cpp) |||||| ✔️ | ✔️ | ✔️ ||
+| [PlantID](https://github.com/quarrying/quarrying-plant-id) | 30M | *classification* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_plantid.cpp) |||||| ✔️ | ✔️ | ✔️ ||
 | [YOLOv5BlazeFace](https://github.com/deepcam-cn/yolov5-face) | 3.4M | *face::detect* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_yolov5_blazeface.cpp) ||| / | / || ✔️ | ✔️ ||
 | [YoloV5_V_6_1](https://github.com/ultralytics/yolov5/releases/tag/v6.1) | 7.5M | *detection* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_yolov5_v6.1.cpp) ||| / | / || ✔️ | ✔️ ||
 | [HeadSeg](https://github.com/minivision-ai/photo2cartoon) | 31M | *segmentation* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_head_seg.cpp) ||| / ||| ✔️ | ✔️ ||
````
````diff
@@ -277,6 +277,8 @@ static void test_default()
 | [MobileHumanMatting](https://github.com/lizhengwei1992/mobile_phone_human_matting) | 3M | *matting* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_mobile_human_matting.cpp) ||| / | / || ✔️ | ✔️ ||
 | [MobileHairSeg](https://github.com/wonbeomjang/mobile-hair-segmentation-pytorch) | 14M | *segmentation* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_mobile_hair_seg.cpp) ||| / | / || ✔️ | ✔️ ||
 | [YOLOv6](https://github.com/meituan/YOLOv6) | 17M | *detection* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_yolov6.cpp) |||||| ✔️ | ✔️ ||
+| [FaceParsingBiSeNet](https://github.com/zllrunning/face-parsing.PyTorch) | 50M | *segmentation* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_face_parsing_bisenet.cpp) |||||| ✔️ | ✔️ ||
+| [FaceParsingBiSeNetDyn](https://github.com/zllrunning/face-parsing.PyTorch) | 50M | *segmentation* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_face_parsing_bisenet_dyn.cpp) || / | / | / | / | ✔️ | ✔️ ||
 
 
 ## 4. Build Docs.
````
````diff
@@ -880,8 +882,6 @@ static void test_default()
   lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);
   cv::imwrite(save_img_path, img_bgr);
 
-  std::cout << "Default Version Done! Detected Face Num: " << detected_boxes.size() << std::endl;
-
   delete scrfd;
 }
 ```
````
````diff
@@ -973,7 +973,6 @@ static void test_default()
   ssrnet->detect(img_bgr, age);
   lite::utils::draw_age_inplace(img_bgr, age);
   cv::imwrite(save_img_path, img_bgr);
-  std::cout << "Default Version Done! Detected SSRNet Age: " << age.age << std::endl;
 
   delete ssrnet;
 }
````
````diff
@@ -1227,6 +1226,44 @@ More classes for photo style transfer.
 auto *transfer = new lite::cv::style::FemalePhoto2Cartoon(onnx_path);
 ```
 
+****
+
+#### Example13: Face Parsing using [FaceParsing](https://github.com/zllrunning/face-parsing.PyTorch). Download model from Model-Zoo[<sup>2</sup>](#lite.ai.toolkit-2).
+```c++
+#include "lite/lite.h"
+
+static void test_default()
+{
+  std::string onnx_path = "../../../hub/onnx/cv/face_parsing_512x512.onnx";
+  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_parsing.png";
+  std::string save_img_path = "../../../logs/test_lite_face_parsing_bisenet.jpg";
+
+  auto *face_parsing_bisenet = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path, 8); // 8 threads
+
+  lite::types::FaceParsingContent content;
+  cv::Mat img_bgr = cv::imread(test_img_path);
+  face_parsing_bisenet->detect(img_bgr, content);
+
+  if (content.flag && !content.merge.empty())
+    cv::imwrite(save_img_path, content.merge);
+
+  delete face_parsing_bisenet;
+}
+```
+The output is:
+
+<div align='center'>
+  <img src='docs/resources/face_parsing.png' height="180px" width="180px">
+  <img src='docs/resources/face_parsing_merge.jpg' height="180px" width="180px">
+  <img src='docs/resources/face_parsing_1.png' height="180px" width="180px">
+  <img src='docs/resources/face_parsing_1_merge.jpg' height="180px" width="180px">
+</div>
+
+More classes for face parsing (hair, eyes, nose, mouth, others)
+```c++
+auto *segment = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path); // 50Mb
+auto *segment = new lite::cv::segmentation::FaceParsingBiSeNetDyn(onnx_path); // Dynamic Shape Inference.
+```
 
 ## 7. License.
 
````
README.zh.md (+45 −6)
````diff
@@ -266,8 +266,8 @@ static void test_default()
 | [SubPixelCNN](https://github.com/niazwazir/SUB_PIXEL_CNN) | 234K | *resolution* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_subpixel_cnn.cpp) ||| / ||| ✔️ | ✔️ ||
 | [SubPixelCNN](https://github.com/niazwazir/SUB_PIXEL_CNN) | 234K | *resolution* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_subpixel_cnn.cpp) ||| / ||| ✔️ | ✔️ ||
 | [InsectDet](https://github.com/quarrying/quarrying-insect-id) | 27M | *detection* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_insectdet.cpp) ||| / ||| ✔️ | ✔️ ||
-| [InsectID](https://github.com/quarrying/quarrying-insect-id) | 22M | *classification* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_insectid.cpp) |||||| | ✔️ | ✔️ ||
-| [PlantID](https://github.com/quarrying/quarrying-plant-id) | 30M | *classification* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_plantid.cpp) |||||| | ✔️ | ✔️ ||
+| [InsectID](https://github.com/quarrying/quarrying-insect-id) | 22M | *classification* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_insectid.cpp) |||||| ✔️ | ✔️ | ✔️ ||
+| [PlantID](https://github.com/quarrying/quarrying-plant-id) | 30M | *classification* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_plantid.cpp) |||||| ✔️ | ✔️ | ✔️ ||
 | [YOLOv5BlazeFace](https://github.com/deepcam-cn/yolov5-face) | 3.4M | *face::detect* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_yolov5_blazeface.cpp) ||| / | / || ✔️ | ✔️ ||
 | [YoloV5_V_6_1](https://github.com/ultralytics/yolov5/releases/tag/v6.1) | 7.5M | *detection* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_yolov5_v6.1.cpp) ||| / | / || ✔️ | ✔️ ||
 | [HeadSeg](https://github.com/minivision-ai/photo2cartoon) | 31M | *segmentation* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_head_seg.cpp) ||| / ||| ✔️ | ✔️ ||
````
````diff
@@ -280,6 +280,8 @@ static void test_default()
 | [MobileHumanMatting](https://github.com/lizhengwei1992/mobile_phone_human_matting) | 3M | *matting* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_mobile_human_matting.cpp) ||| / | / || ✔️ | ✔️ ||
 | [MobileHairSeg](https://github.com/wonbeomjang/mobile-hair-segmentation-pytorch) | 14M | *segmentation* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_mobile_hair_seg.cpp) ||| / | / || ✔️ | ✔️ ||
 | [YOLOv6](https://github.com/meituan/YOLOv6) | 17M | *detection* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_yolov6.cpp) |||||| ✔️ | ✔️ ||
+| [FaceParsingBiSeNet](https://github.com/zllrunning/face-parsing.PyTorch) | 50M | *segmentation* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_face_parsing_bisenet.cpp) |||||| ✔️ | ✔️ ||
+| [FaceParsingBiSeNetDyn](https://github.com/zllrunning/face-parsing.PyTorch) | 50M | *segmentation* | [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_face_parsing_bisenet_dyn.cpp) || / | / | / | / | ✔️ | ✔️ ||
 
 
 ## 4. Build Docs.
````
````diff
@@ -877,8 +879,6 @@ static void test_default()
   lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);
   cv::imwrite(save_img_path, img_bgr);
 
-  std::cout << "Default Version Done! Detected Face Num: " << detected_boxes.size() << std::endl;
-
   delete scrfd;
 }
 ```
````
````diff
@@ -970,7 +970,6 @@ static void test_default()
   ssrnet->detect(img_bgr, age);
   lite::utils::draw_age_inplace(img_bgr, age);
   cv::imwrite(save_img_path, img_bgr);
-  std::cout << "Default Version Done! Detected SSRNet Age: " << age.age << std::endl;
 
   delete ssrnet;
 }
````
````diff
@@ -1219,12 +1218,52 @@ static void test_default()
 <img src='docs/resources/female_photo2cartoon_cartoon_1_out.jpg' height="180px" width="180px">
 </div>
 
-More photo style transfer models
+More photo style transfer models
 ```c++
 auto *transfer = new lite::cv::style::FemalePhoto2Cartoon(onnx_path);
+```
+
+****
+
+#### Example13: Face parsing with [FaceParsing](https://github.com/zllrunning/face-parsing.PyTorch). Please download the model file from Model-Zoo[<sup>2</sup>](#lite.ai.toolkit-2).
+```c++
+#include "lite/lite.h"
+
+static void test_default()
+{
+  std::string onnx_path = "../../../hub/onnx/cv/face_parsing_512x512.onnx";
+  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_parsing.png";
+  std::string save_img_path = "../../../logs/test_lite_face_parsing_bisenet.jpg";
+
+  auto *face_parsing_bisenet = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path, 8); // 8 threads
+
+  lite::types::FaceParsingContent content;
+  cv::Mat img_bgr = cv::imread(test_img_path);
+  face_parsing_bisenet->detect(img_bgr, content);
+
+  if (content.flag && !content.merge.empty())
+    cv::imwrite(save_img_path, content.merge);
+
+  delete face_parsing_bisenet;
+}
+```
+The output is:
+
+<div align='center'>
+  <img src='docs/resources/face_parsing.png' height="180px" width="180px">
+  <img src='docs/resources/face_parsing_merge.jpg' height="180px" width="180px">
+  <img src='docs/resources/face_parsing_1.png' height="180px" width="180px">
+  <img src='docs/resources/face_parsing_1_merge.jpg' height="180px" width="180px">
+</div>
+
+More face parsing models (hair, eyes, nose, mouth, others):
+```c++
+auto *segment = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path); // 50Mb
+auto *segment = new lite::cv::segmentation::FaceParsingBiSeNetDyn(onnx_path); // Dynamic Shape Inference.
 ```
 
 
+
 ## 7. License.
 
 <div id="lite.ai.toolkit-License"></div>
````

docs/hub/lite.ai.toolkit.hub.mnn.md (+2)

````diff
@@ -290,6 +290,8 @@ You can download all the pretrained models files of MNN format from ([Baidu Driv
 | *lite::mnn::cv::segmentation::HairSeg* | hairseg_224x224.mnn | [mobile-semantic-seg](https://github.com/akirasosa/mobile-semantic-segmentation) | 18M |
 | *lite::mnn::cv::segmentation::MobileHairSeg* | mobile_hair_seg_hairmattenetv1_224x224.mnn | [mobile-hair...](https://github.com/wonbeomjang/mobile-hair-segmentation-pytorch) | 14M |
 | *lite::mnn::cv::segmentation::MobileHairSeg* | mobile_hair_seg_hairmattenetv2_224x224.mnn | [mobile-hair...](https://github.com/wonbeomjang/mobile-hair-segmentation-pytorch) | 14M |
+| *lite::mnn::cv::segmentation::FaceParsingBiSeNet* | face_parsing_512x512.mnn | [face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch) | 50M |
+| *lite::mnn::cv::segmentation::FaceParsingBiSeNet* | face_parsing_1024x1024.mnn | [face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch) | 50M |
 
 
 ## Style Transfer.
````

docs/hub/lite.ai.toolkit.hub.ncnn.md (+6 −4)

````diff
@@ -230,10 +230,12 @@ You can download all the pretrained models files of NCNN format from ([Baidu Dri
 <div id="lite.ai.toolkit.hub.ncnn-segmentation"></div>
 
 
-| Class | Pretrained NCNN Files | Rename or Converted From (Repo) | Size |
-|:--------------------------------------------------:|:--------------------------------------:|:------------------------------------------------:|:-----:|
-| *lite::ncnn::cv::segmentation::DeepLabV3ResNet101* | deeplabv3_resnet101_coco.opt.param&bin | [torchvision](https://github.com/pytorch/vision) | 232Mb |
-| *lite::ncnn::cv::segmentation::FCNResNet101* | fcn_resnet101.opt.param&bin | [torchvision](https://github.com/pytorch/vision) | 207Mb |
+| Class | Pretrained NCNN Files | Rename or Converted From (Repo) | Size |
+|:--------------------------------------------------:|:--------------------------------------:|:--------------------------------------------------------------------------:|:-----:|
+| *lite::ncnn::cv::segmentation::DeepLabV3ResNet101* | deeplabv3_resnet101_coco.opt.param&bin | [torchvision](https://github.com/pytorch/vision) | 232Mb |
+| *lite::ncnn::cv::segmentation::FCNResNet101* | fcn_resnet101.opt.param&bin | [torchvision](https://github.com/pytorch/vision) | 207Mb |
+| *lite::ncnn::cv::segmentation::FaceParsingBiSeNet* | face_parsing_512x512.opt.param&bin | [face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch) | 50M |
+| *lite::ncnn::cv::segmentation::FaceParsingBiSeNet* | face_parsing_1024x1024.opt.param&bin | [face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch) | 50M |
 
 
 ## Style Transfer.
````

docs/hub/lite.ai.toolkit.hub.onnx.md (+3)

````diff
@@ -333,6 +333,9 @@ You can download all the pretrained models files of ONNX format from ([Baidu Dri
 | *lite::cv::segmentation::HairSeg* | hairseg_224x224.onnx | [mobile-semantic-seg](https://github.com/akirasosa/mobile-semantic-segmentation) | 18M |
 | *lite::cv::segmentation::MobileHairSeg* | mobile_hair_seg_hairmattenetv1_224x224.onnx | [mobile-hair...](https://github.com/wonbeomjang/mobile-hair-segmentation-pytorch) | 14M |
 | *lite::cv::segmentation::MobileHairSeg* | mobile_hair_seg_hairmattenetv2_224x224.onnx | [mobile-hair...](https://github.com/wonbeomjang/mobile-hair-segmentation-pytorch) | 14M |
+| *lite::cv::segmentation::FaceParsingBiSeNet* | face_parsing_512x512.onnx | [face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch) | 50M |
+| *lite::cv::segmentation::FaceParsingBiSeNet* | face_parsing_1024x1024.onnx | [face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch) | 50M |
+| *lite::cv::segmentation::FaceParsingBiSeNetDyn* | face_parsing_dynamic.onnx | [face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch) | 50M |
 
 
 ## Style Transfer.
````
