
Commit 9528da5

zsdonghao authored and wagamamaz committed
[pls merge] All pooling layers support data_format (#809)
* arrange examples
* reinforce file
* add slow
* add readme
* change online docs
* yapf
* remove images
* yapf
* fix docstyle
* don't use `import *`
* example --> examples
* docs example --> examples, tutorial --> tutorials
* fix yapf
* add changelog
* restore conflict vgg19
* all pooling support data_format
* changelog
* yapf
1 parent ecd4e6c commit 9528da5

167 files changed: +421 −155 lines changed


CHANGELOG.md

+6 −4 lines changed

```diff
@@ -90,16 +90,18 @@ To release a new version, please update the changelog as followed:
   - Release SwitchNormLayer (PR #737)
   - Release QuanConv2d, QuanConv2dWithBN, QuanDenseLayer, QuanDenseLayerWithBN (PR #735)
   - Update Core Layer to support graph (PR #751)
+  - All Pooling layers support `data_format` (PR #809)
 - Setup:
   - Creation of installation flags `all_dev`, `all_cpu_dev`, and `all_gpu_dev` (PR #739)
-- Tutorials:
+- Examples:
+  - change folder structure (PR #802)
   - `tutorial_models_vgg19` has been introduced to show how to use `tl.model.vgg19` (PR #698).
   - fix bug of `tutorial_bipedalwalker_a3c_continuous_action.py` (PR #734, Issue #732)
   - the input scale of `tutorial_models_vgg16` and `tutorial_models_vgg19` has been changed from [0, 255] to [0, 1] (PR #710)
   - `tutorial_mnist_distributed_trainer.py` and `tutorial_cifar10_distributed_trainer.py` are added to explain the uses of the Distributed Trainer (PR #700)
   - add `tutorial_quanconv_cifar10.py` and `tutorial_quanconv_mnist.py` (PR #735)
   - add `tutorial_work_with_onnx.py` (PR #775)
-- Examples:
+- Applications:
   - [Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization](https://arxiv.org/abs/1703.06868) (PR #799)

 ### Changed
@@ -146,8 +148,8 @@ To release a new version, please update the changelog as followed:
 - @DEKHTIARJonathan: #739 #747 #750 #754
 - @lgarithm: #705 #700
 - @OwenLiuzZ: #698 #710 #775 #776
-- @zsdonghao: #711 #712 #734 #736 #737 #700 #751
-- @luomai: #700 #751 #766
+- @zsdonghao: #711 #712 #734 #736 #737 #700 #751 #809
+- @luomai: #700 #751 #766 #802
 - @XJTUWYD: #735
 - @mutewall: #735
 - @thangvubk: #759
```
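The headline entry is the `data_format` support for pooling layers. Below is a minimal sketch of what the option enables, assuming the TensorLayer 1.x graph API and the Keras-style `'channels_first'`/`'channels_last'` values used by the underlying `tf.layers` pooling ops; the exact keyword values are an assumption, since this view shows only the changelog, not the layer code.

```python
import tensorflow as tf
import tensorlayer as tl

# Channels-first (NCHW) input: (batch, channels, height, width).
x = tf.placeholder(tf.float32, [None, 3, 32, 32])
net = tl.layers.InputLayer(x, name='input')

# data_format tells the pooling layer which axes are spatial, so the
# same layer works on NCHW tensors without transposing to NHWC first.
# (The 'channels_first' value is assumed from the tf.layers convention.)
net = tl.layers.MaxPool2d(
    net, filter_size=(2, 2), strides=(2, 2), padding='SAME',
    data_format='channels_first', name='pool1'
)
print(net.outputs)  # expected shape: (?, 3, 16, 16)
```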
New file (+60 lines)

```diff
@@ -0,0 +1,60 @@
+## Adaptive Style Transfer in TensorFlow and TensorLayer
+
+### Usage
+
+1. This is a TensorLayer implementation of the ICCV 2017 paper [Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization](https://arxiv.org/abs/1703.06868), which supports arbitrary styles in one single model.
+
+2. You can use the <b>train.py</b> script to train your own model. To train the model, you need to download the [MSCOCO dataset](http://cocodataset.org/#download) and the [Wikiart dataset](https://www.kaggle.com/c/painter-by-numbers), and put the dataset images under the <b>'dataset/COCO\_train\_2014'</b> and <b>'dataset/wiki\_all\_images'</b> folders.
+
+3. Alternatively, you can use the <b>test.py</b> script to run my pretrained models. The pretrained models can be downloaded from [here](https://github.com/tensorlayer/pretrained-models/tree/master/models/style_transfer_pretrained_models) and should be put into the <b>'pretrained_models'</b> folder for testing. The example images for testing can be downloaded from [here](https://github.com/tensorlayer/pretrained-models/tree/master/models/style_transfer_models_and_examples).
+
+### Results
+
+Here are some result images (left to right: content, style, result):
+
+<div align="center">
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/content/content_1.png" width=250 height=250>
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/style/style_5.png" width=250 height=250>
+<img src="./images/output/style_5_content_1.jpg" width=250 height=250>
+</div>
+
+<div align="center">
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/content/content_2.png" width=250 height=250>
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/style/style11.png" width=250 height=250>
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/output/style_11_content2.png" width=250 height=250>
+</div>
+
+<div align="center">
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/content/chicago.jpg" width=250 height=250>
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/style/cat.jpg" width=250 height=250>
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/output/cat_chicago.jpg" width=250 height=250>
+</div>
+
+<div align="center">
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/content/lance.jpg" width=250 height=250>
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/style/lion.jpg" width=250 height=250>
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/output/lion_lance.jpg" width=250 height=250>
+</div>
+
+<div align="center">
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/content/content_4.png" width=250 height=250>
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/style/style_6.png" width=250 height=250>
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/output/style_6_content_4.jpg" width=250 height=250>
+</div>
+
+<div align="center">
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/content/lance.jpg" width=250 height=250>
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/style/udnie.jpg" width=250 height=250>
+<img src="https://github.com/tensorlayer/pretrained-models/blob/master/models/style_transfer_models_and_examples/images/output/udnie_lance.jpg" width=250 height=250>
+</div>
+
+Enjoy!
+
+### License
+
+- This project is for academic use only.
```
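The example implements adaptive instance normalization (AdaIN) from the paper linked above: content features are re-normalized so their per-channel statistics match those of the style features. Here is a minimal NumPy sketch of that core operation as defined in the paper; the function name, shapes, and epsilon are illustrative assumptions, not code taken from the example's train.py or test.py.

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y),
    computed per channel over the spatial axes (Huang & Belongie, 2017).

    content_feat, style_feat: float arrays of shape (H, W, C).
    """
    mu_c = content_feat.mean(axis=(0, 1), keepdims=True)
    sigma_c = content_feat.std(axis=(0, 1), keepdims=True)
    mu_s = style_feat.mean(axis=(0, 1), keepdims=True)
    sigma_s = style_feat.std(axis=(0, 1), keepdims=True)
    # Normalize content to zero mean / unit std, then rescale with style stats.
    return sigma_s * (content_feat - mu_c) / (sigma_c + eps) + mu_s
```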

docs/index.rst

+2 −2 lines changed

```diff
@@ -31,8 +31,8 @@ to the library as a developer.
    :caption: Starting with TensorLayer

    user/installation
-   user/tutorial
-   user/example
+   user/tutorials
+   user/examples
    user/contributing
    user/get_involved
    user/faq
```