This repository is the official PyTorch implementation of Mutual Affine Network for ***Spatially Variant*** Kernel Estimation in Blind Image Super-Resolution ([arXiv](https://arxiv.org/abs/2108.05302)).
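"Spatially variant" means the degradation model assigns a different blur kernel to every image location, rather than one global kernel. As a loose illustration of that forward model (a toy NumPy function of our own, not the repository's API or MANet's architecture), the sketch below blurs an image with a per-pixel kernel map:

```python
import numpy as np

def spatially_variant_blur(img, kernels):
    """Toy spatially variant degradation (illustrative assumption only).

    img:     (H, W) float array
    kernels: (H, W, k, k) per-pixel blur kernels, each summing to 1
    """
    h, w = img.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")  # replicate borders
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + k, j:j + k]           # neighborhood of (i, j)
            out[i, j] = np.sum(patch * kernels[i, j])  # pixel-specific kernel
    return out
```

A spatially *invariant* method is the special case where `kernels[i, j]` is the same everywhere; MANet instead estimates the full per-pixel kernel map from the low-resolution input.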
- Aug. 7, 2021: We add an [online Colab demo for MANet kernel estimation <a href="https://colab.research.google.com/gist/JingyunLiang/4ed2524d6e08343710ee408a4d997e1c/manet-demo-on-spatially-variant-kernel-estimation.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>](https://colab.research.google.com/gist/JingyunLiang/4ed2524d6e08343710ee408a4d997e1c/manet-demo-on-spatially-variant-kernel-estimation.ipynb).
- Sep. 06, 2021: See our recent work [SwinIR: Transformer-based image restoration](https://github.com/JingyunLiang/SwinIR) ([arXiv](https://arxiv.org/abs/2108.10257)). Try the [online Colab demo <a href="https://colab.research.google.com/gist/JingyunLiang/a5e3e54bc9ef8d7bf594f6fee8208533/swinir-demo-on-real-world-image-sr.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>](https://colab.research.google.com/gist/JingyunLiang/a5e3e54bc9ef8d7bf594f6fee8208533/swinir-demo-on-real-world-image-sr.ipynb).
- Aug. 17, 2021: See our previous work on blind SR: [Flow-based Kernel Prior with Application to Blind Super-Resolution (FKP), CVPR2021](https://github.com/JingyunLiang/FKP) ([arXiv](https://arxiv.org/abs/2103.15977)).
- Aug. 17, 2021: See our recent work on generative modelling of image SR: [Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow), ICCV2021](https://github.com/JingyunLiang/HCFlow) ([arXiv](https://arxiv.org/abs/2108.05301)).
- Aug. 17, 2021: See our recent work on real-world image SR: [Designing a Practical Degradation Model for Deep Blind Image Super-Resolution (BSRGAN), ICCV2021](https://github.com/cszn/BSRGAN) ([arXiv](https://arxiv.org/abs/2103.14006)).
Note: this repository is based on [BasicSR](https://github.com/xinntao/BasicSR#m).
## Quick Run

Download `stage3_MANet+RRDB_x4.pth` from the [releases](https://github.com/JingyunLiang/MANet/releases) page and put it in `./pretrained_models`. Then, run the following command. Or you can try our [online Colab demo for MANet kernel estimation <a href="https://colab.research.google.com/gist/JingyunLiang/4ed2524d6e08343710ee408a4d997e1c/manet-demo-on-spatially-variant-kernel-estimation.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>](https://colab.research.google.com/gist/JingyunLiang/4ed2524d6e08343710ee408a4d997e1c/manet-demo-on-spatially-variant-kernel-estimation.ipynb).
```bash
cd codes
python test.py --opt options/test/test_stage3.yml
```
Step3: to fine-tune RRDB with MANet, run the corresponding training command.
All trained models can be downloaded from the [releases](https://github.com/JingyunLiang/MANet/releases) page. For testing, downloading the stage3 models is enough.
We conducted experiments on both spatially variant and invariant blind SR. Please refer to the [paper](https://arxiv.org/abs/2108.05302) and [supp](https://github.com/JingyunLiang/MANet/releases) for results.