Commit 0ffefd1

formatting

1 parent 0765cff

2 files changed: +26 −24 lines


README.md (+22 −21)
@@ -5,20 +5,26 @@ A simple prompt generator API for Stable Diffusion / Midjourney / Dall-e based i
 Based on the implementation of the [FredZhang7/distilgpt2-stable-diffusion-v2](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion-v2) model.
 
 **Contributions are welcomed :)**
-## Requirements
-
-The API is based in flask and uses transformers package to interact with the model.
-
-To install required libraries run:
-
-`pip install --upgrade transformers flask flask_restful flask_limiter`
-
-## Usage
+## Installation and usage
 
 The API currently provides a POST endpoint to generate the prompt, configured to run at **/generate**
 
-Run main.py and send POST requests with the following arguments in JSON.
-
+1. Install the dependencies:
+```sh
+pip install --upgrade transformers flask flask_restful flask_limiter
+```
+2. Clone the code of this repository:
+```sh
+git clone https://github.com/jordip/prompt-generator-api.git
+```
+3. Run main.py from the root path:
+```sh
+python3 main.py
+```
+4. Send a POST request to your instance of the API:
+```sh
+curl http://127.0.0.1:5000/generate -H "Content-Type: application/json" -d '{"prompt":"cat with sunglasses"}' -X POST
+```
 ### Required arguments
 
 - prompt
@@ -27,20 +33,15 @@ Run main.py and send POST requests with the following arguments in JSON.
 ### Optional arguments
 
 - temperature
-  - A higher temperature will produce more diverse results, but with a higher risk of less coherent text
+  - A higher temperature will produce more diverse results, but with a higher risk of less coherent text.
 - top_k
-  - The number of tokens to sample from at each step
+  - The number of tokens to sample from at each step.
 - max_length
-  - The maximum number of tokens for the output of the model
+  - The maximum number of tokens for the output of the model.
 - repetition_penalty
-  - The penalty value for each repetition of a token
+  - The penalty value for each repetition of a token.
 - num_return_sequences
-  - The number of results to generate
-
-## Sample request
-
-`curl http://127.0.0.1:5000/generate -H "Content-Type: application/json" -d '{"prompt":"cat with sunglasses"}' -X POST`
-
+  - The number of results to generate.
 
 ## Features
 
 ### Blacklist
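The README changes above document a JSON POST API driven by one required argument (`prompt`) and several optional ones. As a companion to the curl example in the diff, here is a minimal Python client sketch using only the standard library; the helper names (`build_payload`, `generate`) and the hard-coded local URL are illustrative assumptions, not code from the repository.

```python
import json
from urllib import request

# Default Flask development address used in the README's curl example.
API_URL = "http://127.0.0.1:5000/generate"

def build_payload(prompt, **options):
    """Build the JSON body: 'prompt' is required; the rest are the
    optional arguments documented in the README."""
    allowed = {"temperature", "top_k", "max_length",
               "repetition_penalty", "num_return_sequences"}
    unknown = set(options) - allowed
    if unknown:
        raise ValueError(f"unknown arguments: {sorted(unknown)}")
    return {"prompt": prompt, **options}

def generate(prompt, **options):
    """POST the payload to a running instance of the API and decode
    the JSON response (requires main.py to be running locally)."""
    body = json.dumps(build_payload(prompt, **options)).encode()
    req = request.Request(API_URL, data=body,
                          headers={"Content-Type": "application/json"},
                          method="POST")
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (only with a running instance):
#   generate("cat with sunglasses", temperature=0.9, num_return_sequences=2)
```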

main.py (+4 −3)
@@ -58,6 +58,7 @@
     app=app,
     default_limits=["20 per minute"]
 )
+
 class PromptGenerator(Resource):
     """Prompt Generator Class
 
@@ -66,7 +67,7 @@ class PromptGenerator(Resource):
     """
 
     def validate_args(self, args):
-        """Validate range, set value
+        """Validate range, set dynamic variables value
 
         Args:
             args dict: Arguments provided in the request
@@ -84,7 +85,7 @@ def get_blacklist(self):
         """Check and load blacklist
 
         Returns:
-            list: List of terms from the blacklist dictionary.
+            list: List of terms from the blacklist dictionary
         """
         blacklist_filename = 'blacklist.txt'
         blacklist = []
@@ -101,7 +102,7 @@ def post(self):
         """Post method
 
         Returns:
-            string: JSON list with the generated prompts.
+            string: JSON list with the generated prompts
        """
         args = parser.parse_args()
         self.validate_args(args)
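For context, the two helpers whose docstrings this commit touches can be sketched in isolation from those docstrings alone. This is a hedged reconstruction, not the repository's code: the clamping ranges and defaults below are invented for illustration, and only the documented contracts (clamp or default the optional arguments; return a list of terms, or an empty list when `blacklist.txt` is absent) are taken from the diff.

```python
import os

# Hypothetical (low, high, default) triples; the real bounds live in
# main.py and are not visible in this diff.
ARG_RANGES = {
    "temperature": (0.1, 2.0, 0.9),
    "top_k": (1, 80, 8),
    "max_length": (10, 160, 80),
    "repetition_penalty": (1.0, 2.0, 1.2),
    "num_return_sequences": (1, 10, 1),
}

def validate_args(args):
    """Validate range, set dynamic variables value: clamp each optional
    argument into its allowed range, or fall back to its default when
    the argument is missing."""
    validated = {}
    for name, (low, high, default) in ARG_RANGES.items():
        value = args.get(name)
        validated[name] = default if value is None else min(max(value, low), high)
    return validated

def get_blacklist(blacklist_filename="blacklist.txt"):
    """Check and load blacklist: one term per line, empty list when
    the file does not exist."""
    blacklist = []
    if os.path.isfile(blacklist_filename):
        with open(blacklist_filename, encoding="utf-8") as fh:
            blacklist = [line.strip() for line in fh if line.strip()]
    return blacklist
```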
