# Production-Ready Accept Loop

A production-ready accept loop needs the following things:
1. Handling errors
2. Limiting the number of simultaneous connections to avoid denial-of-service
   (DoS) attacks


## Handling errors

There are two kinds of errors in an accept loop:
1. Per-connection errors. The system uses them to notify that there was a
   connection in the queue and it's dropped by the peer. Subsequent connections
   may already be queued, so the next connection must be accepted immediately.
2. Resource shortages. When these are encountered it doesn't make sense to
   accept the next socket immediately. But the listener stays active, so your
   server should try to accept a socket later (see the sketch below).
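
One way to structure such a loop is sketched below. It assumes `async-std`; the
`handle_connection` function, the `ErrorKind`-based error classification, and the
500ms back-off are placeholder choices for illustration rather than fixed
requirements:

```rust,edition2018
# extern crate async_std;
use std::io::{self, ErrorKind};
use std::time::Duration;
use async_std::{net::{TcpListener, TcpStream}, prelude::*, task};

async fn accept_loop(addr: &str) -> io::Result<()> {
    let listener = TcpListener::bind(addr).await?;
    let mut incoming = listener.incoming();
    while let Some(connection) = incoming.next().await {
        match connection {
            Ok(stream) => {
                // Hand the connection off to its own task.
                task::spawn(handle_connection(stream));
            }
            // Per-connection error: the peer is already gone, so just move on
            // to the next (possibly already queued) connection.
            Err(e) if is_connection_error(&e) => continue,
            // Resource shortage (e.g. "Too many open files"): log it and back
            // off before accepting again, instead of spinning at 100% CPU.
            Err(e) => {
                eprintln!("accept error: {}; pausing for 500ms", e);
                task::sleep(Duration::from_millis(500)).await;
            }
        }
    }
    Ok(())
}

fn is_connection_error(e: &io::Error) -> bool {
    // One possible heuristic for telling the two kinds of errors apart.
    matches!(e.kind(), ErrorKind::ConnectionReset | ErrorKind::ConnectionAborted)
}

async fn handle_connection(_stream: TcpStream) {
    // The actual protocol handling would go here.
}
```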

Here is an example of a per-connection error (printed in normal and debug mode):
```
Error: Connection reset by peer (os error 104)
Error: Os { code: 104, kind: ConnectionReset, message: "Connection reset by peer" }
```
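
And a resource shortage error looks like this in its debug form:
```
Error: Os { code: 24, kind: Other, message: "Too many open files" }
```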

### Testing Application

To test your application for these errors, try the following (this works
on Unix-like systems only).

Lower the open file limit and start the application:
```
$ ulimit -n 100
$ cargo run --example your_app
     Running `target/debug/examples/your_app`
Server is listening on: http://127.0.0.1:1234
```
Then in another console run the [`wrk`] benchmark tool:
```
$ wrk -c 1000 http://127.0.0.1:1234
Running 10s test @ http://127.0.0.1:1234
$ telnet localhost 1234
Connected to localhost.
```

It's important to check the following things:

1. The application doesn't crash on error (but may log errors, see below).
2. It's possible to connect to the application again once the load is stopped
   (a few seconds after `wrk`). This is what `telnet` does in the example above;
   make sure it prints `Connected to <hostname>`.
3. The `Too many open files` error is logged in the appropriate log. This requires
   setting the "maximum number of simultaneous connections" parameter (see below)
   of your application to a value greater than `100` for this example.
4. Check CPU usage of the app while doing the test. It should not occupy 100%
   of a single CPU core (it's unlikely that you can exhaust the CPU with 1000
   connections in Rust, so 100% usage means the error handling is not right).

#### Testing non-HTTP applications

If possible, use the appropriate benchmark tool and set the appropriate
number of connections. For example, `redis-benchmark` has a `-c` parameter for
that, if you implement the Redis protocol.

Alternatively, you can still use `wrk`; just make sure that the connection is not
closed immediately. If it is, put a temporary timeout before handing
the connection to the protocol handler.
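
A minimal sketch of such a temporary delay, assuming `async-std` and a placeholder
`connection_loop` handler (the sleep only needs to outlast the benchmark run),
could look like this:

```rust,edition2018
# extern crate async_std;
use std::time::Duration;
use async_std::{net::TcpStream, task};

async fn connection_loop(stream: TcpStream) {
    // Temporary: hold the connection open so the benchmark keeps it occupied
    // and the connection/file-descriptor limit can actually be reached.
    task::sleep(Duration::from_secs(10)).await;
    let _ = stream; // the real protocol handling would go here
}
```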

### External Crates

The crate [`async-listen`] has helpers for this task.
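
A sketch of an accept loop built on its `ListenExt` adapters, `log_warnings` and
`handle_errors` (check the crate's documentation for the exact signatures), with a
placeholder `connection_loop`, could look like this:

```rust,edition2018
# extern crate async_std;
# extern crate async_listen;
use std::io;
use std::time::Duration;
use async_std::{net::{TcpListener, TcpStream}, prelude::*, task};
use async_listen::ListenExt;

async fn accept_loop(addr: &str) -> io::Result<()> {
    let listener = TcpListener::bind(addr).await?;
    let mut incoming = listener
        .incoming()
        // Log every accept error without stopping the loop.
        .log_warnings(|e: &io::Error| eprintln!("accept error: {}", e))
        // Skip per-connection errors and pause on resource shortages,
        // yielding plain `TcpStream`s instead of `Result`s.
        .handle_errors(Duration::from_millis(500));
    while let Some(stream) = incoming.next().await {
        task::spawn(connection_loop(stream));
    }
    Ok(())
}

async fn connection_loop(_stream: TcpStream) {
    // The actual protocol handling would go here.
}
```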

Let's imagine you have a server that needs to open a file to process a
client request. At some point, you might encounter the following situation:

1. There are as many client connections as the maximum number of file
   descriptors allowed for the application.
2. The listener gets a `Too many open files` error, so it sleeps.
3. Some client sends a request via a previously opened connection.
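
To avoid this, keep the number of simultaneous connections below the file
descriptor limit so that some descriptors stay available for processing requests.
The sketch below shows one way to do that with backpressure tokens in the spirit
of [`async-listen`]; the `backpressure` adapter and the `Token` type used here are
assumptions to verify against the crate's documentation. The numbered markers are
referenced by the notes that follow:

```rust,edition2018
# extern crate async_std;
# extern crate async_listen;
use std::io;
use std::time::Duration;
use async_std::{net::{TcpListener, TcpStream}, prelude::*, task};
use async_listen::{ListenExt, Token}; // assumed re-export of the backpressure token

async fn accept_loop(addr: &str) -> io::Result<()> {
    let listener = TcpListener::bind(addr).await?;
    let mut incoming = listener
        .incoming()
        .log_warnings(|e: &io::Error| eprintln!("accept error: {}", e))
        .handle_errors(Duration::from_millis(500)) // 1
        .backpressure(100); // assumed adapter: cap at 100 simultaneous connections
    while let Some((token, stream)) = incoming.next().await { // 2
        task::spawn(async move {
            connection_loop(&token, stream).await; // 3
        });
    }
    Ok(())
}

async fn connection_loop(_token: &Token, stream: TcpStream) { // 4
    let _ = stream; // the actual protocol handling would go here
}
```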

1. Together, the error-handling adapters give us a stream of `TcpStream` rather
   than `Result`.
2. The token yielded by the new stream is what is counted by the backpressure
   helper, i.e. if you drop a token, a new connection can be established.
3. We give the connection loop a reference to the token to bind the token's
   lifetime to the lifetime of the connection.
4. The token itself in the function can be ignored, hence `_token`.