Commit ee9d147

Reduce line lengths
1 parent 2987d21 commit ee9d147

13 files changed: +133 / -38 lines


Diff for: .editorconfig (+10 lines)

````diff
@@ -0,0 +1,10 @@
+root = true
+
+# Unix-style newlines with a newline ending every file
+[*]
+end_of_line = lf
+insert_final_newline = true
+max_line_length = 120
+
+[*.md]
+max_line_length = 300
````

Diff for: CHANGELOG.md (+15 / -5 lines)

````diff
@@ -130,11 +130,21 @@ change.
 
 # [1.8.0] - 2020-12-04
 
-This patch introduces `async_std::channel`, a new submodule for our async channels implementation. `channels` have been one of async-std's most requested features, and have existed as "unstable" for the past year. We've been cautious about stabilizing channels, and this caution turned out to be warranted: we realized our channels could hang indefinitely under certain circumstances, and people ended up expressing a need for unbounded channels.
-
-So today we're introducing the new `async_std::channel` submodule which exports the `async-channel` crate, and we're marking the older unstable `async_std::sync::channel` API as "deprecated". This release includes both APIs, but we intend to stabilize `async_std::channel` and remove the older API in January. This should give dependent projects a month to upgrade, though we can extend that if it proves to be too short.
-
-The rationale for adding a new top-level `channel` submodule, rather than extending `sync` is that the `std::sync` and `async_std::sync` submodule are a bit of a mess, and the libs team [has been talking about splitting `std::sync` up](https://github.com/rust-lang/rfcs/pull/2788#discussion_r339092478) into separate modules. The stdlib has to guarantee it'll forever be backwards compatible, but `async-std` does not (we fully expect a 2.0 once we have async closures & traits). So we're experimenting with this change before `std` does, with the expectation that this change can serve as a data point when the libs team decides how to proceed in std.
+This patch introduces `async_std::channel`, a new submodule for our async channels implementation. `channels` have been
+one of async-std's most requested features, and have existed as "unstable" for the past year. We've been cautious about
+stabilizing channels, and this caution turned out to be warranted: we realized our channels could hang indefinitely
+under certain circumstances, and people ended up expressing a need for unbounded channels.
+
+So today we're introducing the new `async_std::channel` submodule which exports the `async-channel` crate, and we're
+marking the older unstable `async_std::sync::channel` API as "deprecated". This release includes both APIs, but we
+intend to stabilize `async_std::channel` and remove the older API in January. This should give dependent projects a
+month to upgrade, though we can extend that if it proves to be too short.
+
+The rationale for adding a new top-level `channel` submodule, rather than extending `sync` is that the `std::sync` and
+`async_std::sync` submodule are a bit of a mess, and the libs team [has been talking about splitting `std::sync` up](https://github.com/rust-lang/rfcs/pull/2788#discussion_r339092478)
+into separate modules. The stdlib has to guarantee it'll forever be backwards compatible, but `async-std` does not
+(we fully expect a 2.0 once we have async closures & traits). So we're experimenting with this change before `std`
+does, with the expectation that this change can serve as a data point when the libs team decides how to proceed in std.
 
 ### Added
````
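
For readers following along, here is a minimal, illustrative sketch of the new API in use. It is not part of the diff, and it assumes a release where the `channel` module is exposed (on 1.8.0 it may still sit behind the `unstable` feature flag); the constructors are the re-exports from `async-channel`:

```rust
use async_std::{channel, task};

fn main() {
    task::block_on(async {
        // A bounded channel with room for one in-flight message;
        // `channel::unbounded()` covers the unbounded case mentioned above.
        let (sender, receiver) = channel::bounded(1);

        task::spawn(async move {
            sender.send("hello").await.expect("receiver dropped");
        });

        assert_eq!(receiver.recv().await.unwrap(), "hello");
    });
}
```

The older `async_std::sync::channel` API still works in this release, but new code should reach for the `channel` module, since the deprecated API is slated for removal.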

Diff for: docs/src/concepts.md (+3 / -1 lines)

````diff
@@ -4,7 +4,9 @@
 
 However, there are good reasons for that perception. Futures have three concepts at their base that seem to be a constant source of confusion: deferred computation, asynchronicity and independence of execution strategy.
 
-These concepts are not hard, but something many people are not used to. This base confusion is amplified by many implementations oriented on details. Most explanations of these implementations also target advanced users, and can be hard for beginners. We try to provide both easy-to-understand primitives and approachable overviews of the concepts.
+These concepts are not hard, but something many people are not used to. This base confusion is amplified by many
+implementations oriented on details. Most explanations of these implementations also target advanced users, and can
+be hard for beginners. We try to provide both easy-to-understand primitives and approachable overviews of the concepts.
 
 Futures are a concept that abstracts over how code is run. By themselves, they do nothing. This is a weird concept in an imperative language, where usually one thing happens after the other - right now.
````

Diff for: docs/src/concepts/futures.md (+50 / -14 lines)

````diff
@@ -1,34 +1,60 @@
 # Futures
 
-A notable point about Rust is [*fearless concurrency*](https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.html). That is the notion that you should be empowered to do concurrent things, without giving up safety. Also, Rust being a low-level language, it's about fearless concurrency *without picking a specific implementation strategy*. This means we *must* abstract over the strategy, to allow choice *later*, if we want to have any way to share code between users of different strategies.
+A notable point about Rust is [*fearless concurrency*](https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.html).
+That is the notion that you should be empowered to do concurrent things, without giving up safety. Also, Rust being a
+low-level language, it's about fearless concurrency *without picking a specific implementation strategy*. This means we
+*must* abstract over the strategy, to allow choice *later*, if we want to have any way to share code between users of
+different strategies.
 
-Futures abstract over *computation*. They describe the "what", independent of the "where" and the "when". For that, they aim to break code into small, composable actions that can then be executed by a part of our system. Let's take a tour through what it means to compute things to find where we can abstract.
+Futures abstract over *computation*. They describe the "what", independent of the "where" and the "when". For that,
+they aim to break code into small, composable actions that can then be executed by a part of our system. Let's take a
+tour through what it means to compute things to find where we can abstract.
 
 ## Send and Sync
 
-Luckily, concurrent Rust already has two well-known and effective concepts abstracting over sharing between concurrent parts of a program: `Send` and `Sync`. Notably, both the `Send` and `Sync` traits abstract over *strategies* of concurrent work, compose neatly, and don't prescribe an implementation.
+Luckily, concurrent Rust already has two well-known and effective concepts abstracting over sharing between concurrent
+parts of a program: `Send` and `Sync`. Notably, both the `Send` and `Sync` traits abstract over *strategies* of
+concurrent work, compose neatly, and don't prescribe an implementation.
 
 As a quick summary:
 
-- `Send` abstracts over *passing data* in a computation to another concurrent computation (let's call it the receiver), losing access to it on the sender side. In many programming languages, this strategy is commonly implemented, but missing support from the language side, and expects you to enforce the "losing access" behaviour yourself. This is a regular source of bugs: senders keeping handles to sent things around and maybe even working with them after sending. Rust mitigates this problem by making this behaviour known. Types can be `Send` or not (by implementing the appropriate marker trait), allowing or disallowing sending them around, and the ownership and borrowing rules prevent subsequent access.
-
-- `Sync` is about *sharing data* between two concurrent parts of a program. This is another common pattern: as writing to a memory location or reading while another party is writing is inherently unsafe, this access needs to be moderated through synchronisation.[^1] There are many common ways for two parties to agree on not using the same part in memory at the same time, for example mutexes and spinlocks. Again, Rust gives you the option of (safely!) not caring. Rust gives you the ability to express that something *needs* synchronisation while not being specific about the *how*.
-
-Note how we avoided any word like *"thread"*, but instead opted for "computation". The full power of `Send` and `Sync` is that they relieve you of the burden of knowing *what* shares. At the point of implementation, you only need to know which method of sharing is appropriate for the type at hand. This keeps reasoning local and is not influenced by whatever implementation the user of that type later uses.
+- `Send` abstracts over *passing data* in a computation to another concurrent computation (let's call it the receiver),
+losing access to it on the sender side. In many programming languages, this strategy is commonly implemented, but
+missing support from the language side, and expects you to enforce the "losing access" behaviour yourself.
+This is a regular source of bugs: senders keeping handles to sent things around and maybe even working with them
+after sending. Rust mitigates this problem by making this behaviour known. Types can be `Send` or not
+(by implementing the appropriate marker trait), allowing or disallowing sending them around, and the ownership and
+borrowing rules prevent subsequent access.
+
+- `Sync` is about *sharing data* between two concurrent parts of a program. This is another common pattern: as writing
+to a memory location or reading while another party is writing is inherently unsafe, this access needs to be
+moderated through synchronisation.[^1] There are many common ways for two parties to agree on not using the same part
+in memory at the same time, for example mutexes and spinlocks. Again, Rust gives you the option of (safely!) not
+caring. Rust gives you the ability to express that something *needs* synchronisation while not being specific about
+the *how*.
+
+Note how we avoided any word like *"thread"*, but instead opted for "computation". The full power of `Send` and `Sync`
+is that they relieve you of the burden of knowing *what* shares. At the point of implementation, you only need to know
+which method of sharing is appropriate for the type at hand. This keeps reasoning local and is not influenced by
+whatever implementation the user of that type later uses.
 
 `Send` and `Sync` can be composed in interesting fashions, but that's beyond the scope here. You can find examples in the [Rust Book][rust-book-sync].
 
 [rust-book-sync]: https://doc.rust-lang.org/stable/book/ch16-04-extensible-concurrency-sync-and-send.html
 
-To sum up: Rust gives us the ability to safely abstract over important properties of concurrent programs, their data sharing. It does so in a very lightweight fashion; the language itself only knows about the two markers `Send` and `Sync` and helps us a little by deriving them itself, when possible. The rest is a library concern.
+To sum up: Rust gives us the ability to safely abstract over important properties of concurrent programs, their data
+sharing. It does so in a very lightweight fashion; the language itself only knows about the two markers `Send` and
+`Sync` and helps us a little by deriving them itself, when possible. The rest is a library concern.
 
 ## An easy view of computation
 
 While computation is a subject to write a whole [book](https://computationbook.com/) about, a very simplified view suffices for us: A sequence of composable operations which can branch based on a decision, run to succession and yield a result or yield an error
 
 ## Deferring computation
 
-As mentioned above, `Send` and `Sync` are about data. But programs are not only about data, they also talk about *computing* the data. And that's what [`Futures`][futures] do. We are going to have a close look at how that works in the next chapter. Let's look at what Futures allow us to express, in English. Futures go from this plan:
+As mentioned above, `Send` and `Sync` are about data. But programs are not only about data, they also talk about *computing*
+the data. And that's what [`Futures`][futures] do. We are going to have a close look at how that works in the next chapter.
+Let's look at what Futures allow us to express, in English. Futures go from this plan:
 
 - Do X
 - If X succeeded, do Y
````
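
As an aside for readers of the rendered chapter: a minimal sketch of the two markers in action, using plain `std` threads as one concrete strategy. This example is illustrative only and is not part of the commit:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // `Send`: the Vec is moved to another computation (a thread here);
    // the sender loses access, so there is no use-after-send.
    let data = vec![1, 2, 3];
    let handle = thread::spawn(move || data.iter().sum::<i32>());
    assert_eq!(handle.join().unwrap(), 6);

    // `Sync`: several computations share one value; the Mutex expresses that
    // access *needs* synchronisation without prescribing how callers run.
    let shared = Arc::new(Mutex::new(0));
    let workers: Vec<_> = (0..4)
        .map(|_| {
            let shared = Arc::clone(&shared);
            thread::spawn(move || *shared.lock().unwrap() += 1)
        })
        .collect();
    for worker in workers {
        worker.join().unwrap();
    }
    assert_eq!(*shared.lock().unwrap(), 4);
}
```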
````diff
@@ -73,7 +99,9 @@ fn read_file(path: &str) -> io::Result<String> {
 }
 ```
 
-Speaking in terms of time, we can only take action *before* calling the function or *after* the function returned. This is not desirable, as it takes from us the ability to do something *while* it runs. When working with parallel code, this would take from us the ability to start a parallel task while the first runs (because we gave away control).
+Speaking in terms of time, we can only take action *before* calling the function or *after* the function returned.
+This is not desirable, as it takes from us the ability to do something *while* it runs. When working with parallel
+code, this would take from us the ability to start a parallel task while the first runs (because we gave away control).
 
 This is the moment where we could reach for [threads](https://en.wikipedia.org/wiki/Thread_). But threads are a very specific concurrency primitive and we said that we are searching for an abstraction.
````
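
A hypothetical sketch of the thread-based escape hatch that paragraph mentions, with `read_file` standing in for the blocking function from the example above:

```rust
use std::{fs, io, thread};

// Stand-in for the blocking `read_file` from the chapter's example.
fn read_file(path: &str) -> io::Result<String> {
    fs::read_to_string(path)
}

fn main() -> io::Result<()> {
    // Hand the blocking call to another OS thread so the caller keeps control.
    let reader = thread::spawn(|| read_file("Cargo.toml"));

    // ... do something else *while* the read runs ...

    let contents = reader.join().expect("reader thread panicked")?;
    println!("read {} bytes", contents.len());
    Ok(())
}
```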

````diff
@@ -124,9 +152,17 @@ This `async` function sets up a deferred computation. When this function is call
 
 ## What does `.await` do?
 
-The `.await` postfix does exactly what it says on the tin: the moment you use it, the code will wait until the requested action (e.g. opening a file or reading all data in it) is finished. The `.await?` is not special, it's just the application of the `?` operator to the result of `.await`. So, what is gained over the initial code example? We're getting futures and then immediately waiting for them?
-
-The `.await` points act as a marker. Here, the code will wait for a `Future` to produce its value. How will a future finish? You don't need to care! The marker allows the component (usually called the “runtime”) in charge of *executing* this piece of code to take care of all the other things it has to do while the computation finishes. It will come back to this point when the operation you are doing in the background is done. This is why this style of programming is also called *evented programming*. We are waiting for *things to happen* (e.g. a file to be opened) and then react (by starting to read).
+The `.await` postfix does exactly what it says on the tin: the moment you use it, the code will wait until the
+requested action (e.g. opening a file or reading all data in it) is finished. The `.await?` is not special, it's just
+the application of the `?` operator to the result of `.await`. So, what is gained over the initial code example? We're
+getting futures and then immediately waiting for them?
+
+The `.await` points act as a marker. Here, the code will wait for a `Future` to produce its value. How will a future
+finish? You don't need to care! The marker allows the component (usually called the “runtime”) in charge of *executing*
+this piece of code to take care of all the other things it has to do while the computation finishes. It will come back
+to this point when the operation you are doing in the background is done. This is why this style of programming is also
+called *evented programming*. We are waiting for *things to happen* (e.g. a file to be opened) and then react
+(by starting to read).
 
 When executing 2 or more of these functions at the same time, our runtime system is then able to fill the wait time with handling *all the other events* currently going on.
````
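
To make those wrapped paragraphs concrete, a small sketch in the style of the chapter's `read_file` example, assuming `async_std`'s `fs`, `prelude` and `task` modules; each `.await` is one of the marker points described above:

```rust
use async_std::{fs::File, io, prelude::*, task};

async fn read_file(path: &str) -> io::Result<String> {
    // Each `.await` lets the runtime do other work until this operation finishes.
    let mut file = File::open(path).await?;
    let mut contents = String::new();
    file.read_to_string(&mut contents).await?;
    Ok(contents)
}

fn main() {
    match task::block_on(read_file("Cargo.toml")) {
        Ok(contents) => println!("read {} bytes", contents.len()),
        Err(err) => eprintln!("error reading file: {:?}", err),
    }
}
```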

Diff for: docs/src/concepts/tasks.md (+17 / -4 lines)

````diff
@@ -61,11 +61,20 @@ But let's get to the interesting part:
 task::spawn(async { });
 ```
 
-`spawn` takes a `Future` and starts running it on a `Task`. It returns a `JoinHandle`. Futures in Rust are sometimes called *cold* Futures. You need something that starts running them. To run a Future, there may be some additional bookkeeping required, e.g. whether it's running or finished, where it is being placed in memory and what the current state is. This bookkeeping part is abstracted away in a `Task`.
+`spawn` takes a `Future` and starts running it on a `Task`. It returns a `JoinHandle`. Futures in Rust are sometimes
+called *cold* Futures. You need something that starts running them. To run a Future, there may be some additional
+bookkeeping required, e.g. whether it's running or finished, where it is being placed in memory and what the current
+state is. This bookkeeping part is abstracted away in a `Task`.
 
-A `Task` is similar to a `Thread`, with some minor differences: it will be scheduled by the program instead of the operating system kernel, and if it encounters a point where it needs to wait, the program itself is responsible for waking it up again. We'll talk a little bit about that later. An `async_std` task can also have a name and an ID, just like a thread.
+A `Task` is similar to a `Thread`, with some minor differences: it will be scheduled by the program instead of the
+operating system kernel, and if it encounters a point where it needs to wait, the program itself is responsible for
+waking it up again. We'll talk a little bit about that later. An `async_std` task can also have a name and an ID,
+just like a thread.
 
-For now, it is enough to know that once you have `spawn`ed a task, it will continue running in the background. The `JoinHandle` is itself a future that will finish once the `Task` has run to conclusion. Much like with `threads` and the `join` function, we can now call `block_on` on the handle to *block* the program (or the calling thread, to be specific) and wait for it to finish.
+For now, it is enough to know that once you have `spawn`ed a task, it will continue running in the background.
+The `JoinHandle` is itself a future that will finish once the `Task` has run to conclusion. Much like with `threads`
+and the `join` function, we can now call `block_on` on the handle to *block* the program (or the calling thread, to be
+specific) and wait for it to finish.
 
 ## Tasks in `async_std`
````
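
A compact, illustrative sketch of what those wrapped paragraphs describe (not part of the diff), assuming only `async_std::task`:

```rust
use async_std::task;

fn main() {
    // `spawn` starts the future on a task and hands back a `JoinHandle`.
    let handle = task::spawn(async { 1 + 2 });

    // The handle is itself a future; blocking on it waits for the task to finish.
    let result = task::block_on(handle);
    assert_eq!(result, 3);
}
```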

````diff
@@ -80,7 +89,11 @@ Tasks in `async_std` are one of the core abstractions. Much like Rust's `thread`
 
 ## Blocking
 
-`Task`s are assumed to run _concurrently_, potentially by sharing a thread of execution. This means that operations blocking an _operating system thread_, such as `std::thread::sleep` or io function from Rust's `std` library will _stop execution of all tasks sharing this thread_. Other libraries (such as database drivers) have similar behaviour. Note that _blocking the current thread_ is not in and of itself bad behaviour, just something that does not mix well with the concurrent execution model of `async-std`. Essentially, never do this:
+`Task`s are assumed to run _concurrently_, potentially by sharing a thread of execution. This means that operations
+blocking an _operating system thread_, such as `std::thread::sleep` or io function from Rust's `std` library will
+_stop execution of all tasks sharing this thread_. Other libraries (such as database drivers) have similar behaviour.
+Note that _blocking the current thread_ is not in and of itself bad behaviour, just something that does not mix well
+with the concurrent execution model of `async-std`. Essentially, never do this:
 
 ```rust,edition2018
 # extern crate async_std;
````
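
As a sketch of the contrast this paragraph draws (illustrative, assuming `async_std::task::sleep` as the non-blocking counterpart to `std::thread::sleep`):

```rust
use async_std::task;
use std::time::Duration;

fn main() {
    task::block_on(async {
        // Blocking the OS thread would stall every task scheduled on it:
        // std::thread::sleep(Duration::from_secs(1));

        // Yielding to the runtime lets other tasks make progress meanwhile.
        task::sleep(Duration::from_secs(1)).await;
    });
}
```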
