This repository was archived by the owner on Apr 26, 2024. It is now read-only.

Commit 2989115

Fix typos in documentation (#12863)
1 parent e7c77a8 commit 2989115

File tree

4 files changed: +4 -3 lines changed


changelog.d/12863.doc (+1)

@@ -0,0 +1 @@
+Fix typos in documentation.

docs/message_retention_policies.md (+1 -1)

@@ -117,7 +117,7 @@ In this example, we define three jobs:
 Note that this example is tailored to show different configurations and
 features slightly more jobs than it's probably necessary (in practice, a
 server admin would probably consider it better to replace the two last
-jobs with one that runs once a day and handles rooms which which
+jobs with one that runs once a day and handles rooms which
 policy's `max_lifetime` is greater than 3 days).

 Keep in mind, when configuring these jobs, that a purge job can become

docs/structured_logging.md (+1 -1)

@@ -43,7 +43,7 @@ loggers:
 The above logging config will set Synapse as 'INFO' logging level by default,
 with the SQL layer at 'WARNING', and will log to a file, stored as JSON.

-It is also possible to figure Synapse to log to a remote endpoint by using the
+It is also possible to configure Synapse to log to a remote endpoint by using the
 `synapse.logging.RemoteHandler` class included with Synapse. It takes the
 following arguments:

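The corrected sentence above concerns remote logging via `synapse.logging.RemoteHandler`. A minimal sketch of what such a logging config might look like, assuming `host` and `port` argument names (the argument list is not shown in this hunk, and the address and port below are placeholders, not values from this commit):

```yaml
# Hedged sketch of a Synapse logging config that forwards logs
# to a remote endpoint. Values marked "assumption" are illustrative.
handlers:
  remote:
    class: synapse.logging.RemoteHandler
    host: 127.0.0.1   # address of the log collector (assumption)
    port: 9000        # port the collector listens on (assumption)

loggers:
  synapse:
    level: INFO
    handlers: [remote]
```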

docs/workers.md (+1 -1)

@@ -1,6 +1,6 @@
 # Scaling synapse via workers

-For small instances it recommended to run Synapse in the default monolith mode.
+For small instances it is recommended to run Synapse in the default monolith mode.
 For larger instances where performance is a concern it can be helpful to split
 out functionality into multiple separate python processes. These processes are
 called 'workers', and are (eventually) intended to scale horizontally

0 commit comments
