
Commit e3450b5

move out administration/
1 parent 46c0f9f commit e3450b5


46 files changed: 319 additions, 284 deletions

ydb/docs/en/core/administration/production-storage-config.md

Lines changed: 0 additions & 78 deletions
This file was deleted.

ydb/docs/en/core/contributor/load-actors-storage.md

Lines changed: 1 addition & 1 deletion
@@ -146,4 +146,4 @@ StorageLoad: {
     }
 }
 ```
-Calculated percentiles will only represent the requests of the main load cycle and won't include write requests sent during the initial data allocation. The [graphs in Monitoring](../administration/grafana-dashboards.md) are worth examining: for example, they let you trace the request latency degradation caused by the increasing load.
+Calculated percentiles will only represent the requests of the main load cycle and won't include write requests sent during the initial data allocation. The [graphs in Monitoring](../reference/observability/metrics/grafana-dashboards.md) are worth examining: for example, they let you trace the request latency degradation caused by the increasing load.

ydb/docs/en/core/contributor/manage-releases.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Thus, {{ ydb-short-name }} server major version is a combination of the first tw

 ### Compatibility {#server-compatibility}

-{{ ydb-short-name }} maintains compatibility between major versions to ensure a cluster can operate while its nodes run two adjacent major versions of the YDB server executable. Refer to the [Updating {{ ydb-short-name }}](../administration/upgrade.md) article to learn more about the cluster upgrade procedure.
+{{ ydb-short-name }} maintains compatibility between major versions to ensure a cluster can operate while its nodes run two adjacent major versions of the YDB server executable. Refer to the [Updating {{ ydb-short-name }}](../devops/manual/upgrade.md) article to learn more about the cluster upgrade procedure.

 Given the above compatibility target, major releases come in pairs: odd-numbered releases add new functionality switched off by feature flags, and even-numbered releases enable that functionality by default.

ydb/docs/en/core/deploy/_includes/index.md

Lines changed: 0 additions & 1 deletion
@@ -8,6 +8,5 @@ This section provides information on deploying and configuring multi-node {{ ydb
 * [Deployment with Kubernetes](../../devops/kubernetes/index.md).
 * [Deployment on virtual and physical servers](../manual/deploy-ydb-on-premises.md).
 * [Configuration](../configuration/config.md).
-* [BlobStorage production configurations](../../administration/production-storage-config.md).

 Step-by-step scenarios for rapidly deploying a local single-node cluster for development and testing are given in the [Quick start](../../quickstart.md) section.

ydb/docs/en/core/deploy/configuration/config.md

Lines changed: 80 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -479,6 +479,86 @@ blob_storage_config:
479479

480480
For a configuration located in 3 availability zones, specify 3 rings. For a configuration within a single availability zone, specify exactly one ring.
481481

+## BlobStorage production configurations
+
+To ensure the required fault tolerance of {{ ydb-short-name }}, configure the [cluster disk subsystem](../../concepts/cluster/distributed_storage.md) properly: select the appropriate [fault tolerance mode](#fault-tolerance) and [hardware configuration](#requirements) for your cluster.
+
+### Fault tolerance modes {#fault-tolerance}
+
+We recommend using the following [fault tolerance modes](../../concepts/topology.md) for {{ ydb-short-name }} production installations:
+
+* `block-4-2`: For a cluster hosted in a single availability zone.
+* `mirror-3-dc`: For a cluster hosted in three availability zones.
+
+The fail model of {{ ydb-short-name }} is based on the concepts of fail domain and fail realm.
+
+Fail domain
+
+: A set of hardware that may fail concurrently.
+
+  For example, a fail domain includes the disks of the same server (as all server disks may become unavailable if the server PSU or network controller fails). A fail domain also includes servers located in the same server rack (as all the hardware in the rack may become unavailable if there is a power outage or an issue with the network hardware in that rack).
+
+  Any failure of a single fail domain is handled automatically, without shutting down the system.
+
+Fail realm
+
+: A set of fail domains that may fail concurrently.
+
+  An example of a fail realm is hardware located in the same data center, which may fail as a result of a natural disaster.
+
+Usually a fail domain is a server rack, while a fail realm is a data center.
+
+When creating a [storage group](../../concepts/databases.md#storage-groups), {{ ydb-short-name }} groups VDisks that are located on PDisks from different fail domains. For `block-4-2` mode, a group should be distributed across at least 8 fail domains, and for `mirror-3-dc` mode, across 3 fail realms with at least 3 fail domains in each of them.
+
+### Hardware configuration {#requirements}
+
+If a disk fails, {{ ydb-short-name }} can automatically reconfigure a storage group so that a new VDisk is used instead of the VDisk located on the failed hardware; the system tries to place the new VDisk on hardware that is still running while the group is being reconfigured. The same rule applies as when creating a group: the new VDisk is created in a fail domain different from the fail domains of every other VDisk in the group (and, for `mirror-3-dc`, in the same fail realm as the failed VDisk).
+
+This causes some issues when a cluster's hardware is distributed across the minimum required number of fail domains:
+
+* If an entire fail domain is down, reconfiguration no longer makes sense, since a new VDisk could only be placed in the fail domain that is down.
+* If part of a fail domain is down, reconfiguration is possible, but the load that was previously handled by the failed hardware will only be redistributed across hardware in the same fail domain.
+
+If the number of fail domains in a cluster exceeds the minimum required for creating storage groups by at least one (that is, 9 domains for `block-4-2` and 4 domains in each fail realm for `mirror-3-dc`), then, when some hardware fails, the load can be redistributed across all the hardware that is still running.
+
+The system can work with fail domains of any size. However, if there are few domains and a different number of disks in different domains, the number of storage groups that you can create will be limited. In this case, some hardware in fail domains that are too large may be underutilized. If the hardware is used in full, significant imbalances in domain sizes may make reconfiguration impossible.
+
+For example, consider a cluster in `block-4-2` fault tolerance mode with 15 racks. The first of the 15 racks hosts 20 servers, and the other 14 racks host 10 servers each. To fully utilize all 20 servers from the first rack, {{ ydb-short-name }} will create groups so that 1 disk from this largest fail domain is used in each group. As a result, if the hardware of any other fail domain goes down, the load cannot be redistributed to the hardware in the first rack.
+
+{{ ydb-short-name }} can group disks of different vendors, capacities, and speeds. The resulting characteristics of a group depend on the worst characteristics of the hardware serving it, so the best results are usually achieved with identical hardware. When creating large clusters, keep in mind that hardware from the same batch is more likely to have the same defect and fail simultaneously.
+
+Therefore, we recommend the following optimal hardware configurations for production installations:
+
+* **A cluster hosted in 1 availability zone**: It uses `block-4-2` fault tolerance mode and consists of 9 or more racks with the same number of identical servers in each rack.
+* **A cluster hosted in 3 availability zones**: It uses `mirror-3-dc` fault tolerance mode and is distributed across 3 data centers with 4 or more racks in each of them, the racks being equipped with the same number of identical servers.
+
+See also [{#T}](#reduced).
+
+### Redundancy recovery {#rebuild}
+
+Automatic reconfiguration of storage groups reduces the risk of data loss in the event of multiple failures that occur within intervals sufficient to recover the redundancy. By default, reconfiguration is done one hour after {{ ydb-short-name }} detects a failure.
+
+Once a group is reconfigured, the new VDisk is automatically populated with data to restore the required storage redundancy in the group. This increases the load on the other VDisks in the group and on the network. To reduce the impact of redundancy recovery on system performance, the total data replication speed is limited on both the source and target VDisks.
+
+The time it takes to restore the redundancy depends on the amount of data and the hardware performance. For example, replication on fast NVMe SSDs may take an hour, while on large HDDs it may take more than 24 hours. For reconfiguration to be possible at all, a cluster should have free slots for creating VDisks in different fail domains. When determining the number of slots to keep free, factor in the risk of hardware failure and the time it takes to replicate data and replace the failed hardware.
+
+### Simplified hardware configurations {#reduced}
+
+If it's not possible to use the [recommended amount](#requirements) of hardware, you can divide the servers within a single rack into two dummy fail domains. In this configuration, a failure of 1 rack means a failure of 2 domains rather than one. In [both fault tolerance modes](#fault-tolerance), {{ ydb-short-name }} keeps running if 2 domains fail. With dummy fail domains, the minimum number of racks in a cluster is 5 for `block-4-2` mode and 2 in each data center for `mirror-3-dc` mode.
+
+### Fault tolerance level {#reliability}
+
+The table below describes the fault tolerance levels for different fault tolerance modes and hardware configurations of a {{ ydb-short-name }} cluster:
+
+Fault tolerance<br>mode | Fail<br>domain | Fail<br>realm | Number of<br>data centers | Number of<br>server racks | Fault tolerance<br>level
+:--- | :---: | :---: | :---: | :---: | :---
+`block-4-2` | Rack | Data center | 1 | 9 or more | Can withstand the failure of 2 racks
+`block-4-2` | ½ rack | Data center | 1 | 5 or more | Can withstand the failure of 1 rack
+`block-4-2` | Server | Data center | 1 | Doesn't matter | Can withstand the failure of 2 servers
+`mirror-3-dc` | Rack | Data center | 3 | 4 in each data center | Can withstand the failure of a data center plus 1 rack in one of the two other data centers
+`mirror-3-dc` | Server | Data center | 3 | Doesn't matter | Can withstand the failure of a data center plus 1 server in one of the two other data centers
+
 ## Sample cluster configurations {#examples}

 You can find model cluster configurations for deployment in the [repository](https://github.com/ydb-platform/ydb/tree/main/ydb/deploy/yaml_config_examples/). Check them out before deploying a cluster.
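For orientation, the fault tolerance mode selected in the section above ultimately appears in the static group definition under `blob_storage_config`. The abridged sketch below follows the layout of the sample configurations in the repository referenced above; the node IDs, disk paths, and PDisk category are placeholders rather than a recommended production layout.

```yaml
# Abridged sketch of a static storage group in mirror-3-dc mode.
# Node IDs and disk paths below are placeholders.
static_erasure: mirror-3-dc
blob_storage_config:
  service_set:
    groups:
    - erasure_species: mirror-3-dc
      rings:                  # one ring per availability zone, 3 in total
      - fail_domains:         # one fail domain per rack, at least 3 per ring
        - vdisk_locations:
          - node_id: 1
            pdisk_category: SSD
            path: /dev/disk/by-partlabel/ydb_disk_ssd_01
        - vdisk_locations:
          - node_id: 2
            pdisk_category: SSD
            path: /dev/disk/by-partlabel/ydb_disk_ssd_01
        - vdisk_locations:
          - node_id: 3
            pdisk_category: SSD
            path: /dev/disk/by-partlabel/ydb_disk_ssd_01
      # The second and third rings (availability zones) repeat the same
      # fail_domains structure with nodes 4-6 and 7-9, respectively.
```

For `block-4-2`, the group would instead contain a single ring with at least 8 fail domains (one per rack).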

ydb/docs/en/core/deploy/toc_i.yaml

Lines changed: 0 additions & 2 deletions
@@ -3,6 +3,4 @@ items:
   href: manual/deploy-ydb-on-premises.md
 - name: Configuration
   href: configuration/config.md
-- name: BlobStorage production configurations
-  href: ../administration/production-storage-config.md

@@ -1,5 +1,5 @@
 {% note info %}

-A {{ ydb-short-name }} cluster is fault-tolerant. Temporarily shutting down a node doesn't affect cluster availability. For details, see [{#T}](../concepts/topology.md).
+A {{ ydb-short-name }} cluster is fault-tolerant. Temporarily shutting down a node doesn't affect cluster availability. For details, see [{#T}](../../concepts/topology.md).

 {% endnote %}
