YDBDOCS-633: continue restructuring #3919


Merged 9 commits on Apr 22, 2024
Changes from 6 commits
2 changes: 1 addition & 1 deletion ydb/docs/en/core/_includes/fault-tolerance.md
@@ -1,5 +1,5 @@
{% note info %}

- YDB cluster is fault tolerant. Temporarily shutting down a node doesn't affect the cluster availability. For details, see [{#T}](../cluster/topology.md).
+ {{ ydb-short-name }} cluster is fault tolerant. Temporarily shutting down a node doesn't affect the cluster availability. For details, see [{#T}](../concepts/topology.md).

{% endnote %}
@@ -4,7 +4,7 @@ To ensure the required fault tolerance of {{ ydb-short-name }}, configure the [c

## Fault tolerance modes {#fault-tolerance}

- We recommend using the following [fault tolerance modes](../cluster/topology.md) for {{ ydb-short-name }} production installations:
+ We recommend using the following [fault tolerance modes](../concepts/topology.md) for {{ ydb-short-name }} production installations:

* `block-4-2`: For a cluster hosted in a single availability zone.
* `mirror-3-dc`: For a cluster hosted in three availability zones.
@@ -40,7 +40,7 @@ This causes some issues when a cluster's hardware is distributed across the mini

If the number of fail domains in a cluster exceeds the minimum required for creating storage groups by at least one (that is, 9 domains for `block-4-2` and 4 domains in each fail realm for `mirror-3-dc`), then if some hardware fails, the load can be redistributed across all the hardware that is still running.

- The system can work with fail domains of any size. However, if there are few domains and a different number of disks in different domains, the amount of storage groups that you can create will be limited. In this case, some hardware in fail domains that are too large may be underutilized. If the hardware is used in full, significant distortions in domain sizes may make reconfiguration impossible.
+ The system can work with fail domains of any size. However, if there are few domains and a different number of disks in different domains, the number of storage groups that you can create will be limited. In this case, some hardware in fail domains that are too large may be underutilized. If the hardware is used in full, significant distortions in domain sizes may make reconfiguration impossible.

> For example, there are 15 racks in a cluster with `block-4-2` fault tolerance mode. The first of the 15 racks hosts 20 servers and the other 14 racks host 10 servers each. To fully utilize all the 20 servers from the first rack, {{ ydb-short-name }} will create groups so that 1 disk from this largest fail domain is used in each group. As a result, if any other fail domain's hardware is down, the load can't be distributed to the hardware in the first rack.

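To make the rack example above concrete, the following counting sketch estimates how many `block-4-2` groups fit a given set of fail domain sizes. It is an editorial illustration under simplifying assumptions (one disk per server, a pure counting bound), not {{ ydb-short-name }}'s actual placement algorithm.

```python
# A back-of-the-envelope sketch (illustration only, not YDB's real group
# placement logic). With block-4-2, each storage group needs disks in
# 8 distinct fail domains, and one domain contributes at most one disk per
# group, so g groups fit only if the domains jointly supply 8*g disk slots,
# where domain i supplies min(disks_i, g).

def max_groups(domain_sizes: list[int], disks_per_group: int = 8) -> int:
    """Estimate the largest feasible number of storage groups."""
    assert len(domain_sizes) >= disks_per_group, "not enough fail domains"
    best = 0
    for g in range(1, sum(domain_sizes) // disks_per_group + 1):
        if sum(min(size, g) for size in domain_sizes) >= disks_per_group * g:
            best = g
    return best

# The rack example above: one rack with 20 servers, fourteen with 10 each
# (assuming one disk per server). All 160 disks fit into 20 groups, and each
# group takes exactly one disk from the 20-server rack.
print(max_groups([20] + [10] * 14))  # -> 20
```
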
2 changes: 1 addition & 1 deletion ydb/docs/en/core/administration/static-group-move.md
@@ -65,5 +65,5 @@ To move a part of the static group from the `node_id:1` host to the `node_id:10`

1. Update the `config.yaml` configuration files for all the cluster nodes, including dynamic nodes.
1. Use the [rolling-restart](../maintenance/manual/node_restarting.md) procedure to restart all the static cluster nodes.
- 1. Go to the Embedded UI monitoring page and make sure that the VDisk of the static group is visible on the target physical disk and its replication is in progress. For details, see [{#T}](../maintenance/embedded_monitoring/ydb_monitoring.md#static-group).
+ 1. Go to the Embedded UI monitoring page and make sure that the VDisk of the static group is visible on the target physical disk and its replication is in progress. For details, see [{#T}](../reference/embedded-ui/ydb-monitoring.md#static-group).
1. Use the [rolling-restart](../maintenance/manual/node_restarting.md) procedure to restart all the dynamic cluster nodes.
4 changes: 2 additions & 2 deletions ydb/docs/en/core/administration/upgrade.md
@@ -10,9 +10,9 @@ The basic scenario is updating the executable file and restarting each node one
1. Updating and restarting dynamic nodes.

The shutdown and startup process is described on the [Safe restart and shutdown of nodes](../maintenance/manual/node_restarting.md) page.
- You must update {{ ydb-short-name }} nodes one by one and monitor the cluster status after each step in [{{ ydb-short-name }} Monitoring](../maintenance/embedded_monitoring/ydb_monitoring.md): make sure the `Storage` tab has no pools in the `Degraded` status (as shown in the example below). Otherwise, stop the update process.
+ You must update {{ ydb-short-name }} nodes one by one and monitor the cluster status after each step in [{{ ydb-short-name }} Monitoring](../reference/embedded-ui/ydb-monitoring.md): make sure the `Storage` tab has no pools in the `Degraded` status (as shown in the example below). Otherwise, stop the update process.

- ![Monitoring_storage_state](../maintenance/embedded_monitoring/_assets/monitoring_storage_state.png)
+ ![Monitoring_storage_state](../reference/embedded-ui/_assets/monitoring_storage_state.png)

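The `Degraded` check between restarts can also be scripted. The sketch below is illustrative only: the endpoint path and the response shape are assumptions, so verify them against the Embedded UI reference before relying on this.

```python
# degraded_gate.py — a hedged sketch of a "stop if anything is Degraded" gate
# between rolling-restart steps. The endpoint path and response layout are
# assumptions for illustration, not a documented {{ ydb-short-name }} API.
import json
import sys
import urllib.request

MONITORING_URL = "http://localhost:8765/viewer/json/storage"  # hypothetical path

def degraded_pools(url: str) -> list[str]:
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    # Assumed shape: {"StoragePools": [{"Name": ..., "Overall": "Green" | "Degraded" | ...}]}
    return [pool["Name"] for pool in data.get("StoragePools", [])
            if pool.get("Overall") == "Degraded"]

if __name__ == "__main__":
    degraded = degraded_pools(MONITORING_URL)
    if degraded:
        print("Degraded pools, stop the update:", ", ".join(degraded))
        sys.exit(1)
    print("No degraded pools; safe to restart the next node")
```
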
## Version compatibility {#version-compatability}

4 changes: 2 additions & 2 deletions ydb/docs/en/core/changelog-server.md
@@ -23,7 +23,7 @@ Release date: October 12, 2023.
* A new option `PostgreSQL` has been added to the query type selector settings, which is available when the `Enable additional query modes` parameter is enabled. Also, the query history now takes into account the syntax used when executing the query.
* The YQL query template for creating a table has been updated. Added a description of the available parameters.
* Now sorting and filtering for Storage and Nodes tables takes place on the server. To use this functionality, you need to enable the parameter `Offload tables filters and sorting to backend` in the experiments section.
- * Buttons for creating, changing and deleting [topics](https://ydb.tech/ru/docs/concepts/topic) have been added to the context menu.
+ * Buttons for creating, changing and deleting [topics](concepts/topic.md) have been added to the context menu.
* Added sorting by criticality for all issues in the tree in `Healthcheck`.

**Performance:**
@@ -155,7 +155,7 @@ Release date: May 5, 2023. To update to version 23.1, select the [Downloads](dow

* Added [initial table scan](concepts/cdc.md#initial-scan) when creating a CDC changefeed. Now, you can export all the data existing at the time of changefeed creation.
* Added [atomic index replacement](dba/secondary-indexes.md#atomic-index-replacement). Now, you can atomically replace one pre-defined index with another. This operation is absolutely transparent for your application. Indexes are replaced seamlessly, with no downtime.
- * Added the [audit log](cluster/audit-log.md): Event stream including data about all the operations on {{ ydb-short-name }} objects.
+ * Added the [audit log](security/audit-log.md): Event stream including data about all the operations on {{ ydb-short-name }} objects.

**Performance:**

10 changes: 0 additions & 10 deletions ydb/docs/en/core/cluster/index.md

This file was deleted.

17 changes: 0 additions & 17 deletions ydb/docs/en/core/cluster/logs.md

This file was deleted.

37 changes: 0 additions & 37 deletions ydb/docs/en/core/cluster/system-requirements.md

This file was deleted.

27 changes: 0 additions & 27 deletions ydb/docs/en/core/cluster/toc_i.yaml

This file was deleted.

2 changes: 1 addition & 1 deletion ydb/docs/en/core/concepts/auth.md
@@ -54,7 +54,7 @@ Authentication by username and password includes the following steps:

To enable username/password authentication, use `true` in the `enforce_user_token_requirement` key of the cluster's [configuration file](../deploy/configuration/config.md#auth).

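Before restarting nodes with the new setting, it can help to confirm that the flag is actually present in `config.yaml`. In the sketch below, the nesting of the key under `domains_config`/`security_config` is an assumption to verify against the configuration reference.

```python
# check_auth_flag.py — minimal sketch; requires PyYAML (pip install pyyaml).
# The key path below (domains_config -> security_config) is an assumption
# for illustration; verify it against the configuration reference.
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f) or {}

security = config.get("domains_config", {}).get("security_config", {})
print("enforce_user_token_requirement:",
      security.get("enforce_user_token_requirement", False))
```
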
- To learn how to manage roles and users, see [{#T}](../cluster/access.md).
+ To learn how to manage roles and users, see [{#T}](../security/access-management.md).

<!-- ### IAM token refresh API {#token-refresh-api}

2 changes: 2 additions & 0 deletions ydb/docs/en/core/concepts/toc_i.yaml
@@ -17,6 +17,8 @@ items:
items:
- name: Overview
  href: cluster/index.md
+ - name: Topology
+   href: topology.md
- name: General YDB schema
  href: cluster/common_scheme_ydb.md
4 changes: 3 additions & 1 deletion ydb/docs/en/core/dba/toc_p.yaml
@@ -1,4 +1,6 @@
items:
- name: Overview
  href: index.md
- - include: { mode: link, path: toc_i.yaml }
+ - include:
+     mode: link
+     path: toc_i.yaml
8 changes: 4 additions & 4 deletions ydb/docs/en/core/deploy/_includes/index.md
@@ -1,9 +1,9 @@
- # Deploying YDB clusters
+ # Deploying {{ ydb-short-name }} clusters

- This section provides information on deploying and configuring multi-node YDB clusters that serve multiple databases.
+ This section provides information on deploying and configuring multi-node {{ ydb-short-name }} clusters that serve multiple databases.

- * [{#T}](../../cluster/system-requirements.md).
- * [{#T}](../../cluster/topology.md).
+ * [{#T}](../../devops/system-requirements.md).
+ * [{#T}](../../concepts/topology.md).
* [Deployment with Ansible](../../devops/ansible/index.md).
* [Deployment with Kubernetes](../../devops/kubernetes/index.md).
* [Deployment on virtual and physical servers](../manual/deploy-ydb-on-premises.md).
6 changes: 3 additions & 3 deletions ydb/docs/en/core/deploy/configuration/config.md
@@ -1,4 +1,4 @@
- # Cluster configuration
+ # {{ ydb-short-name }} cluster configuration

The cluster configuration is specified in the YAML file passed in the `--yaml-config` parameter when the cluster nodes are run.

@@ -147,7 +147,7 @@ This section defines one or more types of storage pools available in the cluster
- Data encryption (on/off)
- Fault tolerance mode

- The following [fault tolerance modes](../../cluster/topology.md) are available:
+ The following [fault tolerance modes](../../concepts/topology.md) are available:

| Mode | Description |
--- | ---
@@ -389,7 +389,7 @@ You can set up your actor system either [automatically](#autoconfig) or [manuall

Automatic configuring adapts to the current system workload. It is recommended in most cases.

- You might opt for manual configuring when a certain pool in your actor system is overwhelmed and undermines the overall database performance. You can track the workload on your pools on the [Embedded UI monitoring page](../../maintenance/embedded_monitoring/ydb_monitoring.md#node_list_page).
+ You might opt for manual configuring when a certain pool in your actor system is overwhelmed and undermines the overall database performance. You can track the workload on your pools on the [Embedded UI monitoring page](../../reference/embedded-ui/ydb-monitoring.md#node_list_page).

### Automatic configuring {#autoconfig}

10 changes: 5 additions & 5 deletions ydb/docs/en/core/deploy/manual/deploy-ydb-on-premises.md
@@ -6,7 +6,7 @@ This document describes how to deploy a multi-tenant {{ ydb-short-name }} cluste

### Prerequisites {#requirements}

- Review the [system requirements](../../cluster/system-requirements.md) and the [cluster topology](../../cluster/topology.md).
+ Review the [system requirements](../../devops/system-requirements.md) and the [cluster topology](../../concepts/topology.md).

Make sure you have SSH access to all servers. This is required to install artifacts and run the {{ ydb-short-name }} executable.

@@ -17,7 +17,7 @@ The network configuration must allow TCP connections on the following ports (the
* 19001, 19002: Interconnect for intra-cluster node interaction
* 8765, 8766: HTTP interface of {{ ydb-short-name }} Embedded UI.

- Distinct ports are necessary for GRPC, Interconnect and HTTP interface of each dynamic node when hosting multiple dynamic nodes on a single server.
+ Distinct ports are necessary for gRPC, Interconnect and HTTP interface of each dynamic node when hosting multiple dynamic nodes on a single server.

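A quick way to verify the firewall configuration from another host is to probe the listed ports. The sketch below uses only the ports visible above plus a placeholder host name; extend the list with the gRPC ports of your setup.

```python
# check_ports.py — probe the TCP ports listed above on a cluster node.
# "ydb-node-1" is a placeholder host name; extend PORTS with the gRPC
# ports of your setup if they differ from the defaults.
import socket

HOST = "ydb-node-1"
PORTS = [19001, 19002, 8765, 8766]  # Interconnect and Embedded UI defaults

def is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS:
    print(f"{HOST}:{port}", "open" if is_open(HOST, port) else "unreachable")
```
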
Make sure that the system clocks running on all the cluster's servers are synced by `ntpd` or `chrony`. We recommend using the same time source for all servers in the cluster to maintain consistent leap seconds processing.

@@ -34,7 +34,7 @@ Run each static node (data node) on a separate server. Both static and dynamic n

{% endnote %}

- For more information about hardware requirements, see [{#T}](../../cluster/system-requirements.md).
+ For more information about hardware requirements, see [{#T}](../../devops/system-requirements.md).

### Preparing TLS keys and certificates {#tls-certificates}

@@ -403,7 +403,7 @@ Run additional dynamic nodes on other servers to ensure database scalability and

If authentication mode is enabled in the cluster configuration file, initial account setup must be done before working with the {{ ydb-short-name }} cluster.

- The initial installation of the {{ ydb-short-name }} cluster automatically creates a `root` account with a blank password, as well as a standard set of user groups described in the [Access management](../../cluster/access.md) section.
+ The initial installation of the {{ ydb-short-name }} cluster automatically creates a `root` account with a blank password, as well as a standard set of user groups described in the [Access management](../../security/access-management.md) section.

To perform initial account setup in the created {{ ydb-short-name }} cluster, run the following operations:

@@ -455,7 +455,7 @@ To check access to the {{ ydb-short-name }} built-in web interface, open in the

In the web browser, add the certificate authority that issued the {{ ydb-short-name }} cluster's certificates to the trusted list. Otherwise, you will see a warning about an untrusted certificate.

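The same trust check can be done programmatically by pinning the cluster's CA instead of the system store. In the sketch below, the host name and CA file path are placeholders.

```python
# check_ui_tls.py — open the Embedded UI over HTTPS, trusting only the CA
# that issued the cluster's certificates. Host and CA path are placeholders.
import ssl
import urllib.request
from urllib.error import HTTPError

ctx = ssl.create_default_context(cafile="ca.crt")  # the cluster's CA bundle
url = "https://ydb-node-1:8765"

try:
    with urllib.request.urlopen(url, context=ctx, timeout=10) as resp:
        print(resp.status, resp.reason)
except HTTPError as err:
    # An HTTP-level error (e.g. 401 when authentication is enabled) still
    # means the TLS handshake against the pinned CA succeeded.
    print(err.code, err.reason)
```
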
- If authentication is enabled in the cluster, the web browser should prompt you for a login and password. Enter your credentials, and you'll see the built-in interface welcome page. The user interface and its features are described in [{#T}](../../maintenance/embedded_monitoring/index.md).
+ If authentication is enabled in the cluster, the web browser should prompt you for a login and password. Enter your credentials, and you'll see the built-in interface welcome page. The user interface and its features are described in [{#T}](../../reference/embedded-ui/index.md).

{% note info %}

6 changes: 0 additions & 6 deletions ydb/docs/en/core/deploy/toc_p.yaml
@@ -1,10 +1,4 @@
items:
- name: Overview
  href: index.md
- - name: System requirements and recommendations
-   href: ../cluster/system-requirements.md
- - name: Logging
-   href: ../cluster/logs.md
- - name: Topology
-   href: ../cluster/topology.md
- include: { mode: link, path: toc_i.yaml }