Preparing documentation #2

Open · wants to merge 4 commits into base: Logging-6.0-PoC

Changes from all commits
1 change: 1 addition & 0 deletions .komment/.gitkeep
@@ -0,0 +1 @@
1716985331045
5 changes: 5 additions & 0 deletions _topic_maps/_topic_map.yml
@@ -2709,6 +2709,11 @@ Topics:
File: logging-5-8-release-notes
- Name: Logging 5.7
File: logging-5-7-release-notes
- Name: Logging 6.0
Dir: logging-6.0
Topics:
- Name: Introducing Logging 6.0
File: logging-poc
- Name: Support
File: cluster-logging-support
- Name: Troubleshooting logging
28 changes: 28 additions & 0 deletions modules/logging-6x-about.adoc
@@ -0,0 +1,28 @@
// Module included in the following assemblies:
//


:_mod-docs-content-type: CONCEPT
[id="logging-6x-about_{context}"]


= What is OpenShift Logging?

OpenShift Logging provides a comprehensive log management solution integrated into your OpenShift clusters. It uses the following components:

* **Loki:** A horizontally scalable, highly available log aggregation system optimized for efficient storage and retrieval of log data in dynamic OpenShift environments.
* **Vector:** A lightweight, high-performance log forwarding agent that connects your application and infrastructure logs to Loki, offering a flexible and customizable way to collect, process, and route log data.

== How OpenShift Logging Works

1. **Log Collection:** Vector agents, deployed on each node in your cluster, collect logs from various sources, including containers, system components, and applications.
2. **Log Processing (Optional):** Vector can filter, enrich, and transform log data before forwarding it to Loki.
3. **Log Aggregation:** Loki receives the processed log data from Vector and stores it efficiently for later retrieval.
4. **Log Querying & Analysis:** Use LogQL, Loki's query language, to search, filter, and analyze logs and gain valuable insights into your OpenShift environment (see the example query after this list).
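
For example, a LogQL query similar to the following returns application log lines that contain the string "error" for a given namespace. This is only an illustrative sketch; the exact stream label names (such as `log_type` and `kubernetes_namespace_name`) depend on how your collector labels log streams.

[source,text]
----
{log_type="application", kubernetes_namespace_name="my-namespace"} |= "error"
----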

== Visualizing Logs in OpenShift

OpenShift Logging provides multiple ways to visualize and explore your log data:

* **OpenShift Web Console:** The integrated Logs page offers a user-friendly interface for searching, filtering, and viewing logs directly within the OpenShift console.
* **Grafana:** OpenShift Logging integrates seamlessly with Grafana, enabling you to create rich, customizable dashboards to visualize and correlate log data with metrics and other observability data.
15 changes: 15 additions & 0 deletions modules/logging-6x-loki-architecture.adoc
@@ -0,0 +1,15 @@
// Module included in the following assemblies:
//


:_mod-docs-content-type: CONCEPT
[id="logging-6x-loki-architecture_{context}"]
= Loki Architecture: Understanding the Components

Loki's architecture comprises several key components, each playing a specific role in log processing and storage. On OpenShift, these components run as separate pods, as shown in the example after this list:

* **Distributor:** Receives incoming logs from Vector agents and distributes them across multiple Ingesters.
* **Ingester:** Responsible for building chunks of log data and creating an index for efficient retrieval.
* **Querier:** Processes incoming log queries and fetches relevant log data from the storage backend.
* **Query Frontend:** Manages and coordinates incoming queries, distributes the workload across Queriers, and returns the results to the user.
* **Storage Backend:** Stores the log chunks and indexes, often using object storage services like S3 or GCS.
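
On an OpenShift cluster, these components run in the `openshift-logging` namespace. As a quick illustration, assuming a LokiStack named `logging-loki`, you can list the component pods with:

[source,bash]
----
$ oc -n openshift-logging get pods | grep logging-loki
----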
33 changes: 33 additions & 0 deletions modules/logging-6x-v-5x.adoc
@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//


:_mod-docs-content-type: CONCEPT
[id="logging-6x-v-5x_{context}"]
= OpenShift Logging 6.0: Key Differences from Previous Versions (5.0-5.9)

OpenShift Logging 6.0 introduces several significant changes compared to previous versions, streamlining the logging architecture and enhancing the overall logging experience.

== Simplified Architecture

* **Removal of Elasticsearch and Fluentd:** OpenShift Logging 6.0 eliminates Elasticsearch and Fluentd as default components, streamlining the architecture for improved performance and reduced complexity.
* **Focus on Loki and Vector:** The logging stack now centers on Loki for log aggregation and Vector for log collection and forwarding, providing a more unified and efficient solution (a configuration sketch follows this list).
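
The following `ClusterLogging` resource is a minimal sketch of this simplified stack, selecting `lokistack` as the log store and `vector` as the collector. It follows the same `ClusterLogging` schema used in the migration procedure later in this document; values such as the LokiStack name `logging-loki` are illustrative assumptions.

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack          # Loki-based log store
    lokistack:
      name: logging-loki     # name of the LokiStack custom resource
  collection:
    logs:
      type: vector           # Vector collector instead of Fluentd
      vector: {}
----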

== Enhanced Scalability and Performance

* **Horizontal Scalability:** Loki's architecture allows for seamless horizontal scaling, ensuring that the logging system can handle the growing demands of large-scale OpenShift deployments (see the resizing example after this list).
* **Improved Query Performance:** Loki's indexing and query mechanisms are optimized for speed, enabling faster log retrieval and analysis.
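
For example, to take advantage of this horizontal scalability you can move an existing LokiStack deployment to a larger size. The following is a sketch that assumes a LokiStack named `logging-loki` and enough cluster resources for the `1x.medium` size:

[source,bash]
----
$ oc -n openshift-logging patch lokistack/logging-loki \
    --type=merge -p '{"spec":{"size":"1x.medium"}}'
----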

== Enhanced User Experience

* **Streamlined Configuration:** The removal of Elasticsearch and Fluentd simplifies the configuration process, making it easier to set up and manage your logging infrastructure.
* **Improved Integration:** Loki and Vector offer tighter integration with OpenShift, providing a more seamless and native logging experience.

== Additional Enhancements

* **Expanded Log Sources:** Vector's flexibility allows you to collect logs from an even wider range of sources, providing greater visibility into your OpenShift environment.
* **Enhanced Log Processing:** Vector's processing capabilities have been expanded, enabling you to perform more complex log transformations and filtering (see the illustrative configuration after this list).
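
As an illustration of the kind of processing involved, the following snippet of upstream Vector configuration drops debug-level records before they are forwarded. In OpenShift Logging you do not write this configuration by hand; the Operator generates the collector configuration, so treat this purely as a sketch of the underlying capability, with all names chosen for illustration.

[source,yaml]
----
# Illustrative upstream Vector configuration; not managed directly in OpenShift Logging.
sources:
  app_logs:
    type: kubernetes_logs            # collect container logs from the node

transforms:
  drop_debug:
    type: filter                     # keep only events that are not debug level
    inputs: ["app_logs"]
    condition: '.level != "debug"'
----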

== Transitioning to OpenShift Logging 6.0

If you are upgrading from a previous version of OpenShift Logging, the transition to 6.0 involves migrating your existing log data and reconfiguring your logging pipeline to use Loki and Vector.
186 changes: 186 additions & 0 deletions modules/logging-elastic-to-loki-migration.adoc
@@ -0,0 +1,186 @@
// Module included in the following assemblies:
//


:_mod-docs-content-type: PROCEDURE
[id="logging-elastic-to-loki-migration_{context}"]
= Migrating the Default Log Store from Elasticsearch to Loki in OpenShift

This guide describes how to switch the OpenShift Logging storage service from Elasticsearch to LokiStack. It focuses on log forwarding, not data migration. After following these steps, old logs will remain in Elasticsearch (accessible via Kibana), while new logs will go to LokiStack (visible in the OpenShift Console).

.Prerequisites

* Red Hat OpenShift Logging Operator (v5.5.5+)
* OpenShift Elasticsearch Operator (v5.5.5+)
* Red Hat Loki Operator (v5.5.5+)
* Sufficient resources on target nodes to run both Elasticsearch and LokiStack (see LokiStack Deployment Sizing Table in the documentation).
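
To confirm that the required Operators are installed at compatible versions, you can list the installed ClusterServiceVersions across all namespaces; the exact namespaces (for example, `openshift-operators-redhat` for the Loki Operator) depend on how the Operators were installed:

[source,bash]
----
$ oc get csv -A | grep -Ei 'logging|loki|elasticsearch'
----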

== Installing LokiStack

. **Install the Loki Operator:** Follow the official guide to install the Loki Operator by using the OpenShift web console.

. **Create a Secret for Loki Object Storage:** Create a secret for Loki object storage (for example, AWS S3). Refer to the documentation for other object storage types.
+
[source,bash]
----
$ cat << EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
name: logging-loki-s3
namespace: openshift-logging
data:
access_key_id: $(echo "PUT_S3_ACCESS_KEY_ID_HERE" | base64 -w0)
access_key_secret: $(echo "PUT_S3_ACCESS_KEY_SECRET_HERE" | base64 -w0)
bucketnames: $(echo "s3-bucket-name" | base64 -w0)
endpoint: $(echo "https://s3.eu-central-1.amazonaws.com" | base64 -w0)
region: $(echo "eu-central-1" | base64 -w0)
EOF
----
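+
Optionally, verify that the secret exists and contains the expected keys before deploying LokiStack:
+
[source,bash]
----
$ oc -n openshift-logging describe secret logging-loki-s3
----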


. **Deploy the LokiStack Custom Resource (CR):**
+
[NOTE]
====
Change the `spec.size` if needed.
====
+
[source,bash]
----
$ cat << EOF | oc create -f -
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki
namespace: openshift-logging
spec:
size: 1x.small
storage:
schemas:
- version: v12
effectiveDate: '2022-06-01'
secret:
name: logging-loki-s3
type: s3
storageClassName: gp2
tenants:
mode: openshift-logging
EOF
----
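+
Before you continue, you can optionally wait until the LokiStack components are running. Assuming the LokiStack is named `logging-loki`, a quick check is:
+
[source,bash]
----
$ oc -n openshift-logging get lokistack logging-loki
$ oc -n openshift-logging get pods | grep logging-loki
----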

== Disconnecting Elasticsearch and Kibana

To keep Elasticsearch and Kibana running while transitioning:

. **Set `ClusterLogging` to Unmanaged:**
+
[source,bash]
----
$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge
----

. **Remove Owner References:** Remove the `ClusterLogging` owner references from the Elasticsearch and Kibana resources:
+
[source,bash]
----
$ oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge
----
+
[source,bash]
----
$ oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge
----

. **Back Up Elasticsearch and Kibana Resources:** Use the link:https://github.com/mikefarah/yq[yq utility] to back up these resources so that they can be restored if they are accidentally deleted.
+
For Elasticsearch:
+
[source,bash]
----
$ oc -n openshift-logging get elasticsearch elasticsearch -o yaml \
| yq 'del(.metadata.resourceVersion) | del(.metadata.uid)' \
| yq 'del(.metadata.generation) | del(.metadata.creationTimestamp)' \
| yq 'del(.metadata.selfLink) | del(.status)' > /tmp/cr-elasticsearch.yaml
----
+
For Kibana:
+
[source,bash]
----
$ oc -n openshift-logging get kibana kibana -o yaml \
| yq 'del(.metadata.resourceVersion) | del(.metadata.uid)' \
| yq 'del(.metadata.generation) | del(.metadata.creationTimestamp)' \
| yq 'del(.metadata.selfLink) | del(.status)' > /tmp/cr-kibana.yaml
----
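+
A quick sanity check that both backups were written and contain the expected resource kinds:
+
[source,bash]
----
$ grep '^kind:' /tmp/cr-elasticsearch.yaml /tmp/cr-kibana.yaml
----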

== Switching to LokiStack

. **Switch Log Storage to LokiStack:** The following manifest applies several changes to the `ClusterLogging` resource:
+
* Re-instates the management state to `Managed`.
* Switches the `logStore` spec from `elasticsearch` to `lokistack`, which restarts the collector pods so that they begin forwarding logs to LokiStack.
* Removes the `visualization` spec, prompting the cluster-logging-operator to install the `logging-view-plugin` for viewing LokiStack logs in the OpenShift Console.
* If your collection type is not `fluentd`, replace it with `vector` in the manifest below.
+
[source,bash]
----
$ cat << EOF | oc replace -f -
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
name: instance
namespace: openshift-logging
spec:
managementState: Managed
logStore:
type: lokistack
lokistack:
name: logging-loki
collection:
logs:
type: fluentd
fluentd: {}
visualization:
type: kibana
kibana:
replicas: 1
EOF
----
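+
After you apply the manifest, the collector pods restart and begin forwarding to LokiStack. You can watch the rollout in the `openshift-logging` namespace; the exact pod names depend on your collector type:
+
[source,bash]
----
$ oc -n openshift-logging get pods -w
----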

. **Re-instantiate the Kibana Resource:**
+
In the previous step, removing the `visualization` field prompted the Operator to remove the `Kibana` resource. Re-instantiate the `Kibana` resource from the backup created earlier:
+
[source,bash]
----
$ oc -n openshift-logging apply -f /tmp/cr-kibana.yaml
----

. **Enable the Console View Plugin:**
+
Enable the console view plugin to view logs directly in the OpenShift Console (Observe > Logs):
+
[source,bash]
----
$ oc patch consoles.operator.openshift.io cluster --type=merge --patch '{ "spec": { "plugins": ["logging-view-plugin"] } }'
----
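+
Note that this merge patch replaces the entire `spec.plugins` list; if other console plugins are already enabled, include them in the array as well. You can check the currently enabled plugins with:
+
[source,bash]
----
$ oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'
----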

== Deleting the Elasticsearch Stack

Once the retention period for logs stored in Elasticsearch expires and no more logs are visible in Kibana, remove the old stack to release resources.

=== Step 1: Delete Elasticsearch and Kibana Resources

[source,bash]
----
$ oc -n openshift-logging delete kibana/kibana elasticsearch/elasticsearch
----

=== Step 2: Delete the PVCs Used by Elasticsearch Instances

[source,bash]
----
$ oc delete -n openshift-logging pvc -l logging-cluster=elasticsearch
----
15 changes: 15 additions & 0 deletions observability/logging/logging-6.0/logging-poc.adoc
@@ -0,0 +1,15 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="logging-6x"]
= Logging 6.0
:context: logging-6x

toc::[]

include::modules/logging-6x-v-5x.adoc[leveloffset=+1]

include::modules/logging-elastic-to-loki-migration.adoc[leveloffset=+1]

include::modules/logging-6x-about.adoc[leveloffset=+1]

include::modules/logging-6x-loki-architecture.adoc[leveloffset=+1]