diff --git a/logging/log_collection_forwarding/cluster-logging-collector.adoc b/logging/log_collection_forwarding/cluster-logging-collector.adoc
index 1b02bd1da94a..3bc9ec8295a1 100644
--- a/logging/log_collection_forwarding/cluster-logging-collector.adoc
+++ b/logging/log_collection_forwarding/cluster-logging-collector.adoc
@@ -20,10 +20,18 @@ include::modules/cluster-logging-collector-pod-location.adoc[leveloffset=+1]
 
 include::modules/cluster-logging-collector-limits.adoc[leveloffset=+1]
 
-//include::modules/log-collector-rsyslog-server.adoc[leveloffset=+1]
-// uncomment for 5.9 release
+[id="cluster-logging-collector-input-receivers"]
+== Configuring input receivers
+
+The {clo} deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver.
+The service name is generated based on the following:
 
-include::modules/log-collector-http-server.adoc[leveloffset=+1]
+* For multi log forwarder `ClusterLogForwarder` CR deployments, the service name is in the format `<ClusterLogForwarder_CR_name>-<input_name>`. For example, `example-http-receiver`.
+* For legacy `ClusterLogForwarder` CR deployments, meaning those named `instance` and located in the `openshift-logging` namespace, the service name is in the format `collector-<input_name>`. For example, `collector-http-receiver`.
+
+//include::modules/log-collector-rsyslog-server.adoc[leveloffset=+2]
+// uncomment for 5.9 release
+include::modules/log-collector-http-server.adoc[leveloffset=+2]
 
 [role="_additional-resources"]
 .Additional resources
diff --git a/modules/log-collector-http-server.adoc b/modules/log-collector-http-server.adoc
index ad4f37accfa7..8bdb3720a87a 100644
--- a/modules/log-collector-http-server.adoc
+++ b/modules/log-collector-http-server.adoc
@@ -19,6 +19,8 @@ You can configure your log collector to listen for HTTP connections and receive
 . Modify the `ClusterLogForwarder` CR to add configuration for the `http` receiver input:
 +
+--
+.Example `ClusterLogForwarder` CR if you are using a multi log forwarder deployment
 [source,yaml]
 ----
 apiVersion: logging.openshift.io/v1beta1
 kind: ClusterLogForwarder
@@ -42,11 +44,42 @@ spec:
 ----
 <1> Specify a name for your input receiver.
 <2> Specify the input receiver type as `http`.
-<3> Currently, only the the `kube-apiserver` webhook format is supported for `http` input receivers.
+<3> Currently, only the `kube-apiserver` webhook format is supported for `http` input receivers.
 <4> Optional: Specify the port that the input receiver listens on. This must be a value between `1024` and `65535`. The default value is `8443` if this is not specified.
 <5> Configure a pipeline for your input receiver.
+--
++
+--
+.Example `ClusterLogForwarder` CR if you are using a legacy deployment
+[source,yaml]
+----
+apiVersion: logging.openshift.io/v1
+kind: ClusterLogForwarder
+metadata:
+  name: instance
+  namespace: openshift-logging
+spec:
+  inputs:
+  - name: http-receiver # <1>
+    receiver:
+      type: http # <2>
+      http:
+        format: kubeAPIAudit # <3>
+        port: 8443 # <4>
+  pipelines: # <5>
+  - inputRefs:
+    - http-receiver
+    name: http-pipeline
+# ...
+----
+<1> Specify a name for your input receiver.
+<2> Specify the input receiver type as `http`.
+<3> Currently, only the `kube-apiserver` webhook format is supported for `http` input receivers.
+<4> Optional: Specify the port that the input receiver listens on. This must be a value between `1024` and `65535`. The default value is `8443` if this is not specified.
+<5> Configure a pipeline for your input receiver.
+--
 
-. Apply the changes to the `ClusterLogForwarder` CR:
+. Apply the changes to the `ClusterLogForwarder` CR by running the following command:
 +
 [source,terminal]
 ----
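
After the CR change above is applied, the generated receiver service can be exercised directly. The following is a minimal sketch, not part of the patch: it assumes a legacy deployment, so the service name `collector-http-receiver` in the `openshift-logging` namespace and the default port `8443`, and it sends an empty kube-apiserver webhook-style `EventList` payload. The request path, the `-k` TLS handling, and the payload are illustrative assumptions; run the `curl` step from a pod inside the cluster so that the service DNS name resolves.

[source,terminal]
----
# Confirm that the input receiver service was created by the Operator.
$ oc -n openshift-logging get service collector-http-receiver

# Send a test event in kube-apiserver webhook (audit.k8s.io/v1 EventList) form.
# The path, TLS verification, and payload shown here are assumptions for this sketch.
$ curl -k -X POST \
    -H "Content-Type: application/json" \
    -d '{"apiVersion":"audit.k8s.io/v1","kind":"EventList","items":[]}' \
    https://collector-http-receiver.openshift-logging.svc:8443/
----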