
Commit c709ae5

Add descriptions (#755)
* Add descriptions
* Add pod-overrides info
1 parent 40509b1 commit c709ae5


9 files changed, +26 -18 lines changed


docs/modules/kafka/pages/getting_started/first_steps.adoc

Lines changed: 1 addition & 0 deletions
@@ -1,4 +1,5 @@
 = First steps
+:description: Deploy and verify a Kafka cluster on Kubernetes with Stackable Operators, including ZooKeeper setup and data testing using kcat.
 
 After going through the xref:getting_started/installation.adoc[] section and having installed all the operators, you will now deploy a Kafka cluster and the required dependencies. Afterwards you can <<_verify_that_it_works, verify that it works>> by producing test data into a topic and consuming it.
 
docs/modules/kafka/pages/getting_started/index.adoc

Lines changed: 3 additions & 1 deletion
@@ -1,6 +1,8 @@
 = Getting started
+:description: Start with Apache Kafka using Stackable Operator: Install, set up Kafka, and manage topics in a Kubernetes cluster.
 
-This guide will get you started with Apache Kafka using the Stackable Operator. It will guide you through the installation of the Operator and its dependencies, setting up your first Kafka instance and create, write to and read from a topic.
+This guide will get you started with Apache Kafka using the Stackable Operator.
+It will guide you through the installation of the Operator and its dependencies, setting up your first Kafka instance and create, write to and read from a topic.
 
 == Prerequisites
 
docs/modules/kafka/pages/getting_started/installation.adoc

Lines changed: 1 addition & 0 deletions
@@ -1,4 +1,5 @@
 = Installation
+:description: Install Stackable Operator for Apache Kafka using stackablectl or Helm, including dependencies like ZooKeeper and required operators for Kubernetes.
 
 On this page you will install the Stackable Operator for Apache Kafka and operators for its dependencies - ZooKeeper -
 as well as the commons, secret and listener operator which are required by all Stackable Operators.

docs/modules/kafka/pages/index.adoc

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 = Stackable Operator for Apache Kafka
-:description: The Stackable operator for Apache Superset is a Kubernetes operator that can manage Apache Kafka clusters. Learn about its features, resources, dependencies and demos, and see the list of supported Kafka versions.
+:description: Deploy and manage Apache Kafka clusters on Kubernetes using Stackable Operator.
 :keywords: Stackable operator, Apache Kafka, Kubernetes, operator, SQL, engineer, broker, big data, CRD, StatefulSet, ConfigMap, Service, Druid, ZooKeeper, NiFi, S3, demo, version
 :kafka: https://kafka.apache.org/
 :github: https://github.com/stackabletech/kafka-operator/

docs/modules/kafka/pages/usage-guide/configuration-environment-overrides.adoc

Lines changed: 5 additions & 0 deletions
@@ -86,3 +86,8 @@ servers:
 default:
 replicas: 1
 ----
+
+== Pod overrides
+
+The Kafka operator also supports Pod overrides, allowing you to override any property that you can set on a Kubernetes Pod.
+Read the xref:concepts:overrides.adoc#pod-overrides[Pod overrides documentation] to learn more about this feature.
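
For orientation, a minimal sketch of what such an override on the `servers` role could look like. The placement under `podOverrides` follows the general Stackable overrides pattern from the linked page, and the label key and value are made-up examples, not part of this commit:

[source,yaml]
----
servers:
  podOverrides:
    metadata:
      labels:
        example.com/team: data-platform  # hypothetical label applied to every server Pod
----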

docs/modules/kafka/pages/usage-guide/logging.adoc

Lines changed: 3 additions & 4 deletions
@@ -1,7 +1,7 @@
 = Log aggregation
+:description: The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent
 
-The logs can be forwarded to a Vector log aggregator by providing a discovery
-ConfigMap for the aggregator and by enabling the log agent:
+The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent:
 
 [source,yaml]
 ----
@@ -14,5 +14,4 @@ spec:
 enableVectorAgent: true
 ----
 
-Further information on how to configure logging, can be found in
-xref:concepts:logging.adoc[].
+Further information on how to configure logging, can be found in xref:concepts:logging.adoc[].
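
The yaml example itself is only partially visible in this diff (just the `enableVectorAgent: true` context line). Here is a rough, hedged sketch of the shape such a configuration typically takes with Stackable operators; the ConfigMap name and the exact nesting are assumptions, not quoted from the file:

[source,yaml]
----
spec:
  clusterConfig:
    vectorAggregatorConfigMapName: vector-aggregator-discovery  # assumed name of the aggregator's discovery ConfigMap
  brokers:
    config:
      logging:
        enableVectorAgent: true  # enables the Vector log agent on the broker Pods
----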

docs/modules/kafka/pages/usage-guide/monitoring.adoc

Lines changed: 3 additions & 2 deletions
@@ -1,4 +1,5 @@
 = Monitoring
+:description: The managed Kafka instances are automatically configured to export Prometheus metrics.
 
-The managed Kafka instances are automatically configured to export Prometheus metrics. See
-xref:operators:monitoring.adoc[] for more details.
+The managed Kafka instances are automatically configured to export Prometheus metrics.
+See xref:operators:monitoring.adoc[] for more details.

docs/modules/kafka/pages/usage-guide/security.adoc

Lines changed: 8 additions & 10 deletions
@@ -1,9 +1,10 @@
 = Security
+:description: Configure TLS encryption, authentication, and Open Policy Agent (OPA) authorization for Kafka with the Stackable Operator.
 
 == Encryption
 
-The internal and client communication can be encrypted TLS. This requires the xref:secret-operator:index.adoc[Secret
-Operator] to be present in order to provide certificates. The utilized certificates can be changed in a top-level config.
+The internal and client communication can be encrypted TLS. This requires the xref:secret-operator:index.adoc[Secret Operator] to be present in order to provide certificates.
+The utilized certificates can be changed in a top-level config.
 
 [source,yaml]
 ----
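
The config example that follows those lines in the file is cut off by this diff. Here is a hedged sketch of what such a top-level TLS config can look like; the field paths follow the `spec.clusterConfig.tls.serverSecretClass` and `spec.clusterConfig.tls.internalSecretClass` options mentioned further down, while the secret class names are purely illustrative:

[source,yaml]
----
spec:
  clusterConfig:
    tls:
      serverSecretClass: tls  # SecretClass used for certificates on client-facing listeners (illustrative name)
      internalSecretClass: kafka-internal-tls  # SecretClass used for internal broker-to-broker traffic (illustrative name)
----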
@@ -47,14 +48,12 @@ spec:
 autoGenerate: true
 ----
 
-You can create your own secrets and reference them e.g. in the `spec.clusterConfig.tls.serverSecretClass` or
-`spec.clusterConfig.tls.internalSecretClass` to use different certificates.
+You can create your own secrets and reference them e.g. in the `spec.clusterConfig.tls.serverSecretClass` or `spec.clusterConfig.tls.internalSecretClass` to use different certificates.
 
 == Authentication
 
-The internal or broker-to-broker communication is authenticated via TLS. In order to enforce TLS authentication for
-client-to-server communication, you can set an `AuthenticationClass` reference in the custom resource provided by the
-xref:commons-operator:index.adoc[Commons Operator].
+The internal or broker-to-broker communication is authenticated via TLS.
+In order to enforce TLS authentication for client-to-server communication, you can set an `AuthenticationClass` reference in the custom resource provided by the xref:commons-operator:index.adoc[Commons Operator].
 
 [source,yaml]
 ----
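
The authentication example block is likewise elided here. As a hedged sketch of how an `AuthenticationClass` reference is commonly wired into a Stackable Kafka cluster; the class name is a placeholder and the exact nesting is an assumption, not taken from this diff:

[source,yaml]
----
spec:
  clusterConfig:
    authentication:
      - authenticationClass: kafka-client-tls  # placeholder name of an AuthenticationClass resource
----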
@@ -105,9 +104,8 @@ spec:
 
 == [[authorization]]Authorization
 
-If you wish to include integration with xref:opa:index.adoc[Open Policy Agent] and already have an OPA cluster, then you
-can include an `opa` field pointing to the OPA cluster discovery `ConfigMap` and the required package. The package is
-optional and will default to the `metadata.name` field:
+If you wish to include integration with xref:opa:index.adoc[Open Policy Agent] and already have an OPA cluster, then you can include an `opa` field pointing to the OPA cluster discovery `ConfigMap` and the required package.
+The package is optional and will default to the `metadata.name` field:
 
 [source,yaml]
 ----
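
The yaml block that follows in the file is not shown in this diff. As a hedged sketch, an `opa` reference with a discovery ConfigMap and a package could look roughly like this; the ConfigMap name, the package name, and the `clusterConfig.authorization` nesting are assumptions for illustration:

[source,yaml]
----
spec:
  clusterConfig:
    authorization:
      opa:
        configMapName: simple-opa  # discovery ConfigMap of the existing OPA cluster (assumed name)
        package: kafka  # optional; falls back to the metadata.name field when omitted
----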

docs/modules/kafka/pages/usage-guide/storage-resources.adoc

Lines changed: 1 addition & 0 deletions
@@ -1,4 +1,5 @@
 = Storage and resource configuration
+:description: Configure storage and resource allocation for Kafka brokers using Stackable Operator, including PersistentVolumeClaims, CPU, memory, and storage defaults.
 
 == Storage for data volumes
 
0 commit comments
