
Commit 61821df

Fixed a bunch of refs
1 parent 4053e67 commit 61821df

7 files changed: +26 -26 lines changed

modules/concepts/pages/opa.adoc

Lines changed: 5 additions & 5 deletions
@@ -15,7 +15,7 @@ Policy requests are made to a REST API, which allows easy requests from microser
 
 == How it works
 // How it is deployed
-OPA is run by the xref:opa::index.adoc[Stackable OPA operator]. OPA is deployed with the OpaCluster resource, from which the operator creates a DaemonSet to run an OPA instance on every node of the cluster. Because of this, every Pod making policy requests will always make the request locally, minimizing latency and network traffic.
+OPA is run by the xref:opa:index.adoc[Stackable OPA operator]. OPA is deployed with the OpaCluster resource, from which the operator creates a DaemonSet to run an OPA instance on every node of the cluster. Because of this, every Pod making policy requests will always make the request locally, minimizing latency and network traffic.
 
 === Define policies
 
@@ -79,10 +79,10 @@ The automatic connection is facilitated by the xref:service_discovery.adoc[servi
 
 == Further reading
 
-Read more about the xref:opa::index.adoc[]. Read more about product integration with OPA for these products:
+Read more about the xref:opa:index.adoc[]. Read more about product integration with OPA for these products:
 
-* xref:trino:usage_guide:security.adoc#_authorization[Trino]
-* xref:kafka::usage.adoc[Kafka]
-* xref:druid::usage.adoc#_using_open_policy_agent_opa_for_authorization[Druid]
+* xref:trino:usage_guide/security.adoc#_authorization[Trino]
+* xref:kafka:usage.adoc[Kafka]
+* xref:druid:usage.adoc#_using_open_policy_agent_opa_for_authorization[Druid]
 
 You can also have a look at the xref:contributor:opa_configuration.adoc[implementation guidelines for OPA authorizers] or learn more about the xref:service_discovery.adoc[service discovery mechanism] used across the platform.
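
Note: the paragraph changed above describes deploying OPA through an OpaCluster resource that the operator expands into a DaemonSet. As a rough orientation only, a minimal manifest along those lines might look like the sketch below; the apiVersion, product version and role layout are assumptions based on common Stackable CRD conventions, not taken from the docs in this diff.

[source,yaml]
----
# Hypothetical minimal OpaCluster; the operator turns this into a DaemonSet
# so one OPA instance runs on every node and policy requests stay local.
apiVersion: opa.stackable.tech/v1alpha1   # assumed group/version
kind: OpaCluster
metadata:
  name: simple-opa                        # illustrative name
spec:
  image:
    productVersion: 0.45.0                # assumed available OPA version
  servers:
    roleGroups:
      default: {}
----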

modules/concepts/pages/pvc.adoc

Lines changed: 2 additions & 2 deletions
@@ -81,10 +81,10 @@ Managed Kubernetes clusters will normally have a default storage implementation
 == Operator usage
 
 === Airflow
-The xref:airflow::index.adoc[Airflow operator] can use a xref:airflow::usage.adoc#_via_persistentvolumeclaim[PersistentVolumeClaim for externally-loaded airflow jobs], known as Direct Acyclic Graphs (DAGs), as shown in this https://github.com/stackabletech/airflow-operator/blob/main/examples/simple-airflow-cluster-dags-pvc.yaml[example]. Airflow expects that all components (webserver, scheduler and workers) have access to the DAG folder, so the PersistentVolumeClaim either has to be accessible with the `ReadWriteMany` access mode or node selection should be declared to ensure all components run on the same node.
+The xref:airflow:index.adoc[Airflow operator] can use a xref:airflow:usage.adoc#_via_persistentvolumeclaim[PersistentVolumeClaim for externally-loaded airflow jobs], known as Direct Acyclic Graphs (DAGs), as shown in this https://github.com/stackabletech/airflow-operator/blob/main/examples/simple-airflow-cluster-dags-pvc.yaml[example]. Airflow expects that all components (webserver, scheduler and workers) have access to the DAG folder, so the PersistentVolumeClaim either has to be accessible with the `ReadWriteMany` access mode or node selection should be declared to ensure all components run on the same node.
 
 === Spark-k8s
-Users of the xref:spark-k8s::index.adoc[Spark-k8s operator] have a variety of ways to manage SparkApplication dependencies, one of which is to xref:spark-k8s::usage.adoc#_pyspark_externally_located_dataset_artifact_available_via_pvcvolume_mount[mount resources on a PersistentVolumeClaim]. An example is shown https://github.com/stackabletech/spark-k8s-operator/blob/main/examples/ny-tlc-report.yaml[here].
+Users of the xref:spark-k8s:index.adoc[Spark-k8s operator] have a variety of ways to manage SparkApplication dependencies, one of which is to xref:spark-k8s:usage.adoc#_pyspark_externally_located_dataset_artifact_available_via_pvcvolume_mount[mount resources on a PersistentVolumeClaim]. An example is shown https://github.com/stackabletech/spark-k8s-operator/blob/main/examples/ny-tlc-report.yaml[here].
 
 == Further reading
 
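Note: the Airflow paragraph in this diff hinges on every component being able to read the DAG folder, which is why it asks for a `ReadWriteMany` claim or co-located Pods. A minimal sketch of such a claim, with a hypothetical name and size, could be:

[source,yaml]
----
# Hypothetical PVC for externally-loaded DAGs; ReadWriteMany lets the
# webserver, scheduler and workers mount the same DAG folder from any node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: airflow-dags          # illustrative name
spec:
  accessModes:
    - ReadWriteMany           # otherwise pin all components to one node
  resources:
    requests:
      storage: 1Gi
----
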
modules/concepts/pages/s3.adoc

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 // -------------- Intro ----------------
 
 Many of the tools on the Stackable platform integrate with S3 storage in some way.
-For example Druid can xref:druid::usage.adoc#_s3_for_ingestion[ingest data from S3] and also xref:druid::usage.adoc##_s3_deep_storage[use S3 as a backend for deep storage], Spark can use an xref:spark-k8s::usage.adoc#_s3_bucket_specification[S3 bucket] to store application files and data.
+For example Druid can xref:druid:usage.adoc#_s3_for_ingestion[ingest data from S3] and also xref:druid:usage.adoc##_s3_deep_storage[use S3 as a backend for deep storage], Spark can use an xref:spark-k8s:usage.adoc#_s3_bucket_specification[S3 bucket] to store application files and data.
 
 == S3Connection and S3Bucket
 // introducing the objects
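
Note: the changed sentence points at the S3Connection and S3Bucket objects introduced just below it on the concepts page. As a hedged illustration of what such a connection object looks like, a sketch follows; the API group, field names and endpoint are assumptions rather than content from this diff.

[source,yaml]
----
# Hypothetical S3Connection that products such as Druid or Spark could reference.
apiVersion: s3.stackable.tech/v1alpha1    # assumed group/version
kind: S3Connection
metadata:
  name: my-s3-connection                  # illustrative name
spec:
  host: s3.example.com                    # placeholder endpoint
  port: 9000
  credentials:
    secretClass: s3-credentials           # assumed SecretClass holding access/secret key
----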

modules/concepts/pages/service_discovery.adoc

Lines changed: 8 additions & 8 deletions
@@ -37,7 +37,7 @@ data:
   ZOOKEEPER: simple-zk-server-default-0.simple-zk-server-default.default.svc.cluster.local:2181,simple-zk-server-default-1.simple-zk-server-default.default.svc.cluster.local:2181
 ----
 
-The information needed to connect can be a string like above, for example a JDBC connect string: `jdbc:postgresql://localhost:12345`. But a ConfigMap can also contain multiple configuration files which can then be mounted into a client Pod. This is the case for xref:hdfs::discovery.adoc[HDFS], where the `core-site.xml` and `hdfs-site.xml` files are put into the discovery ConfigMap.
+The information needed to connect can be a string like above, for example a JDBC connect string: `jdbc:postgresql://localhost:12345`. But a ConfigMap can also contain multiple configuration files which can then be mounted into a client Pod. This is the case for xref:hdfs:discovery.adoc[HDFS], where the `core-site.xml` and `hdfs-site.xml` files are put into the discovery ConfigMap.
 
 == Usage of the service discovery ConfigMap
 
@@ -79,7 +79,7 @@ In general, use the name of the product instance to retrieve the ConfigMap and u
 
 === Discovering services outside Stackable
 
-It is not uncommon to already have some core software running in your stack, such as HDFS. If you want to use HBase with the Stackable operator, you can still connect your already existing HDFS instance. You will have to create the discovery ConfigMap for your already existing HDFS yourself. Looking at xref:hdfs::discovery.adoc[the discovery documentation for HDFS], you can see that the discovery ConfigMap for HDFS contains the `core-site.xml` and `hdfs-site.xml` files.
+It is not uncommon to already have some core software running in your stack, such as HDFS. If you want to use HBase with the Stackable operator, you can still connect your already existing HDFS instance. You will have to create the discovery ConfigMap for your already existing HDFS yourself. Looking at xref:hdfs:discovery.adoc[the discovery documentation for HDFS], you can see that the discovery ConfigMap for HDFS contains the `core-site.xml` and `hdfs-site.xml` files.
 
 The ConfigMap should look something like this:
 
@@ -114,9 +114,9 @@ spec:
 
 Consult discovery ConfigMap documentation for specific products:
 
-* xref:druid::discovery.adoc[Apache Druid]
-* xref:hdfs::discovery.adoc[Apache Hadoop HDFS]
-* xref:hive::discovery.adoc[Apache Hive]
-* xref:kafka::discovery.adoc[Apache Kafka]
-* xref:opa::discovery.adoc[OPA]
-* xref:zookeeper::discovery.adoc[Apache ZooKeeper]
+* xref:druid:discovery.adoc[Apache Druid]
+* xref:hdfs:discovery.adoc[Apache Hadoop HDFS]
+* xref:hive:discovery.adoc[Apache Hive]
+* xref:kafka:discovery.adoc[Apache Kafka]
+* xref:opa:discovery.adoc[OPA]
+* xref:zookeeper:discovery.adoc[Apache ZooKeeper]

modules/concepts/pages/tls_server_verification.adoc

Lines changed: 3 additions & 3 deletions
@@ -37,7 +37,7 @@ include::example$tls-server-verification-webpki.yaml[]
 ----
 
 This example will use TLS and verify the server using the provided ca certificate.
-For this to work you need to create a xref:secret-operator::secretclass.adoc[] that - at least - contains the ca certificate.
+For this to work you need to create a xref:secret-operator:secretclass.adoc[] that - at least - contains the ca certificate.
 Note that a SecretClass does not need to have a key but can also work with just a ca cert.
 So if you were provided with a ca cert but do not have access to the key you can still use this method.
 
@@ -48,8 +48,8 @@ include::example$tls-server-verification-custom-ca.yaml[]
 
 === Mutual verification
 This example will use TLS and verify both - the server and the client using certificates.
-For this to work you need to create a xref:secret-operator::secretclass.adoc[] containing the ca certificate and a key to create new client-certificates.
-The xref:secret-operator::index.adoc[] will automatically provide the product with a `ca.crt`, `tls.crt` and `tls.key` so that the product can authenticate the server and it can authenticate itself at the server.
+For this to work you need to create a xref:secret-operator:secretclass.adoc[] containing the ca certificate and a key to create new client-certificates.
+The xref:secret-operator:index.adoc[] will automatically provide the product with a `ca.crt`, `tls.crt` and `tls.key` so that the product can authenticate the server and it can authenticate itself at the server.
 
 [source,yaml]
 ----
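
Note: both hunks refer to included example files that are not part of this diff. As a rough idea of the verification block they describe, server verification against a provided CA might be expressed along these lines; the exact field names are an assumption about the Stackable TLS structure, so the included examples remain the authoritative form.

[source,yaml]
----
# Sketch only: verify the server certificate against a CA provided via a SecretClass.
tls:
  verification:
    server:
      caCert:
        secretClass: my-ca-certs   # SecretClass that contains at least the ca certificate
----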

modules/reference/pages/authenticationclass.adoc

Lines changed: 2 additions & 2 deletions
@@ -15,8 +15,8 @@ include::example$authenticationclass-ldap-full.yaml[]
 <3> The searchBase where the users should be searched
 <4> Additional filter that filters the allowed users
 <5> The name of the corresponding field names in the LDAP objects
-<6> The name of the xref:secret-operator::secretclass.adoc[] providing the bind credentials (username and password)
-<7> The xref:secret-operator::scope.adoc[] of the xref:secret-operator::secretclass.adoc[]
+<6> The name of the xref:secret-operator:secretclass.adoc[] providing the bind credentials (username and password)
+<7> The xref:secret-operator:scope.adoc[] of the xref:secret-operator:secretclass.adoc[]
 <8> xref:concepts:tls_server_verification.adoc[] of the LDAP server
 
 To learn more, you can follow the xref:tutorials:authentication_with_openldap.adoc[] tutorial.
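
Note: the callouts above annotate the full LDAP example included from `example$authenticationclass-ldap-full.yaml`, which this diff does not show. For orientation, a heavily trimmed sketch of such an object might look as follows; treat the apiVersion and field names as assumptions rather than a copy of that example.

[source,yaml]
----
apiVersion: authentication.stackable.tech/v1alpha1   # assumed group/version
kind: AuthenticationClass
metadata:
  name: openldap                                      # illustrative name
spec:
  provider:
    ldap:
      hostname: openldap.default.svc.cluster.local    # placeholder LDAP server
      port: 389
      searchBase: ou=users,dc=example,dc=org          # where users are searched
      bindCredentials:
        secretClass: ldap-bind-credentials            # SecretClass with bind username and password
----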

modules/tutorials/pages/authentication_with_openldap.adoc

Lines changed: 5 additions & 5 deletions
@@ -91,8 +91,8 @@ Notice the SecretClass annotation. Create the SecretClass next:
 [source,yaml]
 include::example$ldap-auth/bind-credentials-secretclass.yaml[]
 
-<1> The name of the xref:secret-operator::secretclass.adoc[] we are creating that is referred to by the Secret
-<2> This determines the namespace in which the referenced `Secret` will be looked for. In this case it searches for a `Secret` in the same namespace as the product runs in. See xref:secret-operator::secretclass.adoc#backend-k8ssearch[the documentation of SecretClass]
+<1> The name of the xref:secret-operator:secretclass.adoc[] we are creating that is referred to by the Secret
+<2> This determines the namespace in which the referenced `Secret` will be looked for. In this case it searches for a `Secret` in the same namespace as the product runs in. See xref:secret-operator:secretclass.adoc#backend-k8ssearch[the documentation of SecretClass]
 
 // [source,bash]
 // include::example$ldap-auth/30-install-openldap.sh[tag=apply-credentials-secretclass]
@@ -288,7 +288,7 @@ Again, like with Superset, connect to Trino now (make sure that the StatefulSets
 
 This is a bonus step, and if you want you can skip straight to the next section: <<trino_try_it>>
 
-This step is not required for _authentication_ by itself. But the demo stack you installed comes with an _authorization_ configuration for Trino as well. Authorization on the platform is done using the xref:opa::index.adoc[].
+This step is not required for _authentication_ by itself. But the demo stack you installed comes with an _authorization_ configuration for Trino as well. Authorization on the platform is done using the xref:opa:index.adoc[].
 
 Fetch the snippet as before:
 
@@ -335,6 +335,6 @@ The LDAP connection details only need to be written down once, in the Authentica
 
 - xref:concepts:authentication.adoc[Authentication concepts page]
 - xref:reference:authenticationclass.adoc[AuthenticationClass reference]
-- xref:superset:getting_started:index.adoc[Getting started with the Stackable Operator for Apache Superset]
-- xref:trino:getting_started:index.adoc[Getting started with the Stackable Operator for Trino]
+- xref:superset:getting_started/index.adoc[Getting started with the Stackable Operator for Apache Superset]
+- xref:trino:getting_started/index.adoc[Getting started with the Stackable Operator for Trino]
 // TODO Operator docs for LDAP
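
Note: callouts <1> and <2> in the first hunk describe the bind-credentials SecretClass included from `example$ldap-auth/bind-credentials-secretclass.yaml`, which the diff does not show. A hedged reconstruction of that kind of SecretClass (name and backend details assumed) would be along these lines; the credentials Secret created just before this step then refers to it via the SecretClass annotation mentioned in the hunk header.

[source,yaml]
----
apiVersion: secrets.stackable.tech/v1alpha1   # assumed group/version
kind: SecretClass
metadata:
  name: ldap-bind-credentials                 # <1> name the credentials Secret refers to
spec:
  backend:
    k8sSearch:
      searchNamespace:
        pod: {}                               # <2> look for the Secret in the product's own namespace
----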
