Commit 3f3e171 (parent: cbe67ae)

pinned xrefs

6 files changed: 41 additions, 41 deletions

modules/concepts/pages/opa.adoc (4 additions, 4 deletions)

@@ -15,7 +15,7 @@ Policy requests are made to a REST API, which allows easy requests from microser
== How it works
// How it is deployed
-OPA is run by the xref:opa::index.adoc[Stackable OPA operator]. OPA is deployed with the OpaCluster resource, from which the operator creates a DaemonSet to run an OPA instance on every node of the cluster. Because of this, every Pod making policy requests will always make the request locally, minimizing latency and network traffic.
+OPA is run by the xref:0.10@opa::index.adoc[Stackable OPA operator]. OPA is deployed with the OpaCluster resource, from which the operator creates a DaemonSet to run an OPA instance on every node of the cluster. Because of this, every Pod making policy requests will always make the request locally, minimizing latency and network traffic.

=== Define policies
@@ -79,10 +79,10 @@ The automatic connection is facilitated by the xref:service_discovery.adoc[servi
== Further reading

-Read more about the xref:opa::index.adoc[]. Read more about product integration with OPA for these products:
+Read more about the xref:0.10@opa::index.adoc[]. Read more about product integration with OPA for these products:

* xref:0.6@trino::usage.adoc#_authorization[Trino]
-* xref:kafka::usage.adoc[Kafka]
-* xref:druid::usage.adoc#_using_open_policy_agent_opa_for_authorization[Druid]
+* xref:0.7@kafka::usage.adoc[Kafka]
+* xref:0.7@druid::usage.adoc#_using_open_policy_agent_opa_for_authorization[Druid]

You can also have a look at the xref:contributor:opa_configuration.adoc[implementation guidelines for OPA authorizers] or learn more about the xref:service_discovery.adoc[service discovery mechanism] used across the platform.
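The paragraph changed in the first hunk describes the deployment model: an OpaCluster resource from which the operator derives a DaemonSet, so policy requests stay node-local. A minimal sketch of such a resource; the `apiVersion`, version number and `spec` layout are assumptions based on the operator's conventions, not taken from this diff:

```yaml
---
# Hypothetical minimal OpaCluster; field names are illustrative assumptions.
apiVersion: opa.stackable.tech/v1alpha1
kind: OpaCluster
metadata:
  name: simple-opa
spec:
  version: 0.37.2          # OPA version to run (illustrative)
  servers:
    roleGroups:
      default: {}          # operator expands this into a DaemonSet, one OPA Pod per node
```

Because the operator materializes this as a DaemonSet rather than a Deployment, every product Pod can query an OPA instance on its own node, which is what keeps policy-request latency and cross-node traffic low.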

modules/concepts/pages/pvc.adoc (2 additions, 2 deletions)

@@ -81,10 +81,10 @@ Managed Kubernetes clusters will normally have a default storage implementation
== Operator usage

=== Airflow
-The xref:airflow::index.adoc[Airflow operator] can use a xref:airflow::usage.adoc#_via_persistentvolumeclaim[PersistentVolumeClaim for externally-loaded airflow jobs], known as Directed Acyclic Graphs (DAGs), as shown in this https://github.com/stackabletech/airflow-operator/blob/main/examples/simple-airflow-cluster-dags-pvc.yaml[example]. Airflow expects that all components (webserver, scheduler and workers) have access to the DAG folder, so the PersistentVolumeClaim either has to be accessible with the `ReadWriteMany` access mode or node selection should be declared to ensure all components run on the same node.
+The xref:0.5@airflow::index.adoc[Airflow operator] can use a xref:0.5@airflow::usage.adoc#_via_persistentvolumeclaim[PersistentVolumeClaim for externally-loaded airflow jobs], known as Directed Acyclic Graphs (DAGs), as shown in this https://github.com/stackabletech/airflow-operator/blob/main/examples/simple-airflow-cluster-dags-pvc.yaml[example]. Airflow expects that all components (webserver, scheduler and workers) have access to the DAG folder, so the PersistentVolumeClaim either has to be accessible with the `ReadWriteMany` access mode or node selection should be declared to ensure all components run on the same node.

=== Spark-k8s
-Users of the xref:spark-k8s::index.adoc[Spark-k8s operator] have a variety of ways to manage SparkApplication dependencies, one of which is to xref:spark-k8s::usage.adoc#_pyspark_externally_located_dataset_artifact_available_via_pvcvolume_mount[mount resources on a PersistentVolumeClaim]. An example is shown https://github.com/stackabletech/spark-k8s-operator/blob/main/examples/ny-tlc-report.yaml[here].
+Users of the xref:0.5@spark-k8s::index.adoc[Spark-k8s operator] have a variety of ways to manage SparkApplication dependencies, one of which is to xref:0.5@spark-k8s::usage.adoc#_pyspark_externally_located_dataset_artifact_available_via_pvcvolume_mount[mount resources on a PersistentVolumeClaim]. An example is shown https://github.com/stackabletech/spark-k8s-operator/blob/main/examples/ny-tlc-report.yaml[here].

== Further reading
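The Airflow paragraph in the hunk above requires a DAG volume that webserver, scheduler and workers can all reach, i.e. a `ReadWriteMany` claim. A sketch of such a claim; the name, size and commented storage class are illustrative assumptions:

```yaml
---
# Hypothetical PVC for a shared Airflow DAG folder.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: airflow-dags        # illustrative name
spec:
  accessModes:
    - ReadWriteMany         # mountable read-write by Pods on multiple nodes
  resources:
    requests:
      storage: 1Gi
  # storageClassName: nfs-client  # assumption: an RWX-capable storage class exists
```

If no `ReadWriteMany`-capable storage is available, the alternative the text mentions applies: pin all Airflow components to one node so a `ReadWriteOnce` volume suffices.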

modules/concepts/pages/s3.adoc (1 addition, 1 deletion)

@@ -3,7 +3,7 @@
// -------------- Intro ----------------

Many of the tools on the Stackable platform integrate with S3 storage in some way.
-For example, Druid can xref:druid::usage.adoc#_s3_for_ingestion[ingest data from S3] and also xref:druid::usage.adoc#_s3_deep_storage[use S3 as a backend for deep storage], and Spark can use an xref:spark-k8s::usage.adoc#_s3_bucket_specification[S3 bucket] to store application files and data.
+For example, Druid can xref:0.7@druid::usage.adoc#_s3_for_ingestion[ingest data from S3] and also xref:0.7@druid::usage.adoc#_s3_deep_storage[use S3 as a backend for deep storage], and Spark can use an xref:0.5@spark-k8s::usage.adoc#_s3_bucket_specification[S3 bucket] to store application files and data.

== S3Connection and S3Bucket
// introducing the objects
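The section following this hunk introduces the shared S3Connection and S3Bucket objects that products reference instead of repeating endpoint details. A hedged sketch of an S3Connection; the API group, version and every field name here are assumptions based on the platform's naming, not defined in this diff:

```yaml
---
# Hypothetical S3Connection; field names are illustrative assumptions.
apiVersion: s3.stackable.tech/v1alpha1
kind: S3Connection
metadata:
  name: my-s3
spec:
  host: s3.example.com            # endpoint the products connect to
  port: 9000
  credentials:
    secretClass: s3-credentials   # assumed SecretClass supplying access/secret keys
```

The design benefit is that Druid, Spark and other products can all point at `my-s3` rather than each carrying its own copy of the endpoint and credentials.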

modules/concepts/pages/service_discovery.adoc (6 additions, 6 deletions)

@@ -114,9 +114,9 @@ spec:
Consult discovery ConfigMap documentation for specific products:

-* xref:druid::discovery.adoc[Apache Druid]
-* xref:hdfs::discovery.adoc[Apache Hadoop HDFS]
-* xref:hive::discovery.adoc[Apache Hive]
-* xref:kafka::discovery.adoc[Apache Kafka]
-* xref:opa::discovery.adoc[OPA]
-* xref:zookeeper::discovery.adoc[Apache ZooKeeper]
+* xref:0.7@druid::discovery.adoc[Apache Druid]
+* xref:0.5@hdfs::discovery.adoc[Apache Hadoop HDFS]
+* xref:0.7@hive::discovery.adoc[Apache Hive]
+* xref:0.7@kafka::discovery.adoc[Apache Kafka]
+* xref:0.10@opa::discovery.adoc[OPA]
+* xref:0.11@zookeeper::discovery.adoc[Apache ZooKeeper]
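The list above points at per-product discovery ConfigMaps. As a sketch of the mechanism: an operator publishes connection details in a ConfigMap named after the product cluster, and consumers mount or read it instead of hard-coding endpoints. The cluster name, key and hostname below are illustrative assumptions:

```yaml
---
# Illustrative discovery ConfigMap for a ZooKeeper cluster named simple-zk.
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-zk   # by convention, the same name as the product resource
data:
  # Consumers read this key to obtain a ready-to-use connection string.
  ZOOKEEPER: simple-zk-server-default-0.simple-zk-server-default.default.svc.cluster.local:2181
```

A dependent product (for example Kafka or Druid) can then reference the ConfigMap by cluster name and stays correct even if the underlying Pods or ports change.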

modules/operators/pages/supported_versions.adoc (14 additions, 14 deletions)

@@ -4,48 +4,48 @@ The latest versions of each of the Stackable Operators support the following pro
== Apache Airflow

-include::airflow::partial$supported-versions.adoc[]
+include::0.5@airflow::partial$supported-versions.adoc[]

== Apache Druid

-include::druid::partial$supported-versions.adoc[]
+include::0.7@druid::partial$supported-versions.adoc[]

== Apache HBase

-include::hbase::partial$supported-versions.adoc[]
+include::0.4@hbase::partial$supported-versions.adoc[]

== Apache HDFS

-include::hdfs::partial$supported-versions.adoc[]
+include::0.5@hdfs::partial$supported-versions.adoc[]

== Apache Hive

-include::hive::partial$supported-versions.adoc[]
+include::0.7@hive::partial$supported-versions.adoc[]

== Apache Kafka

-include::kafka::partial$supported-versions.adoc[]
+include::0.7@kafka::partial$supported-versions.adoc[]

== Apache NiFi

-include::nifi::partial$supported-versions.adoc[]
+include::0.7@nifi::partial$supported-versions.adoc[]

== Open Policy Agent (OPA)

-include::opa::partial$supported-versions.adoc[]
+include::0.10@opa::partial$supported-versions.adoc[]

== Apache Spark on Kubernetes

-include::spark-k8s::partial$supported-versions.adoc[]
+include::0.5@spark-k8s::partial$supported-versions.adoc[]

== Apache Superset

-include::superset::partial$supported-versions.adoc[]
+include::0.6@superset::partial$supported-versions.adoc[]

-== Apache ZooKeeper
+== Trino

-include::zookeeper::partial$supported-versions.adoc[]
+include::0.6@trino::partial$supported-versions.adoc[]

-== Trino
+== Apache ZooKeeper

-include::trino::partial$supported-versions.adoc[]
+include::0.11@zookeeper::partial$supported-versions.adoc[]
(14 additions, 14 deletions)

@@ -1,14 +1,14 @@
-** xref:airflow::index.adoc[Apache Airflow]
-** xref:druid::index.adoc[Apache Druid]
-** xref:hbase::index.adoc[Apache HBase]
-** xref:hdfs::index.adoc[Apache Hadoop HDFS]
-** xref:hive::index.adoc[Apache Hive]
-** xref:kafka::index.adoc[Apache Kafka]
-** xref:nifi::index.adoc[Apache NiFi]
-** xref:spark-k8s::index.adoc[Apache Spark on K8S]
-** xref:superset::index.adoc[Apache Superset]
-** xref:trino::index.adoc[Trino]
-** xref:zookeeper::index.adoc[Apache ZooKeeper]
-** xref:opa::index.adoc[OpenPolicyAgent]
-** xref:commons-operator::index.adoc[Commons]
-** xref:secret-operator::index.adoc[Secret]
+** xref:0.5@airflow::index.adoc[Apache Airflow]
+** xref:0.7@druid::index.adoc[Apache Druid]
+** xref:0.4@hbase::index.adoc[Apache HBase]
+** xref:0.5@hdfs::index.adoc[Apache Hadoop HDFS]
+** xref:0.7@hive::index.adoc[Apache Hive]
+** xref:0.7@kafka::index.adoc[Apache Kafka]
+** xref:0.7@nifi::index.adoc[Apache NiFi]
+** xref:0.5@spark-k8s::index.adoc[Apache Spark on K8S]
+** xref:0.6@superset::index.adoc[Apache Superset]
+** xref:0.6@trino::index.adoc[Trino]
+** xref:0.11@zookeeper::index.adoc[Apache ZooKeeper]
+** xref:0.10@opa::index.adoc[OpenPolicyAgent]
+** xref:0.3@commons-operator::index.adoc[Commons]
+** xref:0.5@secret-operator::index.adoc[Secret]
