Commit c5179e3

Example with HDFS
1 parent 6b2d8d6 commit c5179e3

File tree

1 file changed: +14 -17 lines


modules/concepts/pages/operations/cluster_operations.adoc

Lines changed: 14 additions & 17 deletions
@@ -28,45 +28,42 @@ Sometimes it is necessary to restart services deployed in Kubernetes. A service
 Most operators create StatefulSet objects for the products they manage and Kubernetes offers a rollout mechanism to restart them. You can use `kubectl rollout restart statefulset` to restart a StatefulSet previously created by an operator.
 
-To illustrate how to use the command line to restart one or more Pods, we will assume you used the Stackable Airflow Operator to deploy an Airflow stacklet called `myairflow`.
+To illustrate how to use the command line to restart one or more Pods, we will assume you used the Stackable HDFS Operator to deploy an HDFS stacklet called `dumbo`.
 
-This stacklet will consist, among other things, of three StatefulSets created for each Airflow role: `scheduler`, `webserver` and `worker`. Let's list them:
+This stacklet will consist, among other things, of three StatefulSets created for each HDFS role: `namenode`, `datanode` and `journalnode`. Let's list them:
 
 [source,shell]
 ----
-❯ kubectl get sts
+❯ kubectl get sts -l app.kubernetes.io/instance=dumbo
 NAME                          READY   AGE
-myairflow-scheduler-default   1/1     61m
-myairflow-webserver-default   1/1     61m
-myairflow-worker-default      2/2     61m
-postgresql-airflow            1/1     64m
-redis-airflow-master          1/1     64m
-redis-airflow-replicas        1/1     64m
+dumbo-datanode-default        2/2     4m41s
+dumbo-journalnode-default     1/1     4m41s
+dumbo-namenode-default        2/2     4m41s
 ----
 
-To restart the Airflow scheduler Pod, run:
+To restart the HDFS data node Pods, run:
 
 [source,shell]
 ----
-❯ kubectl rollout restart statefulset myairflow-scheduler-default
-statefulset.apps/myairflow-scheduler-default restarted
+❯ kubectl rollout restart statefulset dumbo-datanode-default
+statefulset.apps/dumbo-datanode-default restarted
 ----
 
-Sometimes you want to restart all Pods of stacklet and not just individual roles. This can be achieved in a similar manner by using labels instead of StatefulSet names. Continuing with the example above, to restart all Airflow Pods you would have to run:
+Sometimes you want to restart all Pods of a stacklet and not just individual roles. This can be achieved in a similar manner by using labels instead of StatefulSet names. Continuing with the example above, to restart all HDFS Pods you would have to run:
 
 [source,shell]
 ----
-❯ kubectl rollout restart statefulset --selector app.kubernetes.io/instance=myairflow
+❯ kubectl rollout restart statefulset --selector app.kubernetes.io/instance=dumbo
 ----
 
-To wait for all Pods to be running again you run:
+To wait for all Pods to be running again:
 
 [source,shell]
 ----
-❯ kubectl rollout status statefulset --selector app.kubernetes.io/instance=myairflow
+❯ kubectl rollout status statefulset --selector app.kubernetes.io/instance=dumbo
 ----
 
-Here we used the label `app.kubernetes.io/instance=myairflow` to select all Pods that belong to a specific Airflow stacklet. This label is created by the operator and `myairflow` is the name of the Airflow stacklet as specified in the custom resource. You can add more labels to make finer grained restarts.
+Here we used the label `app.kubernetes.io/instance=dumbo` to select all Pods that belong to a specific HDFS stacklet. This label is created by the operator and `dumbo` is the name of the HDFS stacklet as specified in the custom resource. You can add more labels to make finer grained restarts.
 
 == Automatic Restarts
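The changed text above notes that additional labels allow finer-grained restarts. As a sketch of what that could look like (not part of this commit; it assumes the operator also sets the standard `app.kubernetes.io/component=<role>` label, which you should verify on your cluster):

```shell
# Hypothetical: restart only the namenode role of the `dumbo` stacklet by
# combining the instance label with a role label. The label key
# app.kubernetes.io/component is an assumption; check the real keys with:
#   kubectl get sts --show-labels
kubectl rollout restart statefulset \
  --selector app.kubernetes.io/instance=dumbo,app.kubernetes.io/component=namenode
```

This narrows the selector from "every StatefulSet of the stacklet" down to a single role while still avoiding hard-coded StatefulSet names.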
