diff --git a/docs/documentation/architecture-and-internals.md b/docs/documentation/architecture-and-internals.md index e04d72cd66..4f84500e9c 100644 --- a/docs/documentation/architecture-and-internals.md +++ b/docs/documentation/architecture-and-internals.md @@ -7,48 +7,62 @@ permalink: /docs/architecture-and-internals # Architecture and Internals -This document gives an overview of the internal structure and components of Java Operator SDK core, in order to make it -easier for developers to understand and contribute to it. However, this is just an extract of the backbone of the core -module, but other parts should be fairly easy to understand. We will maintain this document on developer feedback. +This document gives an overview of the internal structure and components of Java Operator SDK core, +in order to make it easier for developers to understand and contribute to it. This document is +not intended to be a comprehensive reference, rather an introduction to the core concepts and we +hope that the other parts should be fairly easy to understand. We will evolve this document +based on the community's feedback. ## The Big Picture and Core Components -![Alt text for broken image link](../assets/images/architecture.svg) +![JOSDK architecture](../assets/images/architecture.svg) -[Operator](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/Operator.java) +An [Operator](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/Operator.java) is a set of independent [controllers](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/Controller.java) -. Controller however, is an internal class managed by the framework itself. It encapsulates directly or indirectly all -the processing units for a single custom resource. Other components: +. +The `Controller` class, however, is an internal class managed by the framework itself and +usually shouldn't be interacted with directly by end users. It +manages all the processing units involved with reconciling a single type of Kubernetes resource. +Other components include: + +- [Reconciler](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/Reconciler.java) + is the primary entry-point for developers using the framework to implement the reconciliation + logic. +- [EventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/EventSource.java) + represents a source of events that might eventually trigger a reconciliation. - [EventSourceManager](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/EventSourceManager.java) - aggregates all the event sources regarding a controller. Provides starts and stops the event sources. + aggregates all the event sources associated with a controller. Manages the event sources' + lifecycle.
- [ControllerResourceEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/controller/ControllerResourceEventSource.java) - is a central event source that watches the controller related custom resource for changes, propagates events and - caches the state of the custom resources. In the background from V2 it uses Informers. + is a central event source that watches the resources associated with the controller (also + called primary resources) for changes, propagates events and caches the related state. - [EventProcessor](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/EventProcessor.java) - processes the incoming events. Implements execution serialization. Manages the executor service for execution. Also - implements the post-processing of after the reconciler was executed, like re-schedules and retries of events. + processes the incoming events and makes sure they are executed in a sequential manner, that is + making sure that the events are processed in the order they are received for a given resource, + despite requests being processed concurrently overall. The `EventProcessor` also takes care of + re-scheduling or retrying requests as needed. - [ReconcilerDispatcher](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/ReconciliationDispatcher.java) - is responsible for managing logic around reconciler execution, deciding which method should be called of the - reconciler, managing the result - (UpdateControl and DeleteControl), making the instructed Kubernetes API calls. -- [Reconciler](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/Reconciler.java) - is the primary entry-point for the developers of the framework to implement the reconciliation logic. + is responsible for dispatching requests to the appropriate `Reconciler` method and handling + the reconciliation results, making the instructed Kubernetes API calls. ## Typical Workflow A typical workflows looks like following: -1. An EventSource produces and event, that is propagated to the event processor. -2. In the event processor the related `CustomResource` is read from the cache based on the `ResourceID` in the event. -3. If there is no other execution running for the custom resource, an execution is submitted for the executor (thread - pool) . -4. Executor calls ReconcilerDispatcher which decides which method to execute of the reconciler. Let's say in this case it - was `reconcile(...)` -5. After reconciler execution the Dispatcher calls Kubernetes API server, since the `reconcile` method returned - with `UpdateControl.patchStatus(...)` result. -6. Now the dispatcher finishes the execution and calls back `EventProcessor` to finalize the execution. -7. EventProcessor checks if there is no `reschedule` or `retry` required and if there are no subsequent events received - for the custom resource -8. Neither of this happened, therefore the event execution finished. +1. An `EventSource` produces an event, that is propagated to the `EventProcessor`. +2. The resource associated with the event is read from the internal cache. +3. 
If the resource is not already being processed, a reconciliation request is + submitted to the executor service to be executed in a different thread, encapsulated in a + `ControllerExecution` instance. +4. This, in turns, calls the `ReconcilerDispatcher` which dispatches the call to the appropriate + `Reconciler` method, passing along all the required information. +5. Once the `Reconciler` is done, what happens depends on the result returned by the + `Reconciler`. If needed, the `ReconcilerDispatcher` will make the appropriate calls to the + Kubernetes API server. +6. Once the `Reconciler` is done, the `EventProcessor` is called back to finalize the + execution and update the controller's state. +7. The `EventProcessor` checks if the request needs to be rescheduled or retried and if there are no + subsequent events received for the same resource. +8. When none of this happens, the processing of the event is finished. diff --git a/docs/documentation/contributing.md b/docs/documentation/contributing.md index 8e9f95fdcd..dfc16f5e7a 100644 --- a/docs/documentation/contributing.md +++ b/docs/documentation/contributing.md @@ -4,10 +4,12 @@ description: Contributing To Java Operator SDK layout: docs permalink: /docs/contributing --- + # Contributing To Java Operator SDK -Firstly, big thanks for considering contributing to the project. We really hope to make this into a -community project and to do that we need your help! +First of all, we'd like to thank you for considering contributing to the project! We really +hope to create a vibrant community around this project but this won't happen without help from +people like you! ## Code of Conduct @@ -16,21 +18,24 @@ aggressive or insulting behaviour. To this end, the project and everyone participating in it is bound by the [Code of Conduct]({{baseurl}}/coc). By participating, you are expected to uphold this code. Please report -unacceptable behaviour to any of the project admins or adam.sandor@container-solutions.com. +unacceptable behaviour to any of the project admins. ## Bugs -If you find a bug, please [open an issue](https://github.com/java-operator-sdk/java-operator-sdk/issues)! Do try +If you find a bug, +please [open an issue](https://github.com/java-operator-sdk/java-operator-sdk/issues)! Do try to include all the details needed to recreate your problem. This is likely to include: - - The version of the Operator SDK being used - - The exact platform and version of the platform that you're running on - - The steps taken to cause the bug +- The version of the Operator SDK being used +- The exact platform and version of the platform that you're running on +- The steps taken to cause the bug +- Reproducer code is also very welcome to help us diagnose the issue and fix it quickly ## Building Features and Documentation If you're looking for something to work on, take look at the issue tracker, in particular any items -labelled [good first issue](https://github.com/java-operator-sdk/java-operator-sdk/labels/good%20first%20issue). +labelled [good first issue](https://github.com/java-operator-sdk/java-operator-sdk/labels/good%20first%20issue) +. Please leave a comment on the issue to mention that you have started work, in order to avoid multiple people working on the same issue. @@ -42,18 +47,19 @@ discussing it first to avoid wasting effort. We do commit to listening to all pr our best to work something out! Once you've got the go ahead to work on a feature, you can start work. 
Feel free to communicate with -team via updates on the issue tracker or the [Discord channel](https://discord.gg/DacEhAy) and ask for feedback, pointers etc. -Once you're happy with your code, go ahead and open a Pull Request. +team via updates on the issue tracker or the [Discord channel](https://discord.gg/DacEhAy) and ask +for feedback, pointers etc. Once you're happy with your code, go ahead and open a Pull Request. ## Pull Request Process -First, please format your commit messages so that they follow the [conventional commit](https://www.conventionalcommits.org/en/v1.0.0/) format. +First, please format your commit messages so that they follow +the [conventional commit](https://www.conventionalcommits.org/en/v1.0.0/) format. On opening a PR, a GitHub action will execute the test suite against the new code. All code is -required to pass the tests, and new code must be accompanied by new tests. +required to pass the tests, and new code must be accompanied by new tests. -All PRs have to be reviewed and signed off by another developer before being merged to the master -branch. This review will likely ask for some changes to the code - please don't be alarmed or upset +All PRs have to be reviewed and signed off by another developer before being merged. This review +will likely ask for some changes to the code - please don't be alarmed or upset at this; it is expected that all PRs will need tweaks and a normal part of the process. The PRs are checked to be compliant with the Java Google code style. @@ -64,12 +70,15 @@ Be aware that all Operator SDK code is released under the [Apache 2.0 licence](L ### Code style -The SDK modules and samples are formatted to follow the Java Google code style. -On every `compile` the code gets formatted automatically, -however, to make things simpler (i.e. avoid getting a PR rejected simply because of code style issues), you can import one of the following code style schemes based on the IDE you use: +The SDK modules and samples are formatted to follow the Java Google code style. +On every `compile` the code gets formatted automatically, however, to make things simpler (i.e. +avoid getting a PR rejected simply because of code style issues), you can import one of the +following code style schemes based on the IDE you use: -- for *Intellij IDEA* import [contributing/intellij-google-style.xml](contributing/intellij-google-style.xml) -- for *Eclipse* import [contributing/eclipse-google-style.xml](contributing/eclipse-google-style.xml) +- for *Intellij IDEA* + import [contributing/intellij-google-style.xml](contributing/intellij-google-style.xml) +- for *Eclipse* + import [contributing/eclipse-google-style.xml](contributing/eclipse-google-style.xml) ## Thanks diff --git a/docs/documentation/dependent-resources.md b/docs/documentation/dependent-resources.md index 5296ea245a..e1d073d1ee 100644 --- a/docs/documentation/dependent-resources.md +++ b/docs/documentation/dependent-resources.md @@ -1,26 +1,27 @@ --- -title: Dependent Resources Feature -description: Dependent Resources Feature -layout: docs +title: Dependent Resources Feature +description: Dependent Resources Feature +layout: docs permalink: /docs/dependent-resources --- # Dependent Resources -DISCLAIMER: The Dependent Resource support is relatively new feature, while we strove to cover what we anticipate will -be the most common use cases, the implementation is not simple and might still evolve. As a result, some APIs could be a -subject of change in the future. 
However, non-backwards compatible changes are expected to be trivial to migrate to. +DISCLAIMER: The Dependent Resource support is a relatively new feature: while we strove to cover +what we anticipate will be the most common use cases, the implementation is not simple and might +still evolve. As a result, some APIs could be subject to change in the future. However, +non-backwards compatible changes are expected to be trivial to migrate to. ## Motivations and Goals -Most operators need to deal with secondary resources when trying to realize the desired state described by the primary -resource it is in charge of. For example, the Kubernetes-native -`Deployment` controller needs to manage `ReplicaSet` instances as part of a `Deployment`'s reconciliation process. In -this instance, `ReplicatSet` is considered a secondary resource for the `Deployment` controller. - -Controllers that deal with secondary resources typically need to perform the following steps, for each secondary -resource: +Most operators need to deal with secondary resources when trying to realize the desired state +described by the primary resource they are in charge of. For example, the Kubernetes-native +`Deployment` controller needs to manage `ReplicaSet` instances as part of a `Deployment`'s +reconciliation process. In this instance, `ReplicaSet` is considered a secondary resource for +the `Deployment` controller. +Controllers that deal with secondary resources typically need to perform the following steps, for +each secondary resource:
flowchart TD @@ -36,74 +37,86 @@ match -- No --> Update --> Done
-While these steps are not difficult in and of themselves, there are some subtleties that can lead to bugs or sub-optimal -code if not done right. As this process is pretty much similar for each dependent resource, it makes sense for the SDK -to offer some level of support to remove the boilerplate code of these repetitive actions. It should be possible to -handle common cases (such as dealing with Kubernetes-native secondary resources) in a semi-declarative way with only a -minimal amount of code, JOSDK taking care of wiring everything accordingly. +While these steps are not difficult in and of themselves, there are some subtleties that can lead to +bugs or sub-optimal code if not done right. As this process is pretty much similar for each +dependent resource, it makes sense for the SDK to offer some level of support to remove the +boilerplate code associated with encoding these repetitive actions. It should +be possible to handle common cases (such as dealing with Kubernetes-native secondary resources) in a +semi-declarative way with only a minimal amount of code, JOSDK taking care of wiring everything +accordingly. -Moreover, in order for your reconciler to get informed of events on these secondary resources, you need to configure and -create event sources and maintain them. JOSDK already makes it rather easy to deal with these, but dependent resources -makes it even simpler. +Moreover, in order for your reconciler to get informed of events on these secondary resources, you +need to configure and create event sources and maintain them. JOSDK already makes it rather easy +to deal with these, but dependent resources makes it even simpler. -Finally, there are also opportunities for the SDK to transparently add features that are even trickier to get right, -such as immediate caching of updated or created resources (so that your reconciler doesn't need to wait for a cluster -roundtrip to continue its work) and associated event filtering (so that something your reconciler just changed doesn't -re-trigger a reconciliation, for example). +Finally, there are also opportunities for the SDK to transparently add features that are even +trickier to get right, such as immediate caching of updated or created resources (so that your +reconciler doesn't need to wait for a cluster roundtrip to continue its work) and associated +event filtering (so that something your reconciler just changed doesn't re-trigger a +reconciliation, for example). ## Design ### `DependentResource` vs. `AbstractDependentResource` -The -new [`DependentResource`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/dependent/DependentResource.java)interface -lies at the core of the design and strives to encapsulate the logic that is required to reconcile the state of the -associated secondary resource based on the state of the primary one. 
For most cases, this logic will follow the flow -expressed above and JOSDK provides a very convenient implementation of this logic in the form of the -[`AbstractDependentResource`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/AbstractDependentResource.java) +The new +[`DependentResource`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/dependent/DependentResource.java) +interface lies at the core of the design and strives to encapsulate the logic that is required +to reconcile the state of the associated secondary resource based on the state of the primary +one. For most cases, this logic will follow the flow expressed above and JOSDK provides a very +convenient implementation of this logic in the form of the +[`AbstractDependentResource`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/AbstractDependentResource.java) class. If your logic doesn't fit this pattern, though, you can still provide your -own `reconcile` method implementation. While the benefits of using dependent resources are less obvious in that case, -this allows you to separate the logic necessary to deal with each secondary resource in its own class that can then be -tested in isolation via unit tests. You can also use the declarative support with your own implementations as we shall -see later on. - -`AbstractDependentResource` is designed so that classes extending it specify which functionality they support by -implementing trait interfaces. This design has been selected to express the fact that not all secondary resources are -completely under the control of the primary reconciler: some dependent resources are only ever created or updated for -example and we needed a way to let JOSDK know when that is the case. We therefore provide trait interfaces: `Creator`, -`Updater` and `Deleter` to express that the `DependentResource` implementation will provide custom functionality to -create, update and delete its associated secondary resources, respectively. If these traits are not implemented then -parts of the logic described above is never triggered: if your implementation doesn't implement `Creator`, for example, -`AbstractDependentResource` will never try to create the associated secondary resource, even if it doesn't exist. It is -possible to not implement any of these traits and therefore create read-only dependent resources that will trigger your -reconciler whenever a user interacts with them but that are never modified by your reconciler itself. +own `reconcile` method implementation. While the benefits of using dependent resources are less +obvious in that case, this allows you to separate the logic necessary to deal with each +secondary resource in its own class that can then be tested in isolation via unit tests. You can +also use the declarative support with your own implementations as we shall see later on. + +`AbstractDependentResource` is designed so that classes extending it specify which functionality +they support by implementing trait interfaces. 
This design has been selected to express the fact +that not all secondary resources are completely under the control of the primary reconciler: +some dependent resources are only ever created or updated for example and we needed a way to let +JOSDK know when that is the case. We therefore provide trait interfaces: `Creator`, +`Updater` and `Deleter` to express that the `DependentResource` implementation will provide custom +functionality to create, update and delete its associated secondary resources, respectively. If +these traits are not implemented then parts of the logic described above is never triggered: if +your implementation doesn't implement `Creator`, for example, `AbstractDependentResource` will +never try to create the associated secondary resource, even if it doesn't exist. It is +possible to not implement any of these traits and therefore create read-only dependent resources +that will trigger your reconciler whenever a user interacts with them but that are never +modified by your reconciler itself. ### Batteries included: convenient DependentResource implementations! JOSDK also offers several other convenient implementations building on top of `AbstractDependentResource` that you can use as starting points for your own implementations. -One such implementation is the `KubernetesDependentResource` class that makes it really easy to work with -Kubernetes-native resources. In this case, you usually only need to provide an implementation for the `desired` method -to tell JOSDK what the desired state of your secondary resource should be based on the specified primary resource state. -JOSDK takes care of everything else using default implementations that you can override in case you need more precise -control of what's going on. +One such implementation is the `KubernetesDependentResource` class that makes it really easy to work +with Kubernetes-native resources. In this case, you usually only need to provide an +implementation for the `desired` method to tell JOSDK what the desired state of your secondary +resource should be based on the specified primary resource state. + +JOSDK takes care of everything else using default implementations that you can override in case you +need more precise control of what's going on. We also provide implementations that make it very easy to cache (`AbstractCachingDependentResource`) or make it easy to poll for changes in external -resources (`PollingDependentResource`, `PerResourcePollingDependentResource`). All the provided implementations can be -found in the `io/javaoperatorsdk/operator/processing/dependent` package of the `operator-framework-core` module. +resources (`PollingDependentResource`, `PerResourcePollingDependentResource`). All the provided +implementations can be found in the `io/javaoperatorsdk/operator/processing/dependent` package of +the `operator-framework-core` module. -### Sample Kubernetes Dependent Resource +### Sample Kubernetes Dependent Resource -A typical use case, when a Kubernetes resource is fully managed - Created, Read, Updated and Deleted (or set to be garbage -collected). The following example shows how to create a `Deployment` dependent resource: +A typical use case, when a Kubernetes resource is fully managed - Created, Read, Updated and +Deleted (or set to be garbage collected). 
The following example shows how to create a +`Deployment` dependent resource: ```java + @KubernetesDependent(labelSelector = WebPageManagedDependentsReconciler.SELECTOR) -class DeploymentDependentResource extends CRUKubernetesDependentResource { +class DeploymentDependentResource extends CRUDKubernetesDependentResource { - public DeploymentDependentResource() { + public DeploymentDependentResource() { super(Deployment.class); } @@ -124,33 +137,34 @@ class DeploymentDependentResource extends CRUKubernetesDependentResource, ErrorStatusHandler { - // omitted code + // omitted code - @Override - public UpdateControl reconcile(WebPage webPage, Context context) - throws Exception { + @Override + public UpdateControl reconcile(WebPage webPage, Context context) + throws Exception { - final var name = context.getSecondaryResource(ConfigMap.class).orElseThrow() - .getMetadata().getName(); - webPage.setStatus(createStatus(name)); - return UpdateControl.patchStatus(webPage); - } + final var name = context.getSecondaryResource(ConfigMap.class).orElseThrow() + .getMetadata().getName(); + webPage.setStatus(createStatus(name)); + return UpdateControl.patchStatus(webPage); + } } ``` @@ -185,125 +199,143 @@ sample [here](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/s ## Standalone Dependent Resources -In case just some or sub-set of the resources are desired to be managed by dependent resources use standalone mode. -In practice this means that the developer is responsible to initializing and managing and -calling `reconcile` method. However, this gives possibility for developers to fully customize the process for -reconciliation. Use standalone dependent resources for cases when managed does not fit. +It is also possible to wire dependent resources programmatically. In practice this means that the +developer is responsible to initializing and managing and calling `reconcile` method. However, +this gives possibility for developers to fully customize the process for reconciliation. Use +standalone dependent resources for cases when managed does not fit. -Note that [Workflows](https://javaoperatorsdk.io/docs/dependent-resources) support also standalone mode using -standalone resources. +Note that [Workflows](https://javaoperatorsdk.io/docs/workflows) support also standalone +mode using standalone resources. -The sample is similar to one above it just performs additional checks, and conditionally creates an `Ingress`: +The sample is similar to one above it just performs additional checks, and conditionally creates +an `Ingress`: (Note that now this condition creation is also possible with Workflows) ```java @ControllerConfiguration public class WebPageStandaloneDependentsReconciler - implements Reconciler, ErrorStatusHandler, EventSourceInitializer { - - private KubernetesDependentResource configMapDR; - private KubernetesDependentResource deploymentDR; - private KubernetesDependentResource serviceDR; - private KubernetesDependentResource ingressDR; - - public WebPageStandaloneDependentsReconciler(KubernetesClient kubernetesClient) { - // 1. - createDependentResources(kubernetesClient); - } - - @Override - public List prepareEventSources(EventSourceContext context) { - // 2. - return List.of( - configMapDR.initEventSource(context), - deploymentDR.initEventSource(context), - serviceDR.initEventSource(context)); - } - - @Override - public UpdateControl reconcile(WebPage webPage, Context context) - throws Exception { - - // 3. 
- if (!isValidHtml(webPage.getHtml())) { - return UpdateControl.patchStatus(setInvalidHtmlErrorMessage(webPage)); - } - - // 4. - configMapDR.reconcile(webPage, context); - deploymentDR.reconcile(webPage, context); - serviceDR.reconcile(webPage, context); - - // 5. - if (Boolean.TRUE.equals(webPage.getSpec().getExposed())) { - ingressDR.reconcile(webPage, context); - } else { - ingressDR.delete(webPage, context); - } - - // 6. - webPage.setStatus( - createStatus(configMapDR.getResource(webPage).orElseThrow().getMetadata().getName())); - return UpdateControl.patchStatus(webPage); - } + implements Reconciler, ErrorStatusHandler, + EventSourceInitializer { + + private KubernetesDependentResource configMapDR; + private KubernetesDependentResource deploymentDR; + private KubernetesDependentResource serviceDR; + private KubernetesDependentResource ingressDR; + + public WebPageStandaloneDependentsReconciler(KubernetesClient kubernetesClient) { + // 1. + createDependentResources(kubernetesClient); + } + + @Override + public List prepareEventSources(EventSourceContext context) { + // 2. + return List.of( + configMapDR.initEventSource(context), + deploymentDR.initEventSource(context), + serviceDR.initEventSource(context)); + } + + @Override + public UpdateControl reconcile(WebPage webPage, Context context) + throws Exception { + + // 3. + if (!isValidHtml(webPage.getHtml())) { + return UpdateControl.patchStatus(setInvalidHtmlErrorMessage(webPage)); + } + + // 4. + configMapDR.reconcile(webPage, context); + deploymentDR.reconcile(webPage, context); + serviceDR.reconcile(webPage, context); + + // 5. + if (Boolean.TRUE.equals(webPage.getSpec().getExposed())) { + ingressDR.reconcile(webPage, context); + } else { + ingressDR.delete(webPage, context); + } + + // 6. + webPage.setStatus( + createStatus(configMapDR.getResource(webPage).orElseThrow().getMetadata().getName())); + return UpdateControl.patchStatus(webPage); + } private void createDependentResources(KubernetesClient client) { - this.configMapDR = new ConfigMapDependentResource(); - this.deploymentDR = new DeploymentDependentResource(); - this.serviceDR = new ServiceDependentResource(); - this.ingressDR = new IngressDependentResource(); - - Arrays.asList(configMapDR, deploymentDR, serviceDR, ingressDR).forEach(dr -> { - dr.setKubernetesClient(client); - dr.configureWith(new KubernetesDependentResourceConfig() - .setLabelSelector(DEPENDENT_RESOURCE_LABEL_SELECTOR)); - }); + this.configMapDR = new ConfigMapDependentResource(); + this.deploymentDR = new DeploymentDependentResource(); + this.serviceDR = new ServiceDependentResource(); + this.ingressDR = new IngressDependentResource(); + + Arrays.asList(configMapDR, deploymentDR, serviceDR, ingressDR).forEach(dr -> { + dr.setKubernetesClient(client); + dr.configureWith(new KubernetesDependentResourceConfig() + .setLabelSelector(DEPENDENT_RESOURCE_LABEL_SELECTOR)); + }); } - // omitted code + // omitted code } ``` There are multiple things happening here: -1. Dependent resources are explicitly created and can be access later by reference. -2. Event sources are produced by the dependent resources, but needs to be explicitly registered in this case. +1. Dependent resources are explicitly created and can be access later by reference. +2. 
Event sources are produced by the dependent resources, but need to be explicitly registered in + this case by implementing + the [`EventSourceInitializer`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/EventSourceInitializer.java) + interface. 3. The input html is validated, and error message is set in case it is invalid. -4. Reconciliation is called explicitly, but here the workflow customization is fully in the hand of the developer. -5. An `Ingress` is created but only in case `exposed` flag set to true on custom resource. Tries to delete it if not. -6. Status is set in a different way, this is just an alternative way to show, that the actual state can be read using - the reference. This could be written in a same way as in the managed example. +4. Reconciliation of dependent resources is called explicitly, but here the workflow + customization is fully in the hands of the developer. +5. An `Ingress` is created, but only if the `exposed` flag is set to true on the custom resource. The reconciler tries to + delete it otherwise. +6. Status is set in a different way; this is just an alternative way to show that the actual state + can be read using the reference. This could be written in the same way as in the managed example. See the full source code of sample [here](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/sample-operators/webpage/src/main/java/io/javaoperatorsdk/operator/sample/WebPageStandaloneDependentsReconciler.java) . +## Telling JOSDK how to find which secondary resources are associated with a given primary resource -## Default `PrimaryToSecondaryMapper` And How to Override - +**TODO: this needs to be updated** [`KubernetesDependentResource`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/kubernetes/KubernetesDependentResource.java) -automatically maps secondary resource to a primary by owner reference. This behavior can be customized by implementing -[`PrimaryToSecondaryMapper`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/SecondaryToPrimaryMapper.java) by the dependent resource. -. +automatically maps a secondary resource to its primary by owner reference. This behavior can be +customized by implementing +[`PrimaryToSecondaryMapper`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/SecondaryToPrimaryMapper.java) +by the dependent resource. -See sample in one of the integration tests [here](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework/src/test/java/io/javaoperatorsdk/operator/sample/primaryindexer/DependentPrimaryIndexerTestReconciler.java#L25-L25). +See a sample in one of the integration +tests [here](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework/src/test/java/io/javaoperatorsdk/operator/sample/primaryindexer/DependentPrimaryIndexerTestReconciler.java#L25-L25) +. ## Other Dependent Resource Features ### Caching and Event Handling in [KubernetesDependentResource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/kubernetes/KubernetesDependentResource.java) -1. 
When a Kubernetes resource is created or updated the related informer (more precisely the `InformerEventSource`), -eventually will receive an event and will cache the up-to-date resource. However, there might be a small time window -when calling the `getResource()` of the dependent resource or getting the resource from the `EventSource` itself won't -return the fresh resource, since it's not received from the Kubernetes API. `KubernetesDependentResource` implementation -makes sure that it or the related `InformerEventSource` always return the up-to-date resource. - -2. Another feature of `KubernetesDependentResource` is to make sure that is a resource is created or updated during -the reconciliation, the later received related event will not trigger the reconciliation again. This is a small -optimization. For example if during a reconciliation a `ConfigMap` is updated using dependent resources, this won't -trigger a new reconciliation. It' does not need to, since the change in the `ConfigMap` is made by the reconciler, -and the fresh version is used further. To work properly, it is also required that all the changes are received only by -one event source (this is a best practice in general) - so for example if there are two config map dependents, either -there should be a shared event source between them, or a label selector on the event sources just to selecting related -events, see in [related integration test](https://github.com/java-operator-sdk/java-operator-sdk/blob/cd8d7e94f9d3f5d9f28dddbbb10f692546c22c9c/operator-framework/src/test/java/io/javaoperatorsdk/operator/sample/orderedmanageddependent/ConfigMapDependentResource1.java#L15-L15). +1. When a Kubernetes resource is created or updated the related informer (more precisely + the `InformerEventSource`), eventually will receive an event and will cache the up-to-date + resource. Typically, though, there might be a small time window when calling the + `getResource()` of the dependent resource or getting the resource from the `EventSource` + itself won't return the just updated resource, in the case where the associated event hasn't + been received from the Kubernetes API. The `KubernetesDependentResource` implementation, + however, addresses this issue so you don't have to worry about it by making sure that it or + the related `InformerEventSource` always return the up-to-date resource. + +2. Another feature of `KubernetesDependentResource` is to make sure that if a resource is created or + updated during the reconciliation, this particular change, which normally would trigger the + reconciliation again (since the resource has changed on the server), will, in fact, not + trigger the reconciliation again since we already know the state is as expected. This is a small + optimization. For example if during a reconciliation a `ConfigMap` is updated using dependent + resources, this won't trigger a new reconciliation. Such a reconciliation is indeed not + needed since the change originated from our reconciler. 
For this system to work properly, + though, it is required that changes are received only by one event source (this is a best + practice in general) - so for example if there are two config map dependents, either + there should be a shared event source between them, or a label selector on the event sources + to select only the relevant events, see + in [related integration test](https://github.com/java-operator-sdk/java-operator-sdk/blob/cd8d7e94f9d3f5d9f28dddbbb10f692546c22c9c/operator-framework/src/test/java/io/javaoperatorsdk/operator/sample/orderedmanageddependent/ConfigMapDependentResource1.java#L15-L15) + . diff --git a/docs/documentation/faq.md b/docs/documentation/faq.md index 20c4d168c6..e82f33dca3 100644 --- a/docs/documentation/faq.md +++ b/docs/documentation/faq.md @@ -6,15 +6,20 @@ permalink: /docs/faq --- ### Q: How can I access the events which triggered the Reconciliation? -In the v1.* version events were exposed to `Reconciler` (in v1 called `ResourceController`). This -included events (Create, Update) of the custom resource, but also events produced by Event Sources. After -long discussions also with developers of golang version (controller-runtime), we decided to remove access to -these events. We already advocated to not use events in the reconciliation logic, since events can be lost. -Instead reconcile all the resources on every execution of reconciliation. On first this might sound a little -opinionated, but there was a sound agreement between the developers that this is the way to go. + +In the v1.* version events were exposed to `Reconciler` (which was called `ResourceController` +then). This included events (Create, Update) of the custom resource, but also events produced by +Event Sources. After long discussions also with developers of golang version (controller-runtime), +we decided to remove access to these events. We already advocated to not use events in the +reconciliation logic, since events can be lost. Instead reconcile all the resources on every +execution of reconciliation. On first this might sound a little opinionated, but there was a +sound agreement between the developers that this is the way to go. ### Q: Can I re-schedule a reconciliation, possibly with a specific delay? -Yes, this can be done using [`UpdateControl`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/UpdateControl.java) and [`DeleteControl`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/DeleteControl.java) + +Yes, this can be done +using [`UpdateControl`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/UpdateControl.java) +and [`DeleteControl`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/DeleteControl.java) , see: ```java @@ -37,4 +42,5 @@ without an update: } ``` -Although you might consider using `EventSources`, to handle reconciliation triggering in a smarter way. \ No newline at end of file +Although you might consider using `EventSources`, to handle reconciliation triggering in a smarter +way. 
\ No newline at end of file diff --git a/docs/documentation/features.md b/docs/documentation/features.md index 27a369e4c1..727349ddd3 100644 --- a/docs/documentation/features.md +++ b/docs/documentation/features.md @@ -7,86 +7,60 @@ permalink: /docs/features # Features -Java Operator SDK is a high level framework and related tooling in order to facilitate implementation of Kubernetes -operators. The features are by default following the best practices in an opinionated way. However, feature flags and -other configuration options are provided to fine tune or turn off these features. +The Java Operator SDK (JOSDK) is a high level framework and related tooling aimed at +facilitating the implementation of Kubernetes operators. The features are by default following +the best practices in an opinionated way. However, feature flags and other configuration options +are provided to fine tune or turn off these features. ## Reconciliation Execution in a Nutshell -Reconciliation execution is always triggered by an event. Events typically come from the custom resource -(i.e. custom resource is created, updated or deleted) that the controller is watching, but also from different sources -(see event sources). When an event is received reconciliation is executed, unless there is already a reconciliation -happening for a particular custom resource. In other words it is guaranteed by the framework that no concurrent -reconciliation happens for a custom resource. +Reconciliation execution is always triggered by an event. Events typically come from a +primary resource, most of the time a custom resource, triggered by changes made to that resource +on the server (e.g. a resource is created, updated or deleted). Reconciler implementations are +associated with a given resource type and listen for such events from the Kubernetes API server +so that they can appropriately react to them. It is, however, possible for secondary sources to +trigger the reconciliation process. This usually occurs via +the [event source](#handling-related-events-with-event-sources) mechanism. -After a reconciliation ( -i.e. [Reconciler](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/Reconciler.java) -called, a post-processing phase follows, where typically framework checks if: +When an event is received, reconciliation is executed, unless a reconciliation is already +underway for this particular resource. In other words, the framework guarantees that no +concurrent reconciliation happens for any given resource. -- an exception was thrown during execution, if yes schedules a retry. -- there are new events received during the controller execution, if yes schedule the execution again. -- there is an instruction to re-schedule the execution for the future, if yes schedules a timer event with the specified - delay. -- if none above, the reconciliation is finished. +Once the reconciliation is done, the framework checks if: -Briefly, in the hearth of the execution is an eventing system, where events are the triggers of the reconciliation -execution. +- an exception was thrown during execution and if yes schedules a retry. +- new events were received during the controller execution, if yes schedule a new reconciliation. +- the reconciler instructed the SDK to re-schedule a reconciliation at a later date, if yes + schedules a timer event with the specified delay. +- none of the above, the reconciliation is finished.
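To make the execution flow above more concrete, here is a minimal, illustrative sketch of a `Reconciler` as it is invoked by this machinery. It assumes the `WebPage` custom resource used by the samples elsewhere in this documentation and deliberately contains no real reconciliation logic; it only shows the entry point the framework calls.

```java
import io.javaoperatorsdk.operator.api.reconciler.Context;
import io.javaoperatorsdk.operator.api.reconciler.ControllerConfiguration;
import io.javaoperatorsdk.operator.api.reconciler.Reconciler;
import io.javaoperatorsdk.operator.api.reconciler.UpdateControl;

@ControllerConfiguration
public class WebPageReconciler implements Reconciler<WebPage> {

  @Override
  public UpdateControl<WebPage> reconcile(WebPage webPage, Context<WebPage> context) {
    // Reconcile all state derived from the primary resource here. The framework
    // guarantees this method is never executed concurrently for the same resource.

    // Returning from this method ends the execution; the framework then performs
    // the post-processing checks listed above (retry, re-schedule, pending events).
    return UpdateControl.noUpdate();
  }
}
```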
-## Finalizer Support +In summary, the core of the SDK is implemented as an eventing system, where events trigger +reconciliation requests. -[Kubernetes finalizers](https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/) -make sure that a reconciliation happens when a custom resource is instructed to be deleted. Typical case when it's -useful, when an operator is down (pod not running). Without a finalizer the reconciliation - thus the cleanup - -i.e. [`Cleaner.cleanup(...)`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/Cleaner.java#L28) -would not happen if a custom resource is deleted. - -To use finalizers the reconciler have to implement [`Cleaner

`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/Cleaner.java) interface. -In other words, finalizer is added only if the `Reconciler` implements `Cleaner` interface. If not, no -finalizer is added and/or removed. - -Finalizers are automatically added by the framework as the first step, thus after a custom resource is created, but -before the first reconciliation. The finalizer is added via a separate Kubernetes API call. As a result of this update, -the finalizer will be present. The subsequent event will be received, which will trigger the first reconciliation. - -The finalizer that is automatically added will be also removed after the `cleanup` is executed on the reconciler. -However, the removal behaviour can be further customized, and can be instructed to "not remove yet" - this is useful just -in some specific corner cases, when there would be a long waiting period for some dependent resource cleanup. - -The name of the finalizers can be specified, in case it is not, a name will be generated. - -See [`@ControllerConfiguration`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/ControllerConfiguration.java) -annotation for more details. - -### When not to Use Finalizers? - -Typically, automated finalizer handling should be turned off, in case the cleanup of **all** the dependent resources is -handled by Kubernetes itself. This is handled by -Kubernetes [garbage collection](https://kubernetes.io/docs/concepts/architecture/garbage-collection/#owners-dependents). -Setting the owner reference and related fields are not in the scope of the SDK, it's up to the user to have them set -properly when creating the objects. - -When automatic finalizer handling is turned off, the `Reconciler.cleanup(...)` method is not called at all. Not even in -case when a delete event received. So it does not make sense to implement this method and turn off finalizer at the same -time. - -## The [`reconcile`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/Reconciler.java#L16) and [`cleanup`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/Cleaner.java#L28) +## Implementing a [`Reconciler`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/Reconciler.java) and/or [`Cleaner`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/Cleaner.java) -The lifecycle of a custom resource can be clearly separated into two phases from the perspective of an operator. When a -custom resource is created or update, or on the other hand when the custom resource is deleted - or rather marked for -deletion in case a finalizer is used. +The lifecycle of a Kubernetes resource can be clearly separated into two phases from the +perspective of an operator depending on whether a resource is created or updated, or on the +other hand if it is marked for deletion. -This separation-related logic is automatically handled by the framework. 
The framework will always call `reconcile` -method, unless the custom resource is [marked from deletion](https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/#how-finalizers-work) -. From the point when the custom resource is marked from deletion, only the `cleanup` method is called, of course -only if the reconciler implements the `Cleaner` interface. +This separation-related logic is automatically handled by the framework. The framework will always +call the `reconcile` method, unless the custom resource is +[marked for deletion](https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/#how-finalizers-work) +. On the other hand, if the resource is marked for deletion and if the `Reconciler` implements the +`Cleaner` interface, only the `cleanup` method will be called. Implementing the `Cleaner` +interface allows developers to let the SDK know that they are interested in cleaning related +state (e.g. out-of-cluster resources). The SDK will therefore automatically add a finalizer +associated with your `Reconciler` so that the Kubernetes server doesn't delete your resources +before your `Reconciler` gets a chance to clean things up. +See [Finalizer support](#finalizer-support) for more details. -### Using [`UpdateControl`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/UpdateControl.java) and [`DeleteControl`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/DeleteControl.java) +### Using `UpdateControl` and `DeleteControl` These two classes are used to control the outcome or the desired behaviour after the reconciliation. -The `UpdateControl` can instruct the framework to update the status sub-resource of the resource and/or re-schedule a -reconciliation with a desired time delay. +The [`UpdateControl`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/UpdateControl.java) +can instruct the framework to update the status sub-resource of the resource +and/or re-schedule a reconciliation with a desired time delay: ```java @Override @@ -108,63 +82,118 @@ without an update: } ``` -Note, that it's not always desirable to always schedule a retry, rather to use `EventSources` to trigger the -reconciliation. +Note, though, that using `EventSources` should be preferred to rescheduling since the +reconciliation will then be triggered only when needed rather than on a timed basis. -Those are the typical use cases of resource updates, however in some cases there it can happen that the controller wants -to update the custom resource itself (like adding annotations) or not to do any updates, which is also supported. +Those are the typical use cases of resource updates; however, in some cases it can happen that +the controller wants to update the resource itself (for example to add annotations) or not perform +any updates, which is also supported. -It is also possible to update both the status and the custom resource with the `updateCustomResourceAndStatus` method. In -this case first the custom resource is updated then the status in two separate requests to K8S API. +It is also possible to update both the status and the resource with the +`updateResourceAndStatus` method. In this case, the resource is updated first followed by the +status, using two separate requests to the Kubernetes API.
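As an illustration of the latter point, a sketch of a `reconcile` method using `updateResourceAndStatus` could look as follows. The `WebPage` resource and the `createStatus` helper are borrowed from the samples referenced throughout this documentation and are only illustrative.

```java
@Override
public UpdateControl<WebPage> reconcile(WebPage webPage, Context<WebPage> context) {
  // ... reconcile secondary resources, then record the observed state ...
  webPage.setStatus(createStatus("web-page-config-map"));

  // Updates the resource itself first, then its status sub-resource,
  // using two separate requests to the Kubernetes API server.
  return UpdateControl.updateResourceAndStatus(webPage);
}
```

A re-schedule can be chained on the returned control in the same way as shown in the earlier snippet.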
-Always update the custom resource with `UpdateControl`, not with the actual kubernetes client if possible. +You should always state your intent using `UpdateControl` and let the SDK deal with the actual +updates instead of performing these updates yourself using the actual Kubernetes client so that +the SDK can update its internal state accordingly. -On resource updates there is always an optimistic version control in place, to make sure that another update is not -overwritten (by setting `resourceVersion` ) . +Resource updates are protected using optimistic version control, to make sure that other updates +that might have occurred in the meantime on the server are not overwritten. This is ensured by +setting the `resourceVersion` field on the processed resources. -The `DeleteControl` typically instructs the framework to remove the finalizer after the dependent resource are cleaned -up in `cleanup` implementation. +[`DeleteControl`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/DeleteControl.java) +typically instructs the framework to remove the finalizer after the dependent +resources are cleaned up in the `cleanup` implementation. ```java -public DeleteControl cleanup(MyCustomResource customResource, Context context) { - ... - return DeleteControl.defaultDelete(); -} +public DeleteControl cleanup(MyCustomResource customResource, Context context) { + ... + return DeleteControl.defaultDelete(); +} ``` -However, there is a possibility to not remove the finalizer, this allows to clean up the resources in a more async way, -mostly for the cases when there is a long waiting period after a delete operation is initiated. Note that in this case -you might want to either schedule a timed event to make sure -`cleanup` is executed again or use event sources to get notified about the state changes of a deleted resource. +However, it is possible to instruct the SDK to not remove the finalizer; this allows cleaning up +the resources in a more asynchronous way, mostly for cases when there is a long waiting period +after a delete operation is initiated. Note that in this case you might want to either schedule +a timed event to make sure `cleanup` is executed again or use event sources to get notified +about the state changes of the deleted resource. + +### Finalizer Support + +[Kubernetes finalizers](https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/) +make sure that your `Reconciler` gets a chance to act before a resource is actually deleted +after it's been marked for deletion. Without finalizers, the resource would be deleted directly +by the Kubernetes server. + +Depending on your use case, you might or might not need to use finalizers. In particular, if +your operator doesn't need to clean any state that would not be automatically managed by the +Kubernetes cluster (e.g. external resources), you might not need to use finalizers. You should +use the +Kubernetes [garbage collection](https://kubernetes.io/docs/concepts/architecture/garbage-collection/#owners-dependents) +mechanism as much as possible by setting owner references for your secondary resources so that +the cluster can automatically delete them for you whenever the associated primary resource is +deleted. Note that setting owner references is the responsibility of the `Reconciler` +implementation, though [dependent resources](https://javaoperatorsdk.io/docs/dependent-resources) +make that process easier.
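For illustration, here is a sketch (using the fabric8 client builders) of how a reconciler could set such an owner reference on a secondary `ConfigMap` so that the cluster garbage-collects it together with its owning `WebPage`. The resource names and data are only examples.

```java
ConfigMap desired = new ConfigMapBuilder()
    .withNewMetadata()
      .withName(webPage.getMetadata().getName() + "-html")
      .withNamespace(webPage.getMetadata().getNamespace())
      // The owner reference ties the ConfigMap's lifecycle to the WebPage, so
      // deleting the WebPage lets Kubernetes garbage-collect the ConfigMap.
      .addNewOwnerReference()
        .withApiVersion(webPage.getApiVersion())
        .withKind(webPage.getKind())
        .withName(webPage.getMetadata().getName())
        .withUid(webPage.getMetadata().getUid())
        .withController(true)
      .endOwnerReference()
    .endMetadata()
    .withData(Map.of("index.html", "<html>...</html>"))
    .build();
```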
+ +If you do need to clean such state, you need to use finalizers so that their +presence will prevent the Kubernetes server from deleting the resource before your operator is +ready to allow it. This allows clean-up to still occur even if your operator was down when +the resource was "deleted" by a user. + +JOSDK makes cleaning resources in this fashion easier by taking care of managing finalizers +automatically for you when needed. The only thing you need to do is let the SDK know that your +operator is interested in cleaning state associated with your primary resources by having it +implement +the [`Cleaner

`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/Cleaner.java) +interface. If your `Reconciler` doesn't implement the `Cleaner` interface, the SDK will consider +that you don't need to perform any clean-up when resources are deleted and will therefore not +activate finalizer support. In other words, finalizer support is added only if your `Reconciler` +implements the `Cleaner` interface. + +Finalizers are automatically added by the framework as the first step, thus after a resource +is created, but before the first reconciliation. The finalizer is added via a separate +Kubernetes API call. As a result of this update, the finalizer will then be present on the +resource. The reconciliation can then proceed as normal. + +The finalizer that is automatically added will be also removed after the `cleanup` is executed on +the reconciler. This behavior is customizable as explained +[above](#using-updatecontrol-and-deletecontrol) when we addressed the use of +`DeleteControl`. + +You can specify the name of the finalizer to use for your `Reconciler` using the +[`@ControllerConfiguration`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/ControllerConfiguration.java) +annotation. If you do not specify a finalizer name, one will be automatically generated for you. ## Automatic Observed Generation Handling -Having `.observedGeneration` value on the status of the resource is a best practice to indicate the last generation of -the resource reconciled successfully by the controller. This helps the users / administrators to check if the custom -resource was reconciled. +Having an `.observedGeneration` value on your resources' status is a best practice to +indicate the last generation of the resource which was successfully reconciled by the controller. +This helps users / administrators diagnose potential issues. In order to have this feature working: -- the **status class** (not the resource) must implement the +- the **status class** (not the resource itself) must implement the [`ObservedGenerationAware`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/ObservedGenerationAware.java) interface. See also the [`ObservedGenerationAwareStatus`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/ObservedGenerationAwareStatus.java) - which can also be extended. -- The other condition is that the `CustomResource.getStatus()` should not return `null`. So the status should be instantiated - when the object is returned using the `UpdateControl`. - -If these conditions are fulfilled and generation awareness not turned off, the observed generation is automatically set -by the framework after the `reconcile` method is called. Note that the observed generation is updated also -when `UpdateControl.noUpdate()` is returned from the reconciler. See this feature working in + convenience implementation that you can extend in your own status class implementations. +- The other condition is that the `CustomResource.getStatus()` method should not return `null`. + So the status should be instantiated when the object is returned using the `UpdateControl`. 
+ +If these conditions are fulfilled and generation awareness is activated, the observed generation +is automatically set by the framework after the `reconcile` method is called. Note that the +observed generation is also updated even when `UpdateControl.noUpdate()` is returned from the +reconciler. See this feature at work in the [WebPage example](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/sample-operators/webpage/src/main/java/io/javaoperatorsdk/operator/sample/WebPageStatus.java#L5) . ```java public class WebPageStatus extends ObservedGenerationAwareStatus { - private String htmlConfigMap; + private String htmlConfigMap; ... } @@ -187,23 +216,24 @@ public class WebPage extends CustomResource ## Generation Awareness and Event Filtering -On an operator startup, the best practice is to reconcile all the resources. Since while operator was down, changes -might have made both to custom resource and dependent resources. +A best practice when an operator starts up is to reconcile all the associated resources because +changes might have occurred to the resources while the operator was not running. -When the first reconciliation is done successfully, the next reconciliation is triggered if either the dependent -resources are changed or the custom resource `.spec` is changed. If other fields like `.metadata` is changed on the -custom resource, the reconciliation could be skipped. This is supported out of the box, thus the reconciliation by -default is not triggered if the change to the main custom resource does not increase the `.metadata.generation` field. -Note that the increase of `.metada.generation` is handled automatically by Kubernetes. +When this first reconciliation is done successfully, the next reconciliation is triggered if either +dependent resources are changed or the primary resource `.spec` field is changed. If other fields +like `.metadata` are changed on the primary resource, the reconciliation could be skipped. This +behavior is supported out of the box and reconciliation is by default not triggered if +changes to the primary resource do not increase the `.metadata.generation` field. +Note that changes to `.metada.generation` are automatically handled by Kubernetes. -To turn off this feature set `generationAwareEventProcessing` to `false` for the `Reconciler`. +To turn off this feature, set `generationAwareEventProcessing` to `false` for the `Reconciler`. ## Support for Well Known (non-custom) Kubernetes Resources A Controller can be registered for a non-custom resource, so well known Kubernetes resources like ( -Ingress,Deployment,...). Note that automatic observed generation handling is not supported for these resources. Although -in case adding a secondary controller for well known k8s resource, probably the observed generation should be handled by -the primary controller. +`Ingress`, `Deployment`,...). Note that automatic observed generation handling is not supported +for these resources, though, in this case, the handling of the observed generation is probably +handled by the primary controller. See the [integration test](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework/src/test/java/io/javaoperatorsdk/operator/sample/deployment/DeploymentReconciler.java) @@ -222,12 +252,14 @@ public class DeploymentReconciler ## Max Interval Between Reconciliations -In case informers are all in place and reconciler is implemented correctly, there is no need for additional triggers. 
-However, it's a [common practice](https://github.com/java-operator-sdk/java-operator-sdk/issues/848#issuecomment-1016419966) -to have a failsafe periodic trigger in place, -just to make sure the resources are reconciled after certain time. This functionality is in place by default, there -is quite high interval (currently 10 hours) while the reconciliation is triggered. See how to override this using -the standard annotation: +When informers / event sources are properly set up, and the `Reconciler` implementation is +correct, no additional reconciliation triggers should be needed. However, it's +a [common practice](https://github.com/java-operator-sdk/java-operator-sdk/issues/848#issuecomment-1016419966) +to have a failsafe periodic trigger in place, just to make sure resources are nevertheless +reconciled after a certain amount of time. This functionality is in place by default, with a +rather high time interval (currently 10 hours) after which a reconciliation will be +automatically triggered even in the absence of other events. See how to override this using the +standard annotation: ```java @ControllerConfiguration(finalizerName = NO_FINALIZER, @@ -236,25 +268,23 @@ the standard annotation: timeUnit = TimeUnit.MILLISECONDS)) ``` -The event is not propagated in a fixed rate, rather it's scheduled after each reconciliation. So the -next reconciliation will after at most within the specified interval after last reconciliation. +The event is not propagated at a fixed rate, rather it's scheduled after each reconciliation. So the +next reconciliation will occur at most within the specified interval after the last reconciliation. -This feature can be turned off by setting `reconciliationMaxInterval` to [`Constants.NO_RECONCILIATION_MAX_INTERVAL`](https://github.com/java-operator-sdk/java-operator-sdk/blob/442e7d8718e992a36880e42bd0a5c01affaec9df/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/Constants.java#L8-L8) +This feature can be turned off by setting `reconciliationMaxInterval` +to [`Constants.NO_RECONCILIATION_MAX_INTERVAL`](https://github.com/java-operator-sdk/java-operator-sdk/blob/442e7d8718e992a36880e42bd0a5c01affaec9df/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/Constants.java#L8-L8) or any non-positive number. -The automatic retries are not affected by this feature, in case of an error no schedule is set by this feature. +The automatic retries are not affected by this feature so a reconciliation will be re-triggered +on error, according to the specified retry policy, regardless of this maximum interval setting. ## Automatic Retries on Error -When an exception is thrown from a controller, the framework will schedule an automatic retry of the reconciliation. The -retry is behavior is configurable, an implementation is provided that should cover most of the use-cases, see +JOSDK will schedule an automatic retry of the reconciliation whenever an exception is thrown by +your `Reconciler`. The retry is behavior is configurable but a default implementation is provided +covering most of the typical use-cases, see [GenericRetry](https://github.com/java-operator-sdk/java-operator-sdk/blob/master/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/retry/GenericRetry.java) -. But it is possible to provide a custom implementation. - -It is possible to set a limit on the number of retries. 
In -the [Context](https://github.com/java-operator-sdk/java-operator-sdk/blob/master/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/Context.java) -object information is provided about the retry, particularly interesting is the `isLastAttempt`, since a different -behavior could be implemented based on this flag. Like setting an error message in the status in case of a last attempt; +. ```java GenericRetry.defaultLimitedExponentialRetry() @@ -263,38 +293,57 @@ behavior could be implemented based on this flag. Like setting an error message .setMaxAttempts(5); ``` -Event if the retry reached a limit, in case of a new event is received the reconciliation would happen again, it's just -won't be a result of a retry, but the new event. However, in case of an error happens also in this case, it won't -schedule a retry is at this point the retry limit is already reached. +You can also configure the default retry behavior using the `@GradualRetry` annotation. -A successful execution resets the retry. +It is possible to provide a custom implementation using the `retry` field of the +`@ControllerConfiguration` annotation and specifying the class of your custom implementation. +Note that this class will need to provide an accessible no-arg constructor for automated +instantiation. Additionally, your implementation can be automatically configured from an +annotation that you can provide by having your `Retry` implementation implement the +`AnnotationConfigurable` interface, parameterized with your annotation type. See the +`GenericRetry` implementation for more details. + +Information about the current retry state is accessible from +the [Context](https://github.com/java-operator-sdk/java-operator-sdk/blob/master/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/Context.java) +object. Of note, particularly interesting is the `isLastAttempt` method, which could allow your +`Reconciler` to implement a different behavior based on this status, by setting an error message +in your resource' status, for example, when attempting a last retry. + +Note, though, that reaching the retry limit won't prevent new events to be processed. New +reconciliations will happen for new events as usual. However, if an error also ocurrs that +would normally trigger a retry, the SDK won't schedule one at this point since the retry limit +is already reached. + +A successful execution resets the retry state. ### Setting Error Status After Last Retry Attempt -In order to facilitate error reporting Reconciler can implement the following -[interface](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/ErrorStatusHandler.java): +In order to facilitate error reporting, `Reconciler` can implement the +[ErrorStatusHandler](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/ErrorStatusHandler.java) +interface: ```java public interface ErrorStatusHandler

{ - ErrorStatusUpdateControl

updateErrorStatus(P resource, Context

context, Exception e); + ErrorStatusUpdateControl

updateErrorStatus(P resource, Context

context, Exception e); } ``` -The `updateErrorStatus` method is called in case an exception is thrown from the reconciler. It is also called when -there is no retry configured, just after the reconciler execution. In the first call the `RetryInfo.getAttemptCount()` -is always zero, since it is not a result of a retry -(regardless if retry is configured or not). +The `updateErrorStatus` method is called in case an exception is thrown from the `Reconciler`. It is +also called even if no retry policy is configured, just after the reconciler execution. +`RetryInfo.getAttemptCount()` is zero after the first reconciliation attempt, since it is not a +result of a retry (regardless of whether a retry policy is configured or not). -The result of the method call is used to make a status update on the custom resource. This is always a sub-resource -update request, so no update on custom resource itself (like spec of metadata) happens. Note that this update request -will also produce an event, and will result in a reconciliation if the controller is not generation aware. +`ErrorStatusUpdateControl` is used to tell the SDK what to do and how to perform the status +update on the primary resource, always performed as a status sub-resource request. Note that +this update request will also produce an event, and will result in a reconciliation if the +controller is not generation aware. -The scope of this feature is only the `reconcile` method of the reconciler, since there should not be updates on custom -resource after it is marked for deletion. +This feature is only available for the `reconcile` method of the `Reconciler` interface, since +there should not be updates to resource that have been marked for deletion. -Retry can be skipped for the cases of unrecoverable errors: +Retry can be skipped in cases of unrecoverable errors: ```java ErrorStatusUpdateControl.patchStatus(customResource).withNoRetry(); @@ -302,27 +351,27 @@ Retry can be skipped for the cases of unrecoverable errors: ### Correctness and Automatic Retries -There is a possibility to turn off the automatic retries. This is not desirable, unless there is a very specific reason. -Errors naturally happen, typically network errors can cause some temporal issues, another case is when a custom resource -is updated during the reconciliation (using `kubectl` for example), in this case if an update of the custom resource -from the controller (using `UpdateControl`) would fail on a conflict. The automatic retries covers these cases and will -result in a reconciliation, even if normally an event would not be processed as a result of a custom resource update -from previous example (like if there is no generation update as a result of the change and generation filtering is -turned on) +While it is possible to deactivate automatic retries, this is not desirable, unless for very +specific reasons. Errors naturally occur, whether it be transient network errors or conflicts +when a given resource is handled by a `Reconciler` but is modified at the same time by a user in +a different process. Automatic retries handle these cases nicely and will usually result in a +successful reconciliation. ## Retry and Rescheduling and Event Handling Common Behavior -Retry, reschedule and standard event processing forms a relatively complex system, where these functionalities are not -independent of each other. 
In the following we describe the behavior in this section, so it is easier to understand the -intersections: +Retry, reschedule and standard event processing form a relatively complex system, each of these +functionalities interacting with the others. In the following, we describe the interplay of +these features: -1. A successful execution resets a retry and the rescheduled executions which were present before the reconciliation. - However, a new rescheduling can be instructed from the reconciliation outcome (`UpdateControl` or `DeleteControl`). -2. In case an exception happened, and a retry is initiated, but an event received meanwhile, then reconciliation will be - executed instantly, and this execution won't count as a retry attempt. -3. If the retry limit is reached (so no more automatic retry would happen), but a new event received, the reconciliation - will still happen, but won't reset the retry, will be still marked as the last attempt in the retry info. The point - (1) still holds, but in case of an error, no retry will happen. +1. A successful execution resets a retry and the rescheduled executions which were present before + the reconciliation. However, a new rescheduling can be instructed from the reconciliation + outcome (`UpdateControl` or `DeleteControl`). +2. In case an exception happened, a retry is initiated. However, if an event is received + meanwhile, it will be reconciled instantly, and this execution won't count as a retry attempt. +3. If the retry limit is reached (so no more automatic retry would happen), but a new event + received, the reconciliation will still happen, but won't reset the retry, and will still be + marked as the last attempt in the retry info. The point (1) still holds, but in case of an + error, no retry will happen. ## Rate Limiting @@ -344,15 +393,20 @@ A default rate limiter implementation is provided, see: . Users can override it by implementing their own [`RateLimiter`](https://github.com/java-operator-sdk/java-operator-sdk/blob/ce4d996ee073ebef5715737995fc3d33f4751275/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/rate/RateLimiter.java) -. - -To configure the default rate limiter use the `@LimitingRateOverPeriod` annotation on your +and specifying this custom implementation using the `rateLimiter` field of the +`@ControllerConfiguration` annotation. Similarly to the `Retry` implementations, +`RateLimiter` implementations must provide an accessible, no-arg constructor for instantiation +purposes and can further be automatically configured from your own, provided annotation provided +your `RateLimiter` implementation also implements the `AnnotationConfigurable` interface, +parameterized by your custom annotation type. + +To configure the default rate limiter use the `@RateLimited` annotation on your `Reconciler` class. The following configuration limits each resource to reconcile at most twice within a 3 second interval: ```java -@LimitingRateOverPeriod(maxReconciliations = 2, within = 3, unit = TimeUnit.SECONDS) +@RateLimited(maxReconciliations = 2, within = 3, unit = TimeUnit.SECONDS) @ControllerConfiguration public class MyReconciler implements Reconciler { @@ -364,51 +418,65 @@ resource will happen before two seconds have elapsed. Note that, since rate is l per-resource basis, other resources can still be reconciled at the same time, as long, of course, that they stay within their own rate limits. 
- ## Handling Related Events with Event Sources -See also this [blog post](https://csviri.medium.com/java-operator-sdk-introduction-to-event-sources-a1aab5af4b7b). - -Event sources are a relatively simple yet powerful and extensible concept to trigger controller executions. Usually -based on changes of dependent resources. To solve the mentioned problems above, de-facto we watch resources we manage -for changes, and reconcile the state if a resource is changed. Note that resources we are watching can be Kubernetes and -also non-Kubernetes objects. Typically, in case of non-Kubernetes objects or services we can extend our operator to -handle webhooks or websockets or to react to any event coming from a service we interact with. What happens is when we -create a dependent resource we also register an Event Source that will propagate events regarding the changes of that -resource. This way we avoid the need of polling, and can implement controllers very efficiently. +See also +this [blog post](https://csviri.medium.com/java-operator-sdk-introduction-to-event-sources-a1aab5af4b7b) +. -![Alt text for broken image link](../assets/images/event-sources.png) +Event sources are a relatively simple yet powerful and extensible concept to trigger controller +executions, usually based on changes to dependent resources. You typically need an event source +when you want your `Reconciler` to be triggered when something occurs to secondary resources +that might affect the state of your primary resource. This is needed because a given +`Reconciler` will only listen by default to events affecting the primary resource type it is +configured for. Event sources act as listen to events affecting these secondary resources so +that a reconciliation of the associated primary resource can be triggered when needed. Note that +these secondary resources need not be Kubernetes resources. Typically, when dealing with +non-Kubernetes objects or services, we can extend our operator to handle webhooks or websockets +or to react to any event coming from a service we interact with. This allows for very efficient +controller implementations because reconciliations are then only triggered when something occurs +on resources affecting our primary resources thus doing away with the need to periodically +reschedule reconciliations. + +![Event Sources architecture diagram](../assets/images/event-sources.png) There are few interesting points here: -The CustomResourceEvenSource event source is a special one, which sends events regarding changes of our custom resource, -this is an event source which is always registered for every controller by default. An event is always related to a -custom resource. Concurrency is still handled for you, thus we still guarantee that there is no concurrent execution of -the controller for the same custom resource ( -there is parallel execution if an event is related to another custom resource instance). + +The `CustomResourceEvenSource` event source is a special one, responsible for handling events +pertaining to changes affecting our primary resources. This `EventSource` is always registered +for every controller automatically by the SDK. It is important to note that events always relate +to a given primary resource. 
Concurrency is still handled for you, even in the presence of +`EventSource` implementations, and the SDK still guarantees that there is no concurrent execution of +the controller for any given primary resource (though, of course, concurrent/parallel executions +of events pertaining to other primary resources still occur as expected). ### Caching and Event Sources -Typically, when we work with Kubernetes (but possibly with others), we manage the objects in a declarative way. This is -true also for Event Sources. For example if we watch for changes of a Kubernetes Deployment object in the -InformerEventSource, we always receive the whole object from the Kubernetes API. Later when we try to reconcile in the -controller (not using events) we would like to check the state of this deployment (but also other dependent resources), -we could read the object again from Kubernetes API. However since we watch for the changes, we know that we always -receive the most up-to-date version in the Event Source. So naturally, what we can do is cache the latest received -objects (in the Event Source) and read it from there if needed. This is the preferred way, since it reduces the number -of requests to Kubernetes API server, and leads to faster reconciliation cycles. - -Note that when an operator starts and the first reconciliation is executed the caches are already populated for example -for `InformerEventSource`. Currently, this is not true however for `PerResourceEventSource`, where the cache might or -might not be populated. To handle this situation elegantly methods are provided which checks the object in cache, if -not found tries to get it from the supplier. See related [method](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/polling/PerResourcePollingEventSource.java#L146) +Kubernetes resources are handled in a declarative manner. The same also holds true for event +sources. For example, if we define an event source to watch for changes of a Kubernetes Deployment +object using an `InformerEventSource`, we always receive the whole associated object from the +Kubernetes API. This object might be needed at any point during our reconciliation process and +it's best to retrieve it from the event source directly when possible instead of fetching it +from the Kubernetes API since the event source guarantees that it will provide the latest +version. Not only that, but many event source implementations also cache resources they handle +so that it's possible to retrieve the latest version of resources without needing to make any +calls to the Kubernetes API, thus allowing for very efficient controller implementations. + +Note after an operator starts, caches are already populated by the time the first reconciliation +is processed for the `InformerEventSource` implementation. However, this does not necessarily +hold true for all event source implementations (`PerResourceEventSource` for example). The SDK +provides methods to handle this situation elegantly, allowing you to check if an object is +cached, retrieving it from a provided supplier if not. See +related [method](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/polling/PerResourcePollingEventSource.java#L146) . 
### Registering Event Sources -To register event sources `Reconciler` has to -implement [`EventSourceInitializer`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/EventSourceInitializer.java) -interface and init a list of event sources to register. The easiest way to see it is -on [tomcat example](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/sample-operators/tomcat-operator/src/main/java/io/javaoperatorsdk/operator/sample/TomcatReconciler.java) +To register event sources, your `Reconciler` has to implement the +[`EventSourceInitializer`](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/EventSourceInitializer.java) +interface and initiliaze a list of event sources to register. One way to see this in action is +to look at the +[tomcat example](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/sample-operators/tomcat-operator/src/main/java/io/javaoperatorsdk/operator/sample/TomcatReconciler.java) (irrelevant details omitted): ```java @@ -416,51 +484,65 @@ on [tomcat example](https://github.com/java-operator-sdk/java-operator-sdk/blob/ @ControllerConfiguration public class TomcatReconciler implements Reconciler, EventSourceInitializer { - @Override - public List prepareEventSources(EventSourceContext context) { - var configMapEventSource = - new InformerEventSource<>(InformerConfiguration.from(Deployment.class, context) - .withLabelSelector(SELECTOR) - .withSecondaryToPrimaryMapper(Mappers.fromAnnotation(ANNOTATION_NAME,ANNOTATION_NAMESPACE) - .build(), context)); - return EventSourceInitializer.nameEventSources(configMapEventSource); - } + @Override + public List prepareEventSources(EventSourceContext context) { + var configMapEventSource = + new InformerEventSource<>(InformerConfiguration.from(Deployment.class, context) + .withLabelSelector(SELECTOR) + .withSecondaryToPrimaryMapper( + Mappers.fromAnnotation(ANNOTATION_NAME, ANNOTATION_NAMESPACE) + .build(), context)); + return EventSourceInitializer.nameEventSources(configMapEventSource); + } ... } ``` -In the example above an `InformerEventSource` is registered (more on this specific eventsource later). Multiple things -are going on here: - -1. In the background `SharedIndexInformer` (class from fabric8 Kubernetes client) is created. This will watch and produce events for - `Deployments` in every namespace, but will filter them based on label. So `Deployments` which are not managed by - `tomcat-operator` (the label is not present on them) will not trigger a reconciliation. -2. In the next step - an [InformerEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/informer/InformerEventSource.java) - is created, which wraps the `SharedIndexInformer`. In addition to that a mapping functions is provided, - with `withSecondaryToPrimaryMapper`, this maps the event of the watched resource (in this case `Deployment`) to the - custom resources to reconcile. Note that usually this is covered by a default mapper , when `Deployment` - is created with an owner reference, the default mapper gets the mapping information from there. Thus, - the `ResourceID` what identifies the custom resource to reconcile is created from the owner reference. 
- For sake of the example a mapper is added that maps secondary to primary resource based on annotations. - -Note that a set of `ResourceID` is returned, this is usually just a set with one element. The possibility to specify -multiple values are there to cover some rare corner cases. If an irrelevant resource is observed, an empty set can -be returned to not reconcile any custom resource. +In the example above an `InformerEventSource` is configured and registered. +`InformerEventSource` is one of the bundled `EventSource` implementations that JOSDK provides to +cover common use cases. ### Managing Relation between Primary and Secondary Resources -As already touched in previous section, a `SecondaryToPrimaryMapper` is required to map events to trigger reconciliation -of the primary resource. By default, this is handled with a mapper that utilizes owner references. If an owner reference -cannot be used (for example resources are in different namespace), other mapper can be provided, typically an annotation -based on is provided. - -Adding a `SecondaryToPrimaryMapper` is typically sufficient when there is a one-to-many relationship between primary and -secondary resources. The secondary resources can be mapped to its primary owner, and this is enough information to also -get the resource using the API from the context in reconciler: `context.getSecondaryResources(...)`. There are however -cases when to map the other way around this mapper is not enough, a `PrimaryToSecondaryMapper` is required. -This is typically when there is a many-to-one or many-to-many relationship between resources, thus the primary resource -is referencing a secondary resources. In these cases the mentioned reverse mapper is required to work properly. +Event sources let your operator know when a secondary resource has changed and that your +operator might need to reconcile this new information. However, in order to do so, the SDK needs +to somehow retrieve the primary resource associated with which ever secondary resource triggered +the event. In the `Tomcat` example above, when an event occurs on a tracked `Deployment`, the +SDK needs to be able to identify which `Tomcat` resource is impacted by that change. + +Seasoned Kubernetes users already know one way to track this parent-child kind of relationship: +using owner references. Indeed, that's how the SDK deals with this situation by default as well, +that is, if your controller properly set owner references on your secondary resources, the SDK +will be able to follow that reference back to your primary resource automatically without you +having to worry about it. + +However, owner references cannot always be used as they are restricted to operating within a +single namespace (i.e. you cannot have an owner reference to a resource in a different namespace) +and are, by essence, limited to Kubernetes resources so you're out of luck if your secondary +resources live outside of a cluster. + +This is why JOSDK provides the `SecondayToPrimaryMapper` interface so that you can provide +alternative ways for the SDK to identify which primary resource needs to be reconciled when +something occurs to your secondary resources. We even provide some of these alternatives in the +[Mappers](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/informer/Mappers.java) +class. + +Note that, while a set of `ResourceID` is returned, this set usually consists only of one +element. 
It is however possible to return multiple values or even no value at all to cover some +rare corner cases. Returning an empty set means that the mapper considered the secondary +resource event as irrelevant and the SDK will thus not trigger a reconciliation of the primary +resource in that situation. + +Adding a `SecondaryToPrimaryMapper` is typically sufficient when there is a one-to-many relationship +between primary and secondary resources. The secondary resources can be mapped to its primary +owner, and this is enough information to also get these secondary resources from the `Context` +object that's passed to your `Reconciler`. + +There are however cases when this isn't sufficient and you need to provide an explicit mapping +between a primary resource and its associated secondary resources using an implementation of the +`PrimaryToSecondaryMapper` interface. This is typically needed when there are many-to-one or +many-to-many relationships between primary and secondary resources, e.g. when the primary resource +is referencing secondary resources. See [PrimaryToSecondaryIT](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework/src/test/java/io/javaoperatorsdk/operator/PrimaryToSecondaryIT.java) integration test for a sample. @@ -468,85 +550,119 @@ integration test for a sample. There are multiple event-sources provided out of the box, the following are some more central ones: -1. [InformerEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/informer/InformerEventSource.java) - - is there to cover events for all Kubernetes resources. Provides also a cache to use during the reconciliation. - Basically no other event source required to watch Kubernetes resources. -2. [PerResourcePollingEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/polling/PerResourcePollingEventSource.java) - - is used to poll external API, which don't support webhooks or other event notifications. It extends the abstract - [CachingEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/CachingEventSource.java) - to support caching. See [MySQL Schema sample](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/sample-operators/mysql-schema/src/main/java/io/javaoperatorsdk/operator/sample/MySQLSchemaReconciler.java) for usage. -3. [PollingEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/polling/PollingEventSource.java) - is similar to `PerResourceCachingEventSource` only it not polls a specific API separately per custom resource, but - periodically and independently of actually observed custom resources. -5. [SimpleInboundEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/inbound/SimpleInboundEventSource.java) - and [CachingInboundEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/inbound/CachingInboundEventSource.java) - is used to handle incoming events from webhooks and messaging systems. 
-6. [ControllerResourceEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/controller/ControllerResourceEventSource.java) - - an eventsource that is automatically registered to listen to the changes of the main - resource the operation manages, it also maintains a cache of those objects that can be accessed from the Reconciler. - -More on the philosophy of the non Kubernetes API related event source see in issue [#729](https://github.com/java-operator-sdk/java-operator-sdk/issues/729). +#### `InformerEventSource` + +[InformerEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/informer/InformerEventSource.java) +is probably the most important `EventSource` implementation to know about. When you create an +`InformerEventSource`, JOSDK will automatically create and register a `SharedIndexInformer`, a +fabric8 Kubernetes client class, that will listen for events associated with the resource type +you configured your `InformerEventSource` with. If you want to listen to Kubernetes resource +events, `InformerEventSource` is probably the only thing you need to use. It's highly +configurable so you can tune it to your needs. Take a look at +[InformerConfiguration](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/config/informer/InformerConfiguration.java) +and associated classes for more details but some interesting features we can mention here is the +ability to filter events so that you can only get notified for events you care about. A +particularly interesting feature of the `InformerEventSource`, as opposed to using your own +informer-based listening mechanism is that caches are particularly well optimized preventing +reconciliations from being triggered when not needed and allowing efficient operators to be written. + +#### `PerResourcePollingEventSource` + +[PerResourcePollingEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/polling/PerResourcePollingEventSource.java) +is used to poll external APIs, which don't support webhooks or other event notifications. It +extends the abstract +[CachingEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/CachingEventSource.java) +to support caching. +See [MySQL Schema sample](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/sample-operators/mysql-schema/src/main/java/io/javaoperatorsdk/operator/sample/MySQLSchemaReconciler.java) +for usage. + +#### `PollingeEventSource` + +[PollingEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/polling/PollingEventSource.java) +is similar to `PerResourceCachingEventSource` except that, contrary to that event source, it +doesn't poll a specific API separately per resource, but periodically and independently of +actually observed primary resources. 
+ +#### Inbound event sources + +[SimpleInboundEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/inbound/SimpleInboundEventSource.java) +and +[CachingInboundEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/inbound/CachingInboundEventSource.java) +are used to handle incoming events from webhooks and messaging systems. + +#### `ControllerResourceEventSource` + +[ControllerResourceEventSource](https://github.com/java-operator-sdk/java-operator-sdk/blob/main/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/controller/ControllerResourceEventSource.java) +is a special `EventSource` implementation that you will never have to deal with directly. It is, +however, at the core of the SDK is automatically added for you: this is the main event source +that listens for changes to your primary resources and triggers your `Reconciler` when needed. +It features smart caching and is really optimized to minimize Kubernetes API accesses and avoid +triggering unduly your `Reconciler`. + +More on the philosophy of the non Kubernetes API related event source see in +issue [#729](https://github.com/java-operator-sdk/java-operator-sdk/issues/729). ## Contextual Info for Logging with MDC -Logging is enhanced with additional contextual information using [MDC](http://www.slf4j.org/manual.html#mdc). This -following attributes are available in most parts of reconciliation logic and during the execution of the controller: +Logging is enhanced with additional contextual information using +[MDC](http://www.slf4j.org/manual.html#mdc). The following attributes are available in most +parts of reconciliation logic and during the execution of the controller: -| MDC Key | Value added from Custom Resource | -| :--- | :--- | -| `resource.apiVersion` | `.apiVersion` | -| `resource.kind` | `.kind` | -| `resource.name` | `.metadata.name` | -| `resource.namespace` | `.metadata.namespace` | -| `resource.resourceVersion` | `.metadata.resourceVersion` | -| `resource.generation` | `.metadata.generation` | -| `resource.uid` | `.metadata.uid` | +| MDC Key | Value added from primary resource | +| :--- |:----------------------------------| +| `resource.apiVersion` | `.apiVersion` | +| `resource.kind` | `.kind` | +| `resource.name` | `.metadata.name` | +| `resource.namespace` | `.metadata.namespace` | +| `resource.resourceVersion` | `.metadata.resourceVersion` | +| `resource.generation` | `.metadata.generation` | +| `resource.uid` | `.metadata.uid` | For more information about MDC see this [link](https://www.baeldung.com/mdc-in-log4j-2-logback). ## Dynamically Changing Target Namespaces -A controller can be configured to watch a specific set of namespaces in addition of the -namespace in which it is currently deployed or the whole cluster. The framework supports -dynamically changing the list of these namespaces while the operator is running. -When a reconciler is registered, an instance of +A controller can be configured to watch a specific set of namespaces in addition of the +namespace in which it is currently deployed or the whole cluster. The framework supports +dynamically changing the list of these namespaces while the operator is running. 
+When a reconciler is registered, an instance of [`RegisteredController`](https://github.com/java-operator-sdk/java-operator-sdk/blob/ec37025a15046d8f409c77616110024bf32c3416/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/RegisteredController.java#L5) -is returned, providing access to the methods allowing users to change watched namespaces as the +is returned, providing access to the methods allowing users to change watched namespaces as the operator is running. -A typical scenario would probably involve extracting the list of target namespaces from a -`ConfigMap` or some other input but this part is out of the scope of the framework since this is -use-case specific. For example, reacting to changes to a `ConfigMap` would probably involve -registering an associated `Informer` and then calling the `changeNamespaces` method on +A typical scenario would probably involve extracting the list of target namespaces from a +`ConfigMap` or some other input but this part is out of the scope of the framework since this is +use-case specific. For example, reacting to changes to a `ConfigMap` would probably involve +registering an associated `Informer` and then calling the `changeNamespaces` method on `RegisteredController`. ```java - public static void main(String[] args) throws IOException { - KubernetesClient client = new DefaultKubernetesClient(); - Operator operator = new Operator(client); - RegisteredController registeredController = operator.register(new WebPageReconciler(client)); +public static void main(String[]args)throws IOException{ + KubernetesClient client=new DefaultKubernetesClient(); + Operator operator=new Operator(client); + RegisteredController registeredController=operator.register(new WebPageReconciler(client)); operator.installShutdownHook(); operator.start(); - - // call registeredController further while operator is running - } + + // call registeredController further while operator is running + } ``` -If watched namespaces change for a controller, it might be desirable to propagate these changes to -`InformerEventSources` associated with the controller. In order to express this, -`InformerEventSource` implementations interested in following such changes need to be +If watched namespaces change for a controller, it might be desirable to propagate these changes to +`InformerEventSources` associated with the controller. In order to express this, +`InformerEventSource` implementations interested in following such changes need to be configured appropriately so that the `followControllerNamespaceChanges` method returns `true`: ```java @ControllerConfiguration public class MyReconciler - implements Reconciler, EventSourceInitializer{ + implements Reconciler, EventSourceInitializer { - @Override - public Map prepareEventSources( + @Override + public Map prepareEventSources( EventSourceContext context) { InformerEventSource configMapES = @@ -556,45 +672,44 @@ public class MyReconciler return EventSourceInitializer.nameEventSources(configMapES); } - + } ``` -As seen in the above code snippet, the informer will have the initial namespaces inherited from controller, but -also will adjust the target namespaces if it changes for the controller. +As seen in the above code snippet, the informer will have the initial namespaces inherited from +controller, but also will adjust the target namespaces if it changes for the controller. 
-See also the [integration test](https://github.com/java-operator-sdk/java-operator-sdk/blob/ec37025a15046d8f409c77616110024bf32c3416/operator-framework/src/test/java/io/javaoperatorsdk/operator/sample/changenamespace/ChangeNamespaceTestReconciler.java) +See also +the [integration test](https://github.com/java-operator-sdk/java-operator-sdk/blob/ec37025a15046d8f409c77616110024bf32c3416/operator-framework/src/test/java/io/javaoperatorsdk/operator/sample/changenamespace/ChangeNamespaceTestReconciler.java) for this feature. ## Monitoring with Micrometer ## Automatic Generation of CRDs -Note that this is feature of [Fabric8 Kubernetes Client](https://github.com/fabric8io/kubernetes-client) not the JOSDK. -But it's worth to mention here. +Note that this feature is provided by the +[Fabric8 Kubernetes Client](https://github.com/fabric8io/kubernetes-client), not JOSDK itself. -To automatically generate CRD manifests from your annotated Custom Resource classes, you only need to add the following -dependencies to your project: +To automatically generate CRD manifests from your annotated Custom Resource classes, you only need +to add the following dependencies to your project: ```xml - io.fabric8 - crd-generator-apt - provided + io.fabric8 + crd-generator-apt + provided ``` -The CRD will be generated in `target/classes/META-INF/fabric8` (or in `target/test-classes/META-INF/fabric8`, if you use -the `test` scope) with the CRD name suffixed by the generated spec version. For example, a CR using -the `java-operator-sdk.io` group with a `mycrs` plural form will result in 2 files: +The CRD will be generated in `target/classes/META-INF/fabric8` (or +in `target/test-classes/META-INF/fabric8`, if you use the `test` scope) with the CRD name +suffixed by the generated spec version. For example, a CR using the `java-operator-sdk.io` group +with a `mycrs` plural form will result in 2 files: - `mycrs.java-operator-sdk.io-v1.yml` - `mycrs.java-operator-sdk.io-v1beta1.yml` **NOTE:** -> Quarkus users using the `quarkus-operator-sdk` extension do not need to add any extra dependency to get their CRD generated as this is handled by the extension itself. - - - - +> Quarkus users using the `quarkus-operator-sdk` extension do not need to add any extra dependency +> to get their CRD generated as this is handled by the extension itself. \ No newline at end of file diff --git a/docs/documentation/getting-started.md b/docs/documentation/getting-started.md index 72a06df93c..6188feda93 100644 --- a/docs/documentation/getting-started.md +++ b/docs/documentation/getting-started.md @@ -9,35 +9,44 @@ permalink: /docs/getting-started ## Introduction & Resources on Operators -Operators are easy and simple way to manage resource on Kubernetes clusters but -also outside the cluster. The goal of this SDK is to allow writing operators in Java by -providing a nice API and handling common issues regarding the operators on framework level. - -For an introduction, what is an operator see this [blog post](https://blog.container-solutions.com/kubernetes-operators-explained). +Operators manage both cluster and non-cluster resources on behalf of Kubernetes. This Java +Operator SDK (JOSDK) aims at making it as easy as possible to write Kubernetes operators in Java +using an API that should feel natural to Java developers and without having to worry about many +low-level details that the SDK handles automatically. 
-You can read about the common problems what is this operator framework is solving for you [here](https://blog.container-solutions.com/a-deep-dive-into-the-java-operator-sdk). +For an introduction on operators, please see this +[blog post](https://blog.container-solutions.com/kubernetes-operators-explained). + +You can read about the common problems JOSDK is solving for you +[here](https://blog.container-solutions.com/a-deep-dive-into-the-java-operator-sdk). + +You can also refer to the +[Writing Kubernetes operators using JOSDK blog series](https://developers.redhat.com/articles/2022/02/15/write-kubernetes-java-java-operator-sdk) +. ## Getting Started -The easiest way to get started with SDK is start [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) and -execute one of our [examples](https://github.com/java-operator-sdk/samples/tree/main/mysql-schema). -There is a dedicated page to describe how to [use samples](/docs/using-samples). +The easiest way to get started with SDK is to start +[minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) and +execute one of our [examples](https://github.com/java-operator-sdk/samples/tree/main/mysql-schema). +There is a dedicated page to describe how to [use the samples](/docs/use-samples). -Here are the main steps to develop the code and deploy the operator to a Kubernetes cluster. A more detailed and specific -version can be found under `samples/mysql-schema/README.md`. +Here are the main steps to develop the code and deploy the operator to a Kubernetes cluster. +A more detailed and specific version can be found under `samples/mysql-schema/README.md`. -1. Setup kubectl to work with your Kubernetes cluster of choice. +1. Setup `kubectl` to work with your Kubernetes cluster of choice. 1. Apply Custom Resource Definition 1. Compile the whole project (framework + samples) using `mvn install` in the root directory -1. Run the main class of the sample you picked and check out the sample's README to see what it does. -When run locally the framework will use your Kubernetes client configuration (in ~/.kube/config) to make the connection -to the cluster. This is why it was important to set up kubectl up front. +1. Run the main class of the sample you picked and check out the sample's README to see what it + does. When run locally the framework will use your Kubernetes client configuration (in `~/. + kube/config`) to establish a connection to the cluster. This is why it was important to set + up `kubectl` up front. 1. You can work in this local development mode to play with the code. 1. Build the Docker image and push it to the registry 1. Apply RBAC configuration 1. Apply deployment configuration -1. Verify if the operator is up and running. Don't run it locally anymore to avoid conflicts in processing events from -the cluster's API server. +1. Verify if the operator is up and running. Don't run it locally anymore to avoid conflicts in + processing events from the cluster's API server. diff --git a/docs/documentation/glossary.md b/docs/documentation/glossary.md index 88861d211a..146280cb3f 100644 --- a/docs/documentation/glossary.md +++ b/docs/documentation/glossary.md @@ -7,9 +7,11 @@ permalink: /docs/glossary # Glossary -- **Primary Resource** - the resource that represents the desired state that the controller is working - to achieve. While this is often a Custom Resource, it can be also be a Kubernetes native resource (Deployment, - ConfigMape,...). 
+- **Primary Resource** - the resource that represents the desired state that the controller is + working + to achieve. While this is often a Custom Resource, it can be also be a Kubernetes native + resource (Deployment, + ConfigMap,...). - **Secondary Resource** - any resource that the controller needs to manage the reach the desired state represented by the primary resource. These resources can be created, updated, deleted or simply read depending on the use case. For example, the `Deployment` controller manages `ReplicatSet` diff --git a/docs/documentation/intro-operators.md b/docs/documentation/intro-operators.md index f69cb2895a..d014f03ff2 100644 --- a/docs/documentation/intro-operators.md +++ b/docs/documentation/intro-operators.md @@ -13,6 +13,7 @@ This page provides a selection of articles that gives an introduction to Kuberne - [Introduction of the concept of Kubernetes Operators](https://blog.container-solutions.com/kubernetes-operators-explained) - [Operator pattern explained in Kubernetes documentation](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) - - [An explanation why Java Operators makes sense](https://blog.container-solutions.com/cloud-native-java-infrastructure-automation-with-kubernetes-operators) + - [An explanation why Java Operators makes sense](https://blog.container-solutions.com/cloud-native-java-infrastructure-automation-with-kubernetes-operators) - [What are the problems an operator framework is solving](https://csviri.medium.com/deep-dive-building-a-kubernetes-operator-sdk-for-java-developers-5008218822cb) + - [Writing Kubernetes operators using JOSDK blog series](https://developers.redhat.com/articles/2022/02/15/write-kubernetes-java-java-operator-sdk) diff --git a/docs/documentation/patterns-best-practices.md b/docs/documentation/patterns-best-practices.md index 20dafcf30d..b23e141dc5 100644 --- a/docs/documentation/patterns-best-practices.md +++ b/docs/documentation/patterns-best-practices.md @@ -7,79 +7,101 @@ permalink: /docs/patterns-best-practices # Patterns and Best Practices -This document describes patterns and best practices, to build and run operators, and how to implement them in terms of -Java Operator SDK. +This document describes patterns and best practices, to build and run operators, and how to +implement them in terms of the Java Operator SDK (JOSDK). -See also best practices in [Operator SDK](https://sdk.operatorframework.io/docs/best-practices/best-practices/). +See also best practices +in [Operator SDK](https://sdk.operatorframework.io/docs/best-practices/best-practices/). ## Implementing a Reconciler ### Reconcile All The Resources All the Time -The reconciliation can be triggered by events from multiple sources. It could be tempting to check the events and -reconcile just the related resource or subset of resources that the controller manages. However, this is **considered as -an anti-pattern** in operators. If triggered, all resources should be reconciled. Usually this means only comparing the -target state with the current state in the cache for most of the resource. The reason behind this is events not reliable -In general, this means events can be lost. In addition to that the operator can crash and while down will miss events. +The reconciliation can be triggered by events from multiple sources. It could be tempting to check +the events and reconcile just the related resource or subset of resources that the controller +manages. 
However, this is **considered an anti-pattern** for operators because the distributed +nature of Kubernetes makes it difficult to ensure that all events are always received. If, for +some reason, your operator doesn't receive some events, if you do not reconcile the whole state, +you might be operating with improper assumptions about the state of the cluster. This is why it +is important to always reconcile all the resources, no matter how tempting it might be to only +consider a subset. Luckily, JOSDK tries to make it as easy and efficient as possible by +providing smart caches to avoid unduly accessing the Kubernetes API server and by making sure +your reconciler is only triggered when needed. -In addition to that such approach might even complicate implementation logic in the `Reconciler`, since parallel -execution of the reconciler is not allowed for the same custom resource, there can be multiple events received for the -same resource or dependent resource during an ongoing execution, ordering those events could be also challenging. - -Since there is a consensus regarding this in the industry, from v2 the events are not even accessible for -the `Reconciler`. +Since there is a consensus regarding this topic in the industry, JOSDK does not provide +event access from `Reconciler` implementations anymore starting with version 2 of the framework. ### EventSources and Caching -As mentioned above during a reconciliation best practice is to reconcile all the dependent resources managed by the -controller. This means that we want to compare a target state with the actual state of the cluster. Reading the actual -state of a resource from the Kubernetes API Server directly all the time would mean a significant load. Therefore, it's -a common practice to instead create a watch for the dependent resources and cache their latest state. This is done -following the Informer pattern. In Java Operator SDK, informer is wrapped into an EventSource, to integrate it with the -eventing system of the framework, resulting in `InformerEventSource`. - -A new event that triggers the reconciliation is propagated when the actual resource is already in cache. So in -reconciler what should be just done is to compare the target calculated state of a dependent resource of the actual -state from the cache of the event source. If it is changed or not in the cache it needs to be created, respectively -updated. +As mentioned above during a reconciliation best practice is to reconcile all the dependent resources +managed by the controller. This means that we want to compare a desired state with the actual +state of the cluster. Reading the actual state of a resource from the Kubernetes API Server +directly all the time would mean a significant load. Therefore, it's a common practice to +instead create a watch for the dependent resources and cache their latest state. This is done +following the Informer pattern. In Java Operator SDK, informers are wrapped into an `EventSource`, +to integrate it with the eventing system of the framework. This is implemented by the +`InformerEventSource` class. + +A new event that triggers the reconciliation is only propagated to the `Reconciler` when the actual +resource is already in cache. `Reconciler` implementations therefore only need to compare the +desired state with the observed one provided by the cached resource. If the resource cannot be +found in the cache, it therefore needs to be created. If the actual state doesn't match the +desired state, the resource needs to be updated. 
### Idempotency -Since all the resources are reconciled during an execution and an execution can be triggered quite often, also retries -of a reconciliation can happen naturally in operators, the implementation of a `Reconciler` -needs to be idempotent. Luckily, since operators are usually managing already declarative resources, this is trivial to -do in most cases. +Since all resources should be reconciled when your `Reconciler` is triggered and reconciliations +can be triggered multiple times for any given resource, especially when retry policies are in +place, it is crucial that `Reconciler` implementations be idempotent, meaning that +the same observed state should result in exactly the same outcome. This also means that +operators should generally operate in a stateless fashion. Luckily, since operators are usually +managing declarative resources, ensuring idempotency is usually not difficult. ### Sync or Async Way of Resource Handling -In an implementation of reconciliation there can be a point when reconciler needs to wait a non-insignificant amount of -time while a resource gets up and running. For example, reconciler would do some additional step only if a Pod is ready -to receive requests. This problem can be approached in two ways synchronously or asynchronously. - -The async way is just return from the reconciler, if there are informers properly in place for the target resource, -reconciliation will be triggered on change. During the reconciliation the pod can be read from the cache of the informer -and a check on it's state can be conducted again. The benefit of this approach is that it will free up the thread, so it -can be used to reconcile other resources. - -The sync way would be to periodically poll the cache of the informer for the pod's state, until the target state is -reached. This would block the thread until the state is reached, which in some cases could take quite long. - -## Why to Have Automated Retries? - -Automatic retries are in place by default, it can be fine-tuned, but in general it's not advised to turn of automatic -retries. One of the reasons is that issues like network error naturally happen and are usually solved by a retry. -Another typical situation is for example when a dependent resource or the custom resource is updated, during the update -usually there is optimistic version control in place. So if someone updated the resource during reconciliation, maybe -using `kubectl` or another process, the update would fail on a conflict. A retry solves this problem simply by executing -the reconciliation again. +Depending on your use case, it's possible that your reconciliation logic needs to wait a +non-negligible amount of time while the operator waits for resources to reach their desired +state. For example, your `Reconciler` might need to wait for a `Pod` to get ready before +performing additional actions. This problem can be approached either synchronously or +asynchronously. + +The asynchronous way is to just exit the reconciliation logic as soon as the `Reconciler` +determines that it cannot complete its full logic at this point in time. This frees resources to +process other primary resource events. However, this requires that adequate event sources are +put in place to monitor state changes of all the resources the operator waits for. 
When this is +done properly, any state change will trigger the `Reconciler` again and it will get the +opportunity to finish its processing. + +The synchronous way would be to periodically poll the resources' state until they reach their +desired state. If this is done in the context of the `reconcile` method of your `Reconciler` +implementation, this would block the current thread for possibly a long time. It's therefore +usually recommended to use the asynchronous processing fashion. + +## Why Have Automatic Retries? + +Automatic retries are in place by default and can be configured to your needs. It is also +possible to completely deactivate the feature, though we advise against it. The main reason +to configure automatic retries for your `Reconciler` is that errors occur quite +often due to the distributed nature of Kubernetes: transient network errors can be easily dealt +with by automatic retries. Similarly, resources can be modified by different actors at the same +time so it's not unheard of to get conflicts when working with Kubernetes resources. Such +conflicts can usually be quite naturally resolved by reconciling the resource again. If it's +done automatically, the whole process can be completely transparent. ## Managing State -When managing only kubernetes resources an explicit state is not necessary about the resources. The state can be -read/watched, also filtered using labels. Or just following some naming convention. However, when managing external -resources, there can be a situation for example when the created resource can only be addressed by an ID generated when -the resource was created. This ID needs to be stored, so on next reconciliation it could be used to addressing the -resource. One place where it could go is the status sub-resource. On the other hand by definition status should be just -the result of a reconciliation. Therefore, it's advised in general, to put such state into a separate resource usually a -Kubernetes Secret or ConfigMap or a dedicated CustomResource, where the structure can be also validated. +Thanks to the declarative nature of Kubernetes resources, operators that deal only with +Kubernetes resources can operate in a stateless fashion, i.e. they do not need to maintain +information about the state of these resources, as it should be possible to completely rebuild +the resource state from its representation (that's what declarative means, after all). +However, this usually doesn't hold true anymore when dealing with external resources and it +might be necessary for the operator to keep track of this external state so that it is available +when another reconciliation occurs. While such state could be put in the primary resource's +status sub-resource, this could quickly become difficult to manage if a lot of state needs to be +tracked. It also goes against the best practice that a resource's status should represent the +actual resource state, while its spec represents the desired state. Putting state that doesn't +strictly represent the resource's actual state is therefore discouraged. Instead, it's +advised to put such state into a separate resource meant for this purpose such as a +Kubernetes Secret or ConfigMap or even a dedicated Custom Resource, whose structure can be more +easily validated. 
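+
+For example, an identifier returned by an external API when the resource was first created can be
+stored in a dedicated ConfigMap and looked up again on subsequent reconciliations. A minimal
+sketch of this idea (the `-state` naming convention and the `external-id` key are illustrative,
+not something prescribed by the framework):
+
+```java
+import java.util.Optional;
+
+import io.fabric8.kubernetes.api.model.ConfigMap;
+import io.fabric8.kubernetes.api.model.ConfigMapBuilder;
+import io.fabric8.kubernetes.api.model.HasMetadata;
+import io.fabric8.kubernetes.client.KubernetesClient;
+
+public class ExternalStateStore {
+
+  private static final String EXTERNAL_ID_KEY = "external-id";
+
+  private final KubernetesClient client;
+
+  public ExternalStateStore(KubernetesClient client) {
+    this.client = client;
+  }
+
+  // Persists the external identifier next to the primary resource so that later reconciliations
+  // can address the external resource again.
+  public void storeExternalId(HasMetadata primary, String externalId) {
+    ConfigMap state = new ConfigMapBuilder()
+        .withNewMetadata()
+        .withName(primary.getMetadata().getName() + "-state")
+        .withNamespace(primary.getMetadata().getNamespace())
+        .endMetadata()
+        .addToData(EXTERNAL_ID_KEY, externalId)
+        .build();
+    client.configMaps().inNamespace(primary.getMetadata().getNamespace()).createOrReplace(state);
+  }
+
+  // Returns the previously stored identifier, if any.
+  public Optional<String> findExternalId(HasMetadata primary) {
+    ConfigMap state = client.configMaps()
+        .inNamespace(primary.getMetadata().getNamespace())
+        .withName(primary.getMetadata().getName() + "-state")
+        .get();
+    return Optional.ofNullable(state).map(ConfigMap::getData).map(data -> data.get(EXTERNAL_ID_KEY));
+  }
+}
+```
+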
diff --git a/docs/documentation/workflows.md b/docs/documentation/workflows.md index e5454b0425..24ace2d7c1 100644 --- a/docs/documentation/workflows.md +++ b/docs/documentation/workflows.md @@ -7,49 +7,59 @@ permalink: /docs/workflows ## Overview -Kubernetes (k8s) does not have notion of a resource "depends on" on another k8s resource, -in terms of in what order a set of resources should be reconciled. However, Kubernetes operators are used to manage also -external (non k8s) resources. Typically, when an operator manages a service, after the service is first deployed -some additional API calls are required to configure it. In this case the configuration step depends -on the service and related resources, in other words the configuration needs to be reconciled after the service is -up and running. - -The intention behind workflows is to make it easy to describe more complex, almost arbitrary scenarios in a declarative -way. While [dependent resources](https://javaoperatorsdk.io/docs/dependent-resources) describes a logic how a single -resources should be reconciled, workflows describes the process how a set of target resources should be reconciled. - -Workflows are defined as a set of [dependent resources](https://javaoperatorsdk.io/docs/dependent-resources) (DR) -and dependencies between them, along with some conditions that mainly helps define optional resources and -pre- and post-conditions to describe expected states of a resource at a certain point in the workflow. - -## Elements of Workflow - -- **Dependent resource** (DR) - are the resources which are managed in reconcile logic. -- **Depends-on relation** - if a DR `B` depends on another DR `A`, means that `B` will be reconciled after `A`. -- **Reconcile precondition** - is a condition that needs to be fulfilled before the DR is reconciled. This allows also - to define optional resources, that for example only created if a flag in a custom resource `.spec` has some - specific value. -- **Ready postcondition** - checks if a resource could be considered "ready", typically if pods of a deployment are up - and running. -- **Delete postcondition** - during the cleanup phase it can be used to check if the resources is successfully deleted, - so the next resource on which the target resources depends can be deleted as next step. +Kubernetes (k8s) does not have the notion of a resource "depending on" another k8s resource, +at least not in terms of the order in which these resources should be reconciled. Kubernetes +operators typically need to reconcile resources in order because these resources' state often +depends on the state of other resources or cannot be processed until these other resources reach +a given state or some condition holds true for them. Dealing with such scenarios is therefore +rather common for operators and the purpose of the workflow feature of the Java Operator SDK +(JOSDK) is to simplify supporting such cases in a declarative way. Workflows build on top of the +[dependent resources](https://javaoperatorsdk.io/docs/dependent-resources) feature. +While dependent resources focus on how a given secondary resource should be reconciled, +workflows focus on orchestrating how these dependent resources should be reconciled. + +Workflows describe how a set of +[dependent resources](https://javaoperatorsdk.io/docs/dependent-resources) (DR) depend on one +another, along with the conditions that need to hold true at certain stages of the +reconciliation process. 
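+
+The conditions mentioned above are plain Java objects implementing the `Condition` interface and
+are usually small, stateless predicates. A hedged sketch of a reconcile pre-condition follows (the
+`WebPage` primary type and its `exposed` flag are borrowed from the sample operator referenced
+later in this document; the three-argument lambda shape mirrors the `Condition` usages in the
+tests changed further down in this diff):
+
+```java
+import io.fabric8.kubernetes.api.model.networking.v1.Ingress;
+import io.javaoperatorsdk.operator.processing.dependent.workflow.Condition;
+
+public class WorkflowConditions {
+
+  // Only reconcile the Ingress dependent resource when the primary WebPage asks to be exposed;
+  // when the condition does not hold, the workflow deletes the Ingress instead.
+  public static final Condition<Ingress, WebPage> EXPOSED_INGRESS =
+      (dependentResource, primary, context) -> Boolean.TRUE.equals(primary.getSpec().getExposed());
+}
+```
+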
+ +## Elements of Workflow + +- **Dependent resource** (DR) - the resources being managed in a given reconciliation logic. +- **Depends-on relation** - a `B` DR depends on another `A` DR if `B` needs to be reconciled + after `A`. +- **Reconcile precondition** - is a condition on a given DR that needs to become true before the + DR is reconciled. This also makes it possible to define optional resources that would, for example, only be + created if a flag in a custom resource `.spec` has some specific value. +- **Ready postcondition** - is a condition on a given DR to prevent the workflow from + proceeding until the condition checking whether the DR is ready holds true. +- **Delete postcondition** - is a condition on a given DR to check whether the reconciliation of + dependents can proceed after the DR is supposed to have been deleted. ## Defining Workflows -Similarly to dependent resources, there are two ways to define workflows, in managed and standalone manner. +Similarly to dependent resources, there are two ways to define workflows, in a managed or standalone +manner. ### Managed -Annotations can be used to declaratively define a workflow for the reconciler. In this case the workflow is executed -before the `reconcile` method is called. The result of the reconciliation is accessed through the `context` object. - -Following sample shows a hypothetical sample to showcase all the elements, where there are two resources a Deployment and -a ConfigMap, where the ConfigMap depends on the deployment. Deployment has a ready condition so, the config map is only -reconciled after the Deployment and only if it is ready (see ready-postcondition). The ConfigMap has attached reconcile -precondition, therefore it is only reconciled if that condition holds. In addition to that has a delete-postCondition, -thus only considered to be deleted if that condition holds. +Annotations can be used to declaratively define a workflow for a `Reconciler`. Similarly to how +things are done for dependent resources, managed workflows execute before the `reconcile` method +is called. The result of the reconciliation can be accessed via the `Context` object that is +passed to the `reconcile` method. + +The following sample shows a hypothetical use case to showcase all the elements: the primary +`TestCustomResource` resource handled by our `Reconciler` defines two dependent resources, a +`Deployment` and a `ConfigMap`. The `ConfigMap` depends on the `Deployment` so will be +reconciled after it. Moreover, the `Deployment` dependent resource defines a ready +post-condition, meaning that the `ConfigMap` will not be reconciled until the condition defined +by the `Deployment` becomes `true`. Additionally, the `ConfigMap` dependent also defines a +reconcile pre-condition, so it also won't be reconciled until that condition becomes `true`. The +`ConfigMap` also defines a delete post-condition, which means that the workflow implementation +will only consider the `ConfigMap` deleted once that post-condition becomes `true`. ```java + @ControllerConfiguration(dependents = { @Dependent(name = DEPLOYMENT_NAME, type = DeploymentDependentResource.class, readyPostcondition = DeploymentReadyCondition.class), @@ -60,14 +70,14 @@ thus only considered to be deleted if that condition holds. 
}) public class SampleWorkflowReconciler implements Reconciler<WorkflowAllFeatureCustomResource>, Cleaner<WorkflowAllFeatureCustomResource> { - - public static final String DEPLOYMENT_NAME = "deployment"; - + + public static final String DEPLOYMENT_NAME = "deployment"; + @Override public UpdateControl<WorkflowAllFeatureCustomResource> reconcile( WorkflowAllFeatureCustomResource resource, - Context<WorkflowAllFeatureCustomResource> context) { - + Context<WorkflowAllFeatureCustomResource> context) { + resource.getStatus() .setReady( context.managedDependentResourceContext() // accessing workflow reconciliation results @@ -80,19 +90,22 @@ public class SampleWorkflowReconciler implements Reconciler, public DeleteControl cleanup(WorkflowAllFeatureCustomResource resource, Context<WorkflowAllFeatureCustomResource> context) { // omitted code - + return DeleteControl.defaultDelete(); - } + } } ``` -### Standalone +### Standalone -In this mode workflow is built manually using [standalone dependent resources](https://javaoperatorsdk.io/docs/dependent-resources#standalone-dependent-resources) -. The workflow is created using a builder, that is explicitly called in the reconciler (from web page sample): +In this mode the workflow is built manually +using [standalone dependent resources](https://javaoperatorsdk.io/docs/dependent-resources#standalone-dependent-resources) +. The workflow is created using a builder that is explicitly invoked in the reconciler (taken from the web +page sample): ```java + @ControllerConfiguration( labelSelector = WebPageDependentsWorkflowReconciler.DEPENDENT_RESOURCE_LABEL_SELECTOR) public class WebPageDependentsWorkflowReconciler @@ -112,11 +125,11 @@ public class WebPageDependentsWorkflowReconciler public WebPageDependentsWorkflowReconciler(KubernetesClient kubernetesClient) { initDependentResources(kubernetesClient); workflow = new WorkflowBuilder<WebPage>() - .addDependent(configMapDR).build() - .addDependent(deploymentDR).build() - .addDependent(serviceDR).build() - .addDependent(ingressDR).withReconcileCondition(new IngressCondition()).build() - .build(); + .addDependentResource(configMapDR) + .addDependentResource(deploymentDR) + .addDependentResource(serviceDR) + .addDependentResource(ingressDR).withReconcilePrecondition(new ExposedIngressCondition()) + .build(); } @Override @@ -136,52 +149,70 @@ public class WebPageDependentsWorkflowReconciler webPage.setStatus(createStatus(result)); return UpdateControl.patchStatus(webPage); } - // emitted code + // omitted code } ``` -## Workflow Execution +## Workflow Execution -This section describes how a workflow is executed in details, how is the ordering determined and -how condition and -errors affect the behavior. The workflow execution as also its API denotes, can be divided to into two parts, -the reconciliation and cleanup. [Cleanup](https://javaoperatorsdk.io/docs/features#the-reconcile-and-cleanup) is +This section describes how a workflow is executed in detail, how the ordering is determined and +how conditions and errors affect the behavior. The workflow execution is divided into two parts +similarly to how `Reconciler` and `Cleaner` behaviors are separated. +[Cleanup](https://javaoperatorsdk.io/docs/features#the-reconcile-and-cleanup) is executed if a resource is marked for deletion. - ## Common Principles -- **As complete as possible execution** - when a workflow is reconciled, it tries to reconcile as many resources as - possible. Thus is an error happens or a ready condition is not met for a resources, all the other independent resources - will be still reconciled. This is the opposite to fail-fast approach. The assumption is that eventually in this way the - overall desired state is achieved faster than with a fail fast approach. 
-- **Concurrent reconciliation of independent resources** - the resources which are not dependent on each are processed - concurrently. The level of concurrency is customizable, could be set to one if required. By default, workflows use - the executor service from [ConfigurationService](https://github.com/java-operator-sdk/java-operator-sdk/blob/6f2a252952d3a91f6b0c3c38e5e6cc28f7c0f7b3/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/config/ConfigurationService.java#L120-L120) +- **As complete as possible execution** - when a workflow is reconciled, it tries to reconcile as + many resources as possible. Thus if an error happens or a ready condition is not met for a + resource, all the other independent resources will still be reconciled. This is the opposite + to a fail-fast approach. The assumption is that eventually in this way the overall state will + converge faster towards the desired state than would be the case if the reconciliation was + aborted as soon as an error occurred. +- **Concurrent reconciliation of independent resources** - the resources which don't depend on + others are processed concurrently. The level of concurrency is customizable, and could be set to + one if required. By default, workflows use the executor service + from [ConfigurationService](https://github.com/java-operator-sdk/java-operator-sdk/blob/6f2a252952d3a91f6b0c3c38e5e6cc28f7c0f7b3/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/config/ConfigurationService.java#L120-L120) ## Reconciliation -This section describes how a workflow is executed, first the rules are defined, then are explained on samples: +This section describes how a workflow is executed, first presenting the rules that apply, then +demonstrating them with examples: ### Rules - 1. DR is reconciled if it does not depend on another DR, or ALL the DRs it depends on are ready. In case it - has a reconcile-precondition that condition must be met too. (So here ready means that it is successfully - reconciled - without any error - and if it has a ready condition that condition is met). - 2. If a reconcile-precondition of a DR is not met, it is deleted. If there are dependent resources which depends on it - are deleted too as first - this applies recursively. That means that DRs are always deleted in revers order compared - how are reconciled. - 3. Delete is called on a dependent resource if as described in point 2. it (possibly transitively) depends on a DR which - did not meet it's reconcile condition, and has no DRs depends on it, or if the DR-s which depends on it are already - successfully deleted (within actual execution). "Delete is called" means, that the dependent resource is checked - if it implements `Deleter` interface, if implements it but do not implement `GarbageCollected` interface, - the `Deleter.delete` method called. If a DR does not implement `Deleter` interface, it is considered as deleted - automatically. Successfully deleted means, that it is deleted and if a delete-postcondition is present it is met. - +1. A workflow is a Directed Acyclic Graph (DAG) built from the DRs and their associated + `depends-on` relations. +2. Root nodes, i.e. nodes in the graph that do not depend on other nodes, are reconciled first, + in a parallel manner. +3. A DR is reconciled if it does not depend on any other DRs, or *ALL* the DRs it depends on are + reconciled and ready. If a DR defines a reconcile pre-condition, then this condition must + become `true` before the DR is reconciled. +4. 
A DR is considered *ready* if it got successfully reconciled and any ready post-condition it + might define is `true`. +5. If a DR's reconcile pre-condition is not met, this DR is deleted. All of the DRs that depend + on the dependent resource being considered are also recursively deleted. This implies that + DRs are deleted in reverse order compared to the one in which they are reconciled. + The reasoning behind this behavior is as follows: a DR with a reconcile pre-condition is only + reconciled if the condition holds `true`. This means that if the condition is `false` and the + resource didn't exist already, then the associated resource would not be created. To ensure + idempotency (i.e. with the same input state, we should have the same output state), it + follows that if the condition doesn't hold `true` anymore, the associated resource needs to + be deleted because the resource shouldn't exist/have been created. +6. For a DR to be deleted by a workflow, it needs to implement the `Deleter` interface, in which + case its `delete` method will be called, unless it also implements the `GarbageCollected` + interface. If a DR doesn't implement `Deleter` it is considered automatically deleted. If + a delete post-condition exists for this DR, it needs to become `true` for the workflow to + consider the DR as successfully deleted. + ### Samples -Notation: The arrows depicts reconciliation ordering, or in depends-on relation in reverse direction: -`1 --> 2` mean `DR 2` depends-on `DR 1`. +Notation: The arrows depict reconciliation ordering, thus following the reverse direction of the +`depends-on` relation: +`1 --> 2` means `DR 2` depends-on `DR 1`. #### Reconcile Sample @@ -195,13 +226,15 @@ stateDiagram-v2 -- At the workflow the reconciliation of the nodes would happen in the following way. DR with index `1` is reconciled. - After that DR `2` and `3` is reconciled concurrently, if both finished their reconciliation, node `4` is reconciled too. -- In case for example `2` would have a ready condition, that would be evaluated as "not met", `4` would not be reconciled. - However `1`,`2` and `3` would be reconciled. -- In case `1` would have a ready condition that is not met, neither `2`,`3` or `4` would be reconciled. -- If there would be an error during the reconciliation of `2`, `4` would not be reconciled, but `3` would be - (also `1` of course). +- Root nodes (i.e. nodes that don't depend on any others) are reconciled first. In this example, + DR `1` is reconciled first since it doesn't depend on others. + After that both DR `2` and `3` are reconciled concurrently, then DR `4` once both are + reconciled successfully. +- If DR `2` had a ready condition and it evaluated to `false`, DR `4` would not be reconciled. + However, `1`, `2` and `3` would be. +- If `1` had a `false` ready condition, none of `2`, `3` or `4` would be reconciled. +- If `2`'s reconciliation resulted in an error, `4` would not be reconciled, but `3` + would be (and `1` as well, of course). #### Sample with Reconcile Precondition @@ -215,25 +248,27 @@ stateDiagram-v2 -- Considering this sample for case `3` has reconcile-precondition, what is not met. In that case DR `1` and `2` would be - reconciled. However, DR `3`,`4`,`5` would be deleted in the following way. DR `4` and `5` would be deleted concurrently. 
- DR `3` would be deleted if `4` and `5` is deleted successfully, thus no error happened during deletion and all - delete-postconditions are met. - - If delete-postcondition for `5` would not be met `3` would not be deleted; `4` would be. - - Similarly, in there would be an error for `5`, `3` would not be deleted, `4` would be. +- If `3` has a reconcile pre-condition that is not met, `1` and `2` would be reconciled. However, + DR `3`,`4`,`5` would be deleted: `4` and `5` would be deleted concurrently but `3` would only + be deleted if `4` and `5` were deleted successfully (i.e. without error) and all existing + delete post-conditions were met. +- If `5` had a delete post-condition that was `false`, `3` would not be deleted but `4` + would still be because they don't depend on one another. +- Similarly, if `5`'s deletion resulted in an error, `3` would not be deleted but `4` would be. ## Cleanup -Cleanup works identically as delete for resources in reconciliation in case reconcile-precondition is not met, just for -the whole workflow. +Cleanup works the same way as the deletion of resources during reconciliation when a reconcile pre-condition +is not met, except that it applies to the whole workflow. -The rule is relatively simple: +### Rules -Delete is called on a DR if there is no DR that depends on it, or if the DR-s which depends on it are -already deleted successfully (withing this execution of workflow). Successfully deleted means, that it is deleted and -if a delete-postcondition is present it is met. "Delete is called" means, that the dependent resource is checked if it -implements `Deleter` interface, if implements it but do not implement `GarbageCollected` interface, the `Deleter.delete` -method called. If a DR does not implement `Deleter` interface, it is considered as deleted automatically. +1. Delete is called on a DR if there is no DR that depends on it. +2. If a DR has DRs that depend on it, it will only be deleted if all these DRs are successfully + deleted without error and any delete post-condition is `true`. +3. A DR is "manually" deleted (i.e. its `Deleter.delete` method is called) if it implements the + `Deleter` interface but does not implement `GarbageCollected`. If a DR does not implement the + `Deleter` interface, it is considered deleted automatically. ### Sample @@ -247,30 +282,37 @@ stateDiagram-v2 -- The DRs are deleted in the following order: `4` is deleted, after `2` and `3` are deleted concurrently, after both - succeeded `1` is deleted. -- If delete-postcondition would not be met for `2`, node `1` would not be deleted. DR `4` and `3` would be deleted. -- If `2` would be errored, DR `1` would not be deleted. DR `4` and `3` would be deleted. -- if `4` would be errored, no other DR would be deleted. +- The DRs are deleted in the following order: `4` is deleted first, then `2` and `3` are deleted + concurrently, and, only after both are successfully deleted, `1` is deleted. +- If `2` had a delete post-condition that was `false`, `1` would not be deleted. `4` and `3` + would be deleted. +- If `2` was in error, DR `1` would not be deleted. DRs `4` and `3` would be deleted. +- If `4` was in error, no other DR would be deleted. ## Error Handling -As mentioned before if an error happens during a reconciliation, the reconciliation of other dependent resources will -still happen. 
There might a case that multiple DRs are errored, therefore workflows throws an -['AggregatedOperatorException'](https://github.com/java-operator-sdk/java-operator-sdk/blob/86e5121d56ed4ecb3644f2bc8327166f4f7add72/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/AggregatedOperatorException.java) -that will contain all the related exceptions. +As mentioned before, if an error happens during a reconciliation, the reconciliation of other +dependent resources will still happen, assuming they don't depend on the one that failed. In +case multiple DRs fail, the workflow would throw an +['AggregatedOperatorException'](https://github.com/java-operator-sdk/java-operator-sdk/blob/86e5121d56ed4ecb3644f2bc8327166f4f7add72/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/AggregatedOperatorException.java) +containing all the related exceptions. -The exceptions can be handled by [`ErrorStatusHandler`](https://github.com/java-operator-sdk/java-operator-sdk/blob/86e5121d56ed4ecb3644f2bc8327166f4f7add72/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/AggregatedOperatorException.java) +The exceptions can be handled +by [`ErrorStatusHandler`](https://github.com/java-operator-sdk/java-operator-sdk/blob/14620657fcacc8254bb96b4293eded84c20ba685/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/api/reconciler/ErrorStatusHandler.java) ## Notes and Caveats -- Delete is almost always called on every resource during the cleanup. However, it might be the case that the resources - was already deleted in a previous run, or not even created. This should not be a problem, since dependent resources - usually cache the state of the resource, so are already aware that the resource not exists, thus basically doing nothing - if delete is called on an already not existing resource. -- If a resource has owner references, it will be automatically deleted by Kubernetes garbage collector if - the owner resource is marked for deletion. This might not be desirable, to make sure that delete is handled by the - workflow don't use garbage collected kubernetes dependent resource, use for example [`CRUDNoGCKubernetesDependentResource`](https://github.com/java-operator-sdk/java-operator-sdk/blob/86e5121d56ed4ecb3644f2bc8327166f4f7add72/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/kubernetes/CRUDNoGCKubernetesDependentResource.java). -- After a workflow executed no state is persisted regarding the workflow execution. On every reconciliation - all the resources are reconciled again, in other words the whole workflow is evaluated again. +- Delete is almost always called on every resource during the cleanup. However, it might be the case + that the resources were already deleted in a previous run, or not even created. This should + not be a problem, since dependent resources usually cache the state of the resource, so they are + already aware that the resource does not exist and that nothing needs to be done if delete is + called. +- If a resource has owner references, it will be automatically deleted by the Kubernetes garbage + collector if the owner resource is marked for deletion. 
This might not be desirable; to make + sure that deletion is handled by the workflow, don't use a garbage-collected Kubernetes dependent + resource but use, for + example, [`CRUDNoGCKubernetesDependentResource`](https://github.com/java-operator-sdk/java-operator-sdk/blob/86e5121d56ed4ecb3644f2bc8327166f4f7add72/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/kubernetes/CRUDNoGCKubernetesDependentResource.java) + . +- No state is persisted regarding the workflow execution. Every reconciliation causes all the + resources to be reconciled again, in other words, the whole workflow is evaluated again.
diff --git a/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/kubernetes/KubernetesDependentResource.java b/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/kubernetes/KubernetesDependentResource.java index ff5cc308bf..930e5fd5b4 100644 --- a/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/kubernetes/KubernetesDependentResource.java +++ b/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/kubernetes/KubernetesDependentResource.java @@ -30,6 +30,7 @@ import io.javaoperatorsdk.operator.processing.event.source.informer.Mappers; @Ignore +@SuppressWarnings("rawtypes") public abstract class KubernetesDependentResource<R extends HasMetadata, P extends HasMetadata> extends AbstractEventSourceHolderDependentResource<R, P, InformerEventSource<R, P>> implements KubernetesClientAware,
diff --git a/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/workflow/builder/DependentBuilder.java b/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/workflow/builder/DependentBuilder.java deleted file mode 100644 index 70991dc91d..0000000000 --- a/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/workflow/builder/DependentBuilder.java +++ /dev/null @@ -1,57 +0,0 @@ -package io.javaoperatorsdk.operator.processing.dependent.workflow.builder; - -import java.util.Arrays; -import java.util.HashSet; -import java.util.Set; - -import io.fabric8.kubernetes.api.model.HasMetadata; -import io.javaoperatorsdk.operator.api.reconciler.dependent.DependentResource; -import io.javaoperatorsdk.operator.processing.dependent.workflow.Condition; -import io.javaoperatorsdk.operator.processing.dependent.workflow.DependentResourceNode; - -@SuppressWarnings("rawtypes") -public class DependentBuilder

{ - - private final WorkflowBuilder

workflowBuilder; - private final DependentResourceNode node; - - public DependentBuilder(WorkflowBuilder

workflowBuilder, DependentResourceNode node) { - this.workflowBuilder = workflowBuilder; - this.node = node; - } - - public DependentBuilder

dependsOn(Set dependentResources) { - for (var dependentResource : dependentResources) { - var dependsOn = workflowBuilder.getNodeByDependentResource(dependentResource); - node.addDependsOnRelation(dependsOn); - } - return this; - } - - public DependentBuilder

dependsOn(DependentResource... dependentResources) { - if (dependentResources != null) { - return dependsOn(new HashSet<>(Arrays.asList(dependentResources))); - } - return this; - } - - public DependentBuilder

withReconcilePrecondition(Condition reconcilePrecondition) { - node.setReconcilePrecondition(reconcilePrecondition); - return this; - } - - public DependentBuilder

withReadyPostcondition(Condition readyPostcondition) { - node.setReadyPostcondition(readyPostcondition); - return this; - } - - public DependentBuilder

withDeletePostcondition(Condition deletePostcondition) { - node.setDeletePostcondition(deletePostcondition); - return this; - } - - public WorkflowBuilder

build() { - return workflowBuilder; - } - -} diff --git a/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/workflow/builder/WorkflowBuilder.java b/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/workflow/builder/WorkflowBuilder.java index 14d1b96f76..e60b26db97 100644 --- a/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/workflow/builder/WorkflowBuilder.java +++ b/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/dependent/workflow/builder/WorkflowBuilder.java @@ -1,5 +1,6 @@ package io.javaoperatorsdk.operator.processing.dependent.workflow.builder; +import java.util.Arrays; import java.util.HashSet; import java.util.Set; import java.util.concurrent.ExecutorService; @@ -7,6 +8,7 @@ import io.fabric8.kubernetes.api.model.HasMetadata; import io.javaoperatorsdk.operator.api.config.ConfigurationServiceProvider; import io.javaoperatorsdk.operator.api.reconciler.dependent.DependentResource; +import io.javaoperatorsdk.operator.processing.dependent.workflow.Condition; import io.javaoperatorsdk.operator.processing.dependent.workflow.DependentResourceNode; import io.javaoperatorsdk.operator.processing.dependent.workflow.Workflow; @@ -18,14 +20,42 @@ public class WorkflowBuilder

{ private final Set> dependentResourceNodes = new HashSet<>(); private boolean throwExceptionAutomatically = THROW_EXCEPTION_AUTOMATICALLY_DEFAULT; - public DependentBuilder

addDependentResource(DependentResource dependentResource) { - DependentResourceNode node = new DependentResourceNode<>(dependentResource); - dependentResourceNodes.add(node); - return new DependentBuilder<>(this, node); + private DependentResourceNode currentNode; + + public WorkflowBuilder

addDependentResource(DependentResource dependentResource) { + currentNode = new DependentResourceNode<>(dependentResource); + dependentResourceNodes.add(currentNode); + return this; + } + + public WorkflowBuilder

dependsOn(Set dependentResources) { + for (var dependentResource : dependentResources) { + var dependsOn = getNodeByDependentResource(dependentResource); + currentNode.addDependsOnRelation(dependsOn); + } + return this; + } + + public WorkflowBuilder

dependsOn(DependentResource... dependentResources) { + if (dependentResources != null) { + return dependsOn(new HashSet<>(Arrays.asList(dependentResources))); + } + return this; } - void addDependentResourceNode(DependentResourceNode node) { - dependentResourceNodes.add(node); + public WorkflowBuilder

withReconcilePrecondition(Condition reconcilePrecondition) { + currentNode.setReconcilePrecondition(reconcilePrecondition); + return this; + } + + public WorkflowBuilder

withReadyPostcondition(Condition readyPostcondition) { + currentNode.setReadyPostcondition(readyPostcondition); + return this; + } + + public WorkflowBuilder

withDeletePostcondition(Condition deletePostcondition) { + currentNode.setDeletePostcondition(deletePostcondition); + return this; } DependentResourceNode getNodeByDependentResource(DependentResource dependentResource) { diff --git a/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/PrimaryToSecondaryMapper.java b/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/PrimaryToSecondaryMapper.java index 866a2b2251..68fb9f505f 100644 --- a/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/PrimaryToSecondaryMapper.java +++ b/operator-framework-core/src/main/java/io/javaoperatorsdk/operator/processing/event/source/PrimaryToSecondaryMapper.java @@ -6,7 +6,7 @@ import io.javaoperatorsdk.operator.processing.event.ResourceID; /** - * Primary to Secondary mapper only needed in some cases, typically when there it many-to-one or + * Primary to Secondary mapper only needed in some cases, typically when there is many-to-one or * many-to-many relation between primary and secondary resources. If there is owner reference (or * reference with annotations) from secondary to primary this is not needed. See * PrimaryToSecondaryIT integration tests that handles many-to-many relationship. diff --git a/operator-framework-core/src/test/java/io/javaoperatorsdk/operator/processing/dependent/workflow/WorkflowCleanupExecutorTest.java b/operator-framework-core/src/test/java/io/javaoperatorsdk/operator/processing/dependent/workflow/WorkflowCleanupExecutorTest.java index 6540f00157..7c1c5d6ff6 100644 --- a/operator-framework-core/src/test/java/io/javaoperatorsdk/operator/processing/dependent/workflow/WorkflowCleanupExecutorTest.java +++ b/operator-framework-core/src/test/java/io/javaoperatorsdk/operator/processing/dependent/workflow/WorkflowCleanupExecutorTest.java @@ -19,10 +19,10 @@ class WorkflowCleanupExecutorTest extends AbstractWorkflowExecutorTest { @Test void cleanUpDiamondWorkflow() { var workflow = new WorkflowBuilder() - .addDependentResource(dd1).build() - .addDependentResource(dr1).dependsOn(dd1).build() - .addDependentResource(dd2).dependsOn(dd1).build() - .addDependentResource(dd3).dependsOn(dr1, dd2).build() + .addDependentResource(dd1) + .addDependentResource(dr1).dependsOn(dd1) + .addDependentResource(dd2).dependsOn(dd1) + .addDependentResource(dd3).dependsOn(dr1, dd2) .build(); var res = workflow.cleanup(new TestCustomResource(), null); @@ -38,10 +38,10 @@ void cleanUpDiamondWorkflow() { @Test void dontDeleteIfDependentErrored() { var workflow = new WorkflowBuilder() - .addDependentResource(dd1).build() - .addDependentResource(dd2).dependsOn(dd1).build() - .addDependentResource(dd3).dependsOn(dd2).build() - .addDependentResource(errorDD).dependsOn(dd2).build() + .addDependentResource(dd1) + .addDependentResource(dd2).dependsOn(dd1) + .addDependentResource(dd3).dependsOn(dd2) + .addDependentResource(errorDD).dependsOn(dd2) .withThrowExceptionFurther(false) .build(); @@ -60,9 +60,8 @@ void dontDeleteIfDependentErrored() { @Test void cleanupConditionTrivialCase() { var workflow = new WorkflowBuilder() - .addDependentResource(dd1).build() + .addDependentResource(dd1) .addDependentResource(dd2).dependsOn(dd1).withDeletePostcondition(noMetDeletePostCondition) - .build() .build(); var res = workflow.cleanup(new TestCustomResource(), null); @@ -76,9 +75,8 @@ void cleanupConditionTrivialCase() { @Test void cleanupConditionMet() { var workflow = new WorkflowBuilder() - 
.addDependentResource(dd1).build() + .addDependentResource(dd1) .addDependentResource(dd2).dependsOn(dd1).withDeletePostcondition(metDeletePostCondition) - .build() .build(); var res = workflow.cleanup(new TestCustomResource(), null); @@ -95,11 +93,10 @@ void cleanupConditionDiamondWorkflow() { TestDeleterDependent dd4 = new TestDeleterDependent("DR_DELETER_4"); var workflow = new WorkflowBuilder() - .addDependentResource(dd1).build() - .addDependentResource(dd2).dependsOn(dd1).build() + .addDependentResource(dd1) + .addDependentResource(dd2).dependsOn(dd1) .addDependentResource(dd3).dependsOn(dd1).withDeletePostcondition(noMetDeletePostCondition) - .build() - .addDependentResource(dd4).dependsOn(dd2, dd3).build() + .addDependentResource(dd4).dependsOn(dd2, dd3) .build(); var res = workflow.cleanup(new TestCustomResource(), null); @@ -119,7 +116,7 @@ void cleanupConditionDiamondWorkflow() { void dontDeleteIfGarbageCollected() { GarbageCollectedDeleter gcDel = new GarbageCollectedDeleter("GC_DELETER"); var workflow = new WorkflowBuilder() - .addDependentResource(gcDel).build() + .addDependentResource(gcDel) .build(); var res = workflow.cleanup(new TestCustomResource(), null); diff --git a/operator-framework-core/src/test/java/io/javaoperatorsdk/operator/processing/dependent/workflow/WorkflowReconcileExecutorTest.java b/operator-framework-core/src/test/java/io/javaoperatorsdk/operator/processing/dependent/workflow/WorkflowReconcileExecutorTest.java index 2c81bcb50b..c0c2e9899c 100644 --- a/operator-framework-core/src/test/java/io/javaoperatorsdk/operator/processing/dependent/workflow/WorkflowReconcileExecutorTest.java +++ b/operator-framework-core/src/test/java/io/javaoperatorsdk/operator/processing/dependent/workflow/WorkflowReconcileExecutorTest.java @@ -7,29 +7,27 @@ import io.javaoperatorsdk.operator.processing.dependent.workflow.builder.WorkflowBuilder; import io.javaoperatorsdk.operator.sample.simple.TestCustomResource; -import static io.javaoperatorsdk.operator.processing.dependent.workflow.ExecutionAssert.*; +import static io.javaoperatorsdk.operator.processing.dependent.workflow.ExecutionAssert.assertThat; import static org.junit.jupiter.api.Assertions.assertThrows; +@SuppressWarnings("rawtypes") class WorkflowReconcileExecutorTest extends AbstractWorkflowExecutorTest { - private Condition met_reconcile_condition = + private final Condition met_reconcile_condition = (dependentResource, primary, context) -> true; - private Condition not_met_reconcile_condition = + private final Condition not_met_reconcile_condition = (dependentResource, primary, context) -> false; - private Condition metReadyCondition = + private final Condition metReadyCondition = (dependentResource, primary, context) -> true; - private Condition notMetReadyCondition = - (dependentResource, primary, context) -> false; - - private Condition notMetReadyConditionWithStatusUpdate = + private final Condition notMetReadyCondition = (dependentResource, primary, context) -> false; @Test void reconcileTopLevelResources() { var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() - .addDependentResource(dr2).build() + .addDependentResource(dr1) + .addDependentResource(dr2) .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -42,8 +40,8 @@ void reconcileTopLevelResources() { @Test void reconciliationWithSimpleDependsOn() { var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() - .addDependentResource(dr2).dependsOn(dr1).build() + .addDependentResource(dr1) + 
.addDependentResource(dr2).dependsOn(dr1) .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -60,9 +58,9 @@ void reconciliationWithTwoTheDependsOns() { TestDependent dr3 = new TestDependent("DR_3"); var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() - .addDependentResource(dr2).dependsOn(dr1).build() - .addDependentResource(dr3).dependsOn(dr1).build() + .addDependentResource(dr1) + .addDependentResource(dr2).dependsOn(dr1) + .addDependentResource(dr3).dependsOn(dr1) .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -81,10 +79,10 @@ void diamondShareWorkflowReconcile() { TestDependent dr4 = new TestDependent("DR_4"); var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() - .addDependentResource(dr2).dependsOn(dr1).build() - .addDependentResource(dr3).dependsOn(dr1).build() - .addDependentResource(dr4).dependsOn(dr3).dependsOn(dr2).build() + .addDependentResource(dr1) + .addDependentResource(dr2).dependsOn(dr1) + .addDependentResource(dr3).dependsOn(dr1) + .addDependentResource(dr4).dependsOn(dr3).dependsOn(dr2) .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -103,7 +101,7 @@ void diamondShareWorkflowReconcile() { @Test void exceptionHandlingSimpleCases() { var workflow = new WorkflowBuilder() - .addDependentResource(drError).build() + .addDependentResource(drError) .withThrowExceptionFurther(false) .build(); @@ -121,9 +119,9 @@ void exceptionHandlingSimpleCases() { @Test void dependentsOnErroredResourceNotReconciled() { var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() - .addDependentResource(drError).dependsOn(dr1).build() - .addDependentResource(dr2).dependsOn(drError).build() + .addDependentResource(dr1) + .addDependentResource(drError).dependsOn(dr1) + .addDependentResource(dr2).dependsOn(drError) .withThrowExceptionFurther(false) .build(); @@ -142,10 +140,10 @@ void oneBranchErrorsOtherCompletes() { TestDependent dr3 = new TestDependent("DR_3"); var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() - .addDependentResource(drError).dependsOn(dr1).build() - .addDependentResource(dr2).dependsOn(dr1).build() - .addDependentResource(dr3).dependsOn(dr2).build() + .addDependentResource(dr1) + .addDependentResource(drError).dependsOn(dr1) + .addDependentResource(dr2).dependsOn(dr1) + .addDependentResource(dr3).dependsOn(dr2) .withThrowExceptionFurther(false) .build(); @@ -162,9 +160,9 @@ void oneBranchErrorsOtherCompletes() { @Test void onlyOneDependsOnErroredResourceNotReconciled() { var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() - .addDependentResource(drError).build() - .addDependentResource(dr2).dependsOn(drError, dr1).build() + .addDependentResource(dr1) + .addDependentResource(drError) + .addDependentResource(dr2).dependsOn(drError, dr1) .withThrowExceptionFurther(false) .build(); @@ -181,10 +179,9 @@ void onlyOneDependsOnErroredResourceNotReconciled() { @Test void simpleReconcileCondition() { var workflow = new WorkflowBuilder() - .addDependentResource(dr1).withReconcilePrecondition(not_met_reconcile_condition).build() - .addDependentResource(dr2).withReconcilePrecondition(met_reconcile_condition).build() + .addDependentResource(dr1).withReconcilePrecondition(not_met_reconcile_condition) + .addDependentResource(dr2).withReconcilePrecondition(met_reconcile_condition) .addDependentResource(drDeleter).withReconcilePrecondition(not_met_reconcile_condition) - .build() .build(); var res = 
workflow.reconcile(new TestCustomResource(), null); @@ -199,11 +196,10 @@ void simpleReconcileCondition() { @Test void triangleOnceConditionNotMet() { var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() - .addDependentResource(dr2).dependsOn(dr1).build() + .addDependentResource(dr1) + .addDependentResource(dr2).dependsOn(dr1) .addDependentResource(drDeleter).withReconcilePrecondition(not_met_reconcile_condition) .dependsOn(dr1) - .build() .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -219,15 +215,13 @@ void reconcileConditionTransitiveDelete() { TestDeleterDependent drDeleter2 = new TestDeleterDependent("DR_DELETER_2"); var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() + .addDependentResource(dr1) .addDependentResource(dr2).dependsOn(dr1) .withReconcilePrecondition(not_met_reconcile_condition) - .build() .addDependentResource(drDeleter).dependsOn(dr2) .withReconcilePrecondition(met_reconcile_condition) - .build() .addDependentResource(drDeleter2).dependsOn(drDeleter) - .withReconcilePrecondition(met_reconcile_condition).build() + .withReconcilePrecondition(met_reconcile_condition) .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -247,12 +241,10 @@ void reconcileConditionAlsoErrorDependsOn() { TestDeleterDependent drDeleter2 = new TestDeleterDependent("DR_DELETER_2"); var workflow = new WorkflowBuilder() - .addDependentResource(drError).build() + .addDependentResource(drError) .addDependentResource(drDeleter).withReconcilePrecondition(not_met_reconcile_condition) - .build() .addDependentResource(drDeleter2).dependsOn(drError, drDeleter) .withReconcilePrecondition(met_reconcile_condition) - .build() .withThrowExceptionFurther(false) .build(); @@ -272,9 +264,9 @@ void reconcileConditionAlsoErrorDependsOn() { @Test void oneDependsOnConditionNotMet() { var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() - .addDependentResource(dr2).withReconcilePrecondition(not_met_reconcile_condition).build() - .addDependentResource(drDeleter).dependsOn(dr1, dr2).build() + .addDependentResource(dr1) + .addDependentResource(dr2).withReconcilePrecondition(not_met_reconcile_condition) + .addDependentResource(drDeleter).dependsOn(dr1, dr2) .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -291,11 +283,10 @@ void oneDependsOnConditionNotMet() { void deletedIfReconcileConditionNotMet() { TestDeleterDependent drDeleter2 = new TestDeleterDependent("DR_DELETER_2"); var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() + .addDependentResource(dr1) .addDependentResource(drDeleter).dependsOn(dr1) .withReconcilePrecondition(not_met_reconcile_condition) - .build() - .addDependentResource(drDeleter2).dependsOn(dr1, drDeleter).build() + .addDependentResource(drDeleter2).dependsOn(dr1, drDeleter) .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -316,13 +307,12 @@ void deleteDoneInReverseOrder() { TestDeleterDependent drDeleter4 = new TestDeleterDependent("DR_DELETER_4"); var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() + .addDependentResource(dr1) .addDependentResource(drDeleter).withReconcilePrecondition(not_met_reconcile_condition) .dependsOn(dr1) - .build() - .addDependentResource(drDeleter2).dependsOn(drDeleter).build() - .addDependentResource(drDeleter3).dependsOn(drDeleter).build() - .addDependentResource(drDeleter4).dependsOn(drDeleter3).build() + .addDependentResource(drDeleter2).dependsOn(drDeleter) + 
.addDependentResource(drDeleter3).dependsOn(drDeleter) + .addDependentResource(drDeleter4).dependsOn(drDeleter3) .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -345,11 +335,10 @@ void diamondDeleteWithPostConditionInMiddle() { var workflow = new WorkflowBuilder() .addDependentResource(drDeleter).withReconcilePrecondition(not_met_reconcile_condition) - .build() - .addDependentResource(drDeleter2).dependsOn(drDeleter).build() + .addDependentResource(drDeleter2).dependsOn(drDeleter) .addDependentResource(drDeleter3).dependsOn(drDeleter) - .withDeletePostcondition(noMetDeletePostCondition).build() - .addDependentResource(drDeleter4).dependsOn(drDeleter3, drDeleter2).build() + .withDeletePostcondition(noMetDeletePostCondition) + .addDependentResource(drDeleter4).dependsOn(drDeleter3, drDeleter2) .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -370,10 +359,9 @@ void diamondDeleteErrorInMiddle() { var workflow = new WorkflowBuilder() .addDependentResource(drDeleter).withReconcilePrecondition(not_met_reconcile_condition) - .build() - .addDependentResource(drDeleter2).dependsOn(drDeleter).build() - .addDependentResource(errorDD).dependsOn(drDeleter).build() - .addDependentResource(drDeleter3).dependsOn(errorDD, drDeleter2).build() + .addDependentResource(drDeleter2).dependsOn(drDeleter) + .addDependentResource(errorDD).dependsOn(drDeleter) + .addDependentResource(drDeleter3).dependsOn(errorDD, drDeleter2) .withThrowExceptionFurther(false) .build(); @@ -391,8 +379,8 @@ void diamondDeleteErrorInMiddle() { @Test void readyConditionTrivialCase() { var workflow = new WorkflowBuilder() - .addDependentResource(dr1).withReadyPostcondition(metReadyCondition).build() - .addDependentResource(dr2).dependsOn(dr1).build() + .addDependentResource(dr1).withReadyPostcondition(metReadyCondition) + .addDependentResource(dr2).dependsOn(dr1) .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -407,8 +395,8 @@ void readyConditionTrivialCase() { @Test void readyConditionNotMetTrivialCase() { var workflow = new WorkflowBuilder() - .addDependentResource(dr1).withReadyPostcondition(notMetReadyCondition).build() - .addDependentResource(dr2).dependsOn(dr1).build() + .addDependentResource(dr1).withReadyPostcondition(notMetReadyCondition) + .addDependentResource(dr2).dependsOn(dr1) .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -426,9 +414,9 @@ void readyConditionNotMetInOneParent() { TestDependent dr3 = new TestDependent("DR_3"); var workflow = new WorkflowBuilder() - .addDependentResource(dr1).withReadyPostcondition(notMetReadyCondition).build() - .addDependentResource(dr2).build() - .addDependentResource(dr3).dependsOn(dr1, dr2).build() + .addDependentResource(dr1).withReadyPostcondition(notMetReadyCondition) + .addDependentResource(dr2) + .addDependentResource(dr3).dependsOn(dr1, dr2) .build(); var res = workflow.reconcile(new TestCustomResource(), null); @@ -445,11 +433,10 @@ void diamondShareWithReadyCondition() { TestDependent dr4 = new TestDependent("DR_4"); var workflow = new WorkflowBuilder() - .addDependentResource(dr1).build() + .addDependentResource(dr1) .addDependentResource(dr2).dependsOn(dr1).withReadyPostcondition(notMetReadyCondition) - .build() - .addDependentResource(dr3).dependsOn(dr1).build() - .addDependentResource(dr4).dependsOn(dr2, dr3).build() + .addDependentResource(dr3).dependsOn(dr1) + .addDependentResource(dr4).dependsOn(dr2, dr3) .build(); var res = workflow.reconcile(new 
TestCustomResource(), null); diff --git a/operator-framework-core/src/test/java/io/javaoperatorsdk/operator/processing/dependent/workflow/WorkflowTest.java b/operator-framework-core/src/test/java/io/javaoperatorsdk/operator/processing/dependent/workflow/WorkflowTest.java index 01b8bc619c..df12c8af54 100644 --- a/operator-framework-core/src/test/java/io/javaoperatorsdk/operator/processing/dependent/workflow/WorkflowTest.java +++ b/operator-framework-core/src/test/java/io/javaoperatorsdk/operator/processing/dependent/workflow/WorkflowTest.java @@ -22,9 +22,9 @@ void calculatesTopLevelResources() { var independentDR = mock(DependentResource.class); var workflow = new WorkflowBuilder() - .addDependentResource(independentDR).build() - .addDependentResource(dr1).build() - .addDependentResource(dr2).dependsOn(dr1).build() + .addDependentResource(independentDR) + .addDependentResource(dr1) + .addDependentResource(dr2).dependsOn(dr1) .build(); Set topResources = @@ -42,9 +42,9 @@ void calculatesBottomLevelResources() { var independentDR = mock(DependentResource.class); Workflow workflow = new WorkflowBuilder() - .addDependentResource(independentDR).build() - .addDependentResource(dr1).build() - .addDependentResource(dr2).dependsOn(dr1).build() + .addDependentResource(independentDR) + .addDependentResource(dr1) + .addDependentResource(dr2).dependsOn(dr1) .build(); Set bottomResources = diff --git a/sample-operators/webpage/src/main/java/io/javaoperatorsdk/operator/sample/WebPageDependentsWorkflowReconciler.java b/sample-operators/webpage/src/main/java/io/javaoperatorsdk/operator/sample/WebPageDependentsWorkflowReconciler.java index e9c5218cf8..6986180b89 100644 --- a/sample-operators/webpage/src/main/java/io/javaoperatorsdk/operator/sample/WebPageDependentsWorkflowReconciler.java +++ b/sample-operators/webpage/src/main/java/io/javaoperatorsdk/operator/sample/WebPageDependentsWorkflowReconciler.java @@ -8,14 +8,23 @@ import io.fabric8.kubernetes.api.model.apps.Deployment; import io.fabric8.kubernetes.api.model.networking.v1.Ingress; import io.fabric8.kubernetes.client.KubernetesClient; -import io.javaoperatorsdk.operator.api.reconciler.*; +import io.javaoperatorsdk.operator.api.reconciler.Context; +import io.javaoperatorsdk.operator.api.reconciler.ControllerConfiguration; +import io.javaoperatorsdk.operator.api.reconciler.ErrorStatusHandler; +import io.javaoperatorsdk.operator.api.reconciler.ErrorStatusUpdateControl; +import io.javaoperatorsdk.operator.api.reconciler.EventSourceContext; +import io.javaoperatorsdk.operator.api.reconciler.EventSourceInitializer; +import io.javaoperatorsdk.operator.api.reconciler.Reconciler; +import io.javaoperatorsdk.operator.api.reconciler.UpdateControl; import io.javaoperatorsdk.operator.processing.dependent.kubernetes.KubernetesDependentResource; import io.javaoperatorsdk.operator.processing.dependent.kubernetes.KubernetesDependentResourceConfig; import io.javaoperatorsdk.operator.processing.dependent.workflow.Workflow; import io.javaoperatorsdk.operator.processing.dependent.workflow.builder.WorkflowBuilder; import io.javaoperatorsdk.operator.processing.event.source.EventSource; -import static io.javaoperatorsdk.operator.sample.Utils.*; +import static io.javaoperatorsdk.operator.sample.Utils.createStatus; +import static io.javaoperatorsdk.operator.sample.Utils.handleError; +import static io.javaoperatorsdk.operator.sample.Utils.simulateErrorIfRequested; /** * Shows how to implement reconciler using standalone dependent resources. 
@@ -32,16 +41,15 @@ public class WebPageDependentsWorkflowReconciler private KubernetesDependentResource serviceDR; private KubernetesDependentResource ingressDR; - private Workflow workflow; + private final Workflow workflow; public WebPageDependentsWorkflowReconciler(KubernetesClient kubernetesClient) { initDependentResources(kubernetesClient); workflow = new WorkflowBuilder() - .addDependentResource(configMapDR).build() - .addDependentResource(deploymentDR).build() - .addDependentResource(serviceDR).build() + .addDependentResource(configMapDR) + .addDependentResource(deploymentDR) + .addDependentResource(serviceDR) .addDependentResource(ingressDR).withReconcilePrecondition(new ExposedIngressCondition()) - .build() .build(); } @@ -71,6 +79,7 @@ public ErrorStatusUpdateControl updateErrorStatus( return handleError(resource, e); } + @SuppressWarnings("rawtypes") private void initDependentResources(KubernetesClient client) { this.configMapDR = new ConfigMapDependentResource(); this.deploymentDR = new DeploymentDependentResource();