Documentation feedback: Koreo Controller runtime model could be better explained up front #29

@jmcclell

Description

@jmcclell

Hi! I gave this project a test drive the past couple of days. Really enjoying the concepts so far, but have run into a number of rough edges—as one might expect given its infancy. One of the most frustrating time sinks for me early on was trying to debug issues with namespaces and the runtime model of Koreo.

It appears that the controller, any associated workflows/functions, and any triggering CRDs must all be in the same namespace for things to work. This is not what I expected, especially given that the controller has a cluster-scoped role. Importantly, the documentation does not state this explicitly either, at least not that I can find. Rather, some of the examples seem to imply that this isn't the case.

Take the advanced Hello Workload example, for instance. Its workflow

apiVersion: koreo.dev/v1beta1
kind: Workflow
metadata:
  name: hello-workload
spec:
  crdRef:
    apiGroup: example.koreo.dev
    version: v1
    kind: Workload

  steps:
    - label: create_deployment
      ref:
        kind: ResourceFunction
        name: deployment-factory
      inputs:
        name: =parent.metadata.name
        namespace: =parent.metadata.namespace
# ...snip...

implies you can create Workload resources in any namespace and the subsequent deployment (and service) resource will be created in that namespace. Except that doesn't work because the controller and the workflows won't be in that namespace unless you know to install a copy of them before adding your Workload resource there.
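For concreteness, this is the kind of Workload instance I expected to be able to drop into an arbitrary namespace and have the workflow pick up (the name and the `foo` namespace are just placeholders I chose for testing, not anything from the docs):

```yaml
# Hypothetical triggering resource for the Hello Workload example above.
# The apiGroup/version/kind match the workflow's crdRef; the namespace
# "foo" is an arbitrary namespace with no Koreo resources installed in it.
apiVersion: example.koreo.dev/v1
kind: Workload
metadata:
  name: my-workload
  namespace: foo
```

Creating this only works if the controller and the workflow/functions are also installed in `foo`, which is the behavior I'm asking about.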

Is that really the intent? I haven't used k8s regularly in some time, so I may just have something misconfigured, but at a quick glance, the controller code seems to agree with my assessment.

Frustratingly, the UI can also be a tiny bit misleading if you don't understand the model up front. Here are a couple of scenarios and what I observed/thought in the moment:

Controller and Workflows in the same namespace, Workload instance in separate namespace

  • I had the controller installed to the default namespace (""), thinking I could use a single controller instance for the entire cluster
    • I enabled developer mode to ensure I didn't hit any RBAC issues during the test
  • I had the workflow and its associated functions also in the default namespace, similarly thinking I could use them anywhere
  • I created a Workload in namespace foo
  • The UI showed my workflow under the default namespace with 0 instances

I took this to mean that the controller wasn't working properly and/or that my CRD or workflow were misconfigured in some way. It was not at all obvious to me why the controller wouldn't be able to find the instance.

Controller in default namespace, Workflows and Workload instance in the same namespace

  • I kept the controller as-is
  • I moved the workflow and its associated functions to the foo namespace
  • I created the Workload in the foo namespace
  • The UI no longer showed any workflows under the default namespace (as expected)
  • The UI showed my workflow under the foo namespace and it showed it had 1 instance

So now I think I'm in business, right? Except nothing actually gets created. Expanding the workflow showed 0 managed resources. This threw me a bit: I thought for sure that the fact that it found the instance meant the controller was happy, but I wasn't seeing any output, and the controller logs weren't reporting any errors.

On one hand, it makes sense that the UI is disconnected from the controller logic, but it can be misleading for newcomers in these sorts of instances. It was for me, at least. :)
