Failure deploying Kubernetes CronJob YAML

I’m trying to deploy a k8s CronJob using the “Deploy raw Kubernetes YAML” step template and it’s failing to deploy with an invalid json error.

Octopus version: v2019.3.0
Kubernetes: v1.11.0+d4cacc0 (OpenShift 3.10)
kubectl version: v1.14.0

I’ve created this gist with the CronJob YAML, kubectl JSON, and Octopus deployment log:

Hi Jeff,

Thanks for getting in touch and apologies for the delay in responding. I’ve tried to reproduce this issue without success, using the same YAML you provided. I’m using Azure to host the cluster, which has Kubernetes 1.11.9, and I have kubectl 1.14.1 installed locally.

The only thing I can think of at this point, based on the error, is that a different version of kubectl is somehow coming into play. Could I get you to check the Health Check task for your cluster? It should explicitly detail the server (Kubernetes) and client (kubectl) versions it found.


Hi Shannon,

Here’s what the connectivity check in Octopus is logging:

kubectl version to test connectivity
Client Version: v1.14.0
Server Version: v1.11.0+d4cacc0

Is there anything else I could provide to help debug the issue?

I’ve talked to the team here, who have a lot more experience in this area than I do, and they can’t see anything out of place either. The suspicion is that it’s something environmental, or something odd ending up in the file somehow.

A suggestion was to try using a K8S script step with a script something like the following:

$myYaml = @"
kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: test-cron-job
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - image: busybox:latest
              imagePullPolicy: Always
              name: test-job
              args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: Never
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 5
  concurrencyPolicy: Forbid
"@
$myYaml | Set-Content "file.yaml"
kubectl apply -f file.yaml

I’ve tried this and it works in the same context I was using previously. Can you give that a try and see if you get different behavior?


Shannon, thanks for the tip!

When I added the k8s script step and executed it, it still errored, but this time it gave me an error message I could work with.

08:30:49 Error | Error from server (Forbidden): error when retrieving current configuration of:
08:30:49 Error | Resource: "batch/v1beta1, Resource=cronjobs", GroupVersionKind: "batch/v1beta1, Kind=CronJob"
08:30:49 Error | Name: "test-cron-job", Namespace: "devops-ops"
08:30:49 Error | Object: &{map["spec":map["concurrencyPolicy":"Forbid" "failedJobsHistoryLimit":'\x05' "jobTemplate":map["spec":map["template":map["spec":map["containers":[map["image":"busybox:latest" "imagePullPolicy":"Always" "name":"test-job" "args":["/bin/sh" "-c" "date; echo Hello from the Kubernetes cluster"]]] "restartPolicy":"Never"]]]] "schedule":"*/1 * * * *" "successfulJobsHistoryLimit":'\x05'] "apiVersion":"batch/v1beta1" "kind":"CronJob" "metadata":map["name":"test-cron-job" "namespace":"devops-ops" "annotations":map["":""]]]}
08:30:49 Error | from server for: "file.yaml": cronjobs.batch "test-cron-job" is forbidden: User "system:serviceaccount:devops-ops:octopus-deployer" cannot get cronjobs.batch in the namespace "devops-ops": no RBAC policy matched 
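
As an aside, an error like this can also be confirmed without running a deployment, using kubectl’s built-in access check. This is a sketch using the service account and namespace from the error above, and it assumes your own kubectl credentials are allowed to impersonate the service account:

```shell
# Ask the API server whether the Octopus service account may "get"
# CronJobs in the devops-ops namespace; prints "yes" or "no".
kubectl auth can-i get cronjobs.batch \
  --as=system:serviceaccount:devops-ops:octopus-deployer \
  --namespace=devops-ops
```

A "no" here would point to the same missing RBAC rule, without needing to trigger the step again.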

Armed with that, I added this to my service account’s role:

- apiGroups:
    - batch
  verbs:
    - create
    - delete
    - get
    - list
    - patch
    - update
  resources:
    - cronjobs
    - jobs
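
For context, a rule like that sits inside a namespaced Role that is bound to the service account. Here is a minimal sketch of the full pair; the Role and RoleBinding names are placeholders, while the namespace and service account name come from the error log above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: octopus-deployer-role      # placeholder name
  namespace: devops-ops
rules:
  - apiGroups:
      - batch
    verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
    resources:
      - cronjobs
      - jobs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: octopus-deployer-binding   # placeholder name
  namespace: devops-ops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: octopus-deployer-role
subjects:
  - kind: ServiceAccount
    name: octopus-deployer
    namespace: devops-ops
```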

And that fixed it for both the k8s script and the raw yaml steps.

Thanks for the help!