Automatic deployment to EKS

We are running into some trouble, and after more than two days of struggling I really don't know how to move forward.
We build the images and push them to the ECR registry. Each image is tagged with the environment and the build number.
The deployment/service files live on GitHub and contain Octopus variable placeholders such as "app: #{hello}".
We want to use a Deploy raw YAML step (or a kubectl CLI step?).
Ideally, a deployment will be triggered by each new master build of an image.
I hope that is all clear. In short, we want fully continuous deployment :slight_smile:
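To make the setup concrete, here is a minimal sketch of what one of our deployment files might look like (the names, image path, and tag variables are illustrative assumptions, not our actual files):

```yaml
# Hypothetical deployment manifest with Octopus variable placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  labels:
    app: "#{hello}"              # substituted by Octopus at deploy time
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "#{hello}"
  template:
    metadata:
      labels:
        app: "#{hello}"
    spec:
      containers:
        - name: hello
          # images are tagged with the env and the build number
          # (ACCOUNT/REGION/buildNumber are placeholders)
          image: "ACCOUNT.dkr.ecr.REGION.amazonaws.com/hello:#{Octopus.Environment.Name}-#{buildNumber}"
```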

However, here are the problems we are facing:

  1. Copying the package onto the Octopus server: the deployment/service files are on GitHub, so Octopus sees the package ID as NAME/NAME, gets confused by the /, and can’t copy it onto the server.
  2. Extracting the package in a kubectl CLI step: this works without variables. I found the variable holding the path the package is extracted to and used it in my CLI script, but the script doesn’t seem to resolve it, even with variable substitution turned on for that file. I get "[/file.yaml] doesn’t exist", with the script written as kubectl apply -f $OctopusParameters["Octopus.Action.Package[comm-tech-k8s].ExtractedPath"]/file.yaml

That covers getting the files/folders onto the server. Then, for the deployment itself:

  1. Deploy raw YAML: I used "app: #{hello}" in the YAML file, and the step doesn’t seem to like it; it returns ""kubectl apply -o json" returned invalid JSON", which it doesn’t do without variables. This is the step I would prefer to use.
  2. kubectl CLI: we have a PowerShell script running kubectl apply -f file.yaml with variable substitution. It does replace the variables, but the script is then rejected because it "wasn’t digitally signed" (which I believe is a permissions/execution-policy issue, and I can’t manually authorize every file on every pull). I ran that step directly on the server with the same result. That made me think I needed to copy the folder onto the server, which brings me back to the first two problems.

I am starting to get very confused, any help would be GREATLY appreciated!!! :slight_smile:


Hopefully I can help. I’ll try to walk through it step by step, but if this doesn’t clear up the confusion, I’m more than happy to jump on a video call to assist.

I have replicated your scenario by creating a GitHub repo containing a YAML file for a simple ConfigMap.
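Something like this (the names and value are placeholders I chose for the test):

```yaml
# configmap.yaml in the GitHub repo
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  app: "#{hello}"   # replaced by the Octopus project variable "hello"
```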

First we’ll add an external package feed for GitHub.

In our project we’ll create the variables that will be substituted into the YAML.

In our deployment process, we’ll add a Run kubectl CLI step.

Note you’ll have to enable the Substitute Variables in Files feature.

The key points are:

  • The GitHub release is referenced as a package.
  • We perform variable substitution on the k8s YAML file.
  • The script body uses the extracted path of the package and applies the YAML.
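As a sketch, assuming the package reference is named comm-tech-k8s and the file is file.yaml (as in your script), the script body would look like:

```powershell
# Path the GitHub package was extracted to during this deployment
$extractedPath = $OctopusParameters["Octopus.Action.Package[comm-tech-k8s].ExtractedPath"]

# Apply the (variable-substituted) YAML file
kubectl apply -f "$extractedPath/file.yaml"
```

One thing to check in your original script: the indexer must use straight quotes. With smart quotes ([“…”]) the lookup key doesn’t match, the path resolves to an empty string, and you get exactly the "[/file.yaml] doesn’t exist" error you reported. Assigning the path to a variable first also avoids PowerShell argument-parsing pitfalls.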

When creating a release you’ll need to choose the version of the GitHub release; executing it successfully created the ConfigMap in my testing.

If you wanted to use the Deploy raw YAML step, that is fine too.

In that step, the YAML Source is set to a file inside the package, and the deployment YAML file is referenced in the Kubernetes YAML file name field.

I hope that helps. Please let me know if a video call would be of help, we are more than happy to arrange a time.

Thanks SO MUCH for the answer! The CLI step works perfectly.

I still have a little problem with the raw YAML step. When using variables, it always returns:

“kubectl apply -o json” returned invalid JSON. This can happen with older versions of kubectl. Please update to a recent version of kubectl.
See for more details.
Custom resources will not be saved as output variables, and will not be automatically cleaned up.

I am unsure where this could be coming from. The server runs the same kubectl version as my own computer (v1.12.7), which is one of the latest. Is there something incorrect in the way it passes the file?
The step gets its YAML Source from a file inside a package; we reference the deployment YAML file in "Kubernetes YAML file name".

Thanks again!!

I’m very glad to hear the CLI step is working for you now.

Regarding the raw YAML step, have you had any success getting it working?
I suspect that error message is masking the true issue. From what you describe, it almost certainly isn’t related to the kubectl version; the most likely cause is something wrong in the YAML being applied.
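One way this can happen (a hypothesis, not a confirmed diagnosis): if the variable isn’t defined, or substitution isn’t applied to that YAML source, the #{hello} placeholder is left in the text, and in unquoted YAML the # starts a comment, so the value silently becomes null:

```yaml
# Intended after substitution:
#   app: "some-value"
# If left unsubstituted and unquoted:
app: #{hello}    # YAML parses this as   app: (null)   because # starts a comment
```

A manifest mangled that way can be rejected by kubectl, which would surface as the invalid-JSON error.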

If you attach the raw log of your deployment, we’d be happy to take a look (or you can email it if you’d prefer).