The documentation for the “Deploy to Kubernetes cluster” step mentions “imagePullSecrets” in 2 of the Deployment resource YAML files; the value used is “octopus-feedcred-feeds-dockerhub-with-creds”.

But I cannot find anywhere in the documentation that describes how to configure that value, nor can I find that option when checking the various configuration options on my development Octopus installation. How does one configure this value? E.g. how could I configure the deployment to use “imagePullSecrets” with the value “my-registry-secret”?
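For context, this is the shape of the setting I am asking about, sketched in a Deployment manifest; “my-app” and “my-registry-secret” are just the hypothetical names from my question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                         # hypothetical application name
spec:
  # ... replicas, selector, etc. ...
  template:
    spec:
      imagePullSecrets:
      - name: my-registry-secret       # the secret holding the registry credentials
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0.0
```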

By default, when using the Deploy Kubernetes Container step, Octopus will create those secrets for you using the feed credentials you have supplied. The examples in the documentation aim to give some understanding of what the YAML files actually look like under the hood. Have you tried using an authenticated Docker feed and found the secrets are missing?
We may need to add more documentation to point this behaviour out if it is not already apparent.
Let me know if anything is still unclear or if it is not working as described.
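For reference, the secret in question is a standard Kubernetes docker-registry secret. Created by hand it would look roughly like the sketch below; the secret name is the one from the documentation example, and the base64 payload is a placeholder standing in for the encoded Docker config containing the feed's credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: octopus-feedcred-feeds-dockerhub-with-creds   # name derived from the feed
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config; decodes to {"auths":{...}} with the
  # feed's registry URL, username, and password inside
  .dockerconfigjson: eyJhdXRocyI6ey4uLn19
```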

@anon61686641 I am using the AWS Elastic Container Registry feed, configured with an Access Key and Secret Key.
I am positive that this part works, at least on the Octopus side, as I am able to create releases and the versions match those present in AWS Elastic Container Registry. But I am also positive that no credentials are created automatically on Kubernetes during the deployment process, and the image pull fails because no credentials are provided.

I see that there are 2 different container registry feed types: “Docker Container Registry” and “AWS Elastic Container Registry”. Do I have to do something differently when using the AWS registry?

Hi Megan,
I’ve talked about this with a colleague, and it sounds like with ECR, as long as your node is an EC2 instance with the right IAM permissions, it should be able to pull down the credentials automatically. The problem with ECR feeds is that the credentials are only valid for 12 hours before they expire, so if we were to put credentials into the cluster as we would for a standard private repository, they may no longer be valid when Kubernetes needs them (short of adding a cron job to keep them up to date).
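If you did want to manage the credential yourself, the cron-job approach could be sketched as a Kubernetes CronJob like the one below. This is only an illustration: the account ID, region, image, and all names are placeholders, the container image is assumed to have the AWS CLI and kubectl available, and the job's service account would need RBAC permission to manage secrets in the namespace:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ecr-cred-refresh
spec:
  schedule: "0 */8 * * *"              # every 8 hours, inside the 12-hour token lifetime
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ecr-cred-refresher   # needs RBAC to create/update secrets
          restartPolicy: OnFailure
          containers:
          - name: refresh
            image: example/awscli-kubectl:latest   # placeholder image with aws + kubectl
            command:
            - /bin/sh
            - -c
            - |
              # Fetch a fresh 12-hour ECR token and upsert it as a pull secret
              TOKEN=$(aws ecr get-login-password --region us-east-1)
              kubectl create secret docker-registry my-registry-secret \
                --docker-server=123456789012.dkr.ecr.us-east-1.amazonaws.com \
                --docker-username=AWS \
                --docker-password="$TOKEN" \
                --dry-run=client -o yaml | kubectl apply -f -
```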
Are your nodes running in AWS? If so, could you check that they are running with the appropriate permissions, as outlined in the above link.
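For reference, the minimal set of IAM actions a node needs to pull images from ECR looks roughly like the policy below (the AWS-managed AmazonEC2ContainerRegistryReadOnly policy covers these and a few more read-only actions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```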