Kubernetes and retention policy

Is there any relation between the retention policy for releases in projects and resources in K8s?
What does the lifecycle look like for, for example, ConfigMaps (per deployment, not global)?
Have you considered cleaning up all resources via a ReleaseId or DeploymentId label? I'm thinking about running a job (in the deploy process) labeled with the ReleaseId, so auto-cleanup via the retention policy would be great in this case.

Hi Sebastian,

If you’re using the built-in Octopus Kubernetes steps (i.e. Deploy Containers to Kubernetes), then resources such as config maps will be cleaned up.

It is not currently tied to retention policies, but rather happens at deployment time. When you run a deployment, Octopus labels the created resources with the step that created them as well as the environment (and tenant if applicable). When the next deployment is run, any matching previously created resources (config maps, secrets, etc) will be deleted once the deployment is complete.
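As a sketch of what this labeling looks like on the cluster, here is a ConfigMap as it might appear after a deployment step has labeled it. The label keys (`Octopus.Step.Id`, `Octopus.Environment.Id`, `Octopus.Deployment.Id`) and values are illustrative assumptions for this example, not necessarily the exact keys Octopus applies:

```yaml
# Hypothetical example: a ConfigMap tagged by a deployment step.
# Label names below are assumptions for illustration only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings
  labels:
    Octopus.Step.Id: "step-1"
    Octopus.Environment.Id: "Environments-1"
    Octopus.Deployment.Id: "Deployments-42"
data:
  LOG_LEVEL: info
```

Conceptually, cleaning up the previous deployment's resources then amounts to a label-selector delete that matches the same step and environment but excludes the current deployment, along the lines of `kubectl delete configmap,secret -l 'Octopus.Step.Id=step-1,Octopus.Deployment.Id!=Deployments-42'` (again, with hypothetical label keys).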

Does this behavior match what you were hoping for?


Looks great, but to get into the details:

  • will the previous resources be deleted at the end of a successful current deploy? (which I think is required for blue-green deploys)
  • will the current resources be deleted when the current deploy (or maybe step) fails?

The resources will be deleted after the deployment step executes successfully. It does not require the overall deployment to succeed.

This document explains in some detail how the blue-green deployment mode behaves.

If the step fails, no resources will be deleted. This is consistent with the Octopus philosophy of not attempting to roll-back if a deployment fails. Deleting the resources in this case would be just as likely to be wrong as correct.

If you encounter anything that doesn’t behave the way you would hope or expect, please let us know. We are very willing to act on feedback.


Thanks, everything is clear now.