First up, we have a team that uses Octopus Deploy on multiple projects.
I recently received an interesting question. Let's say Brian logs in and changes the process to “make his deployment actually work”. Then we deploy to a test environment and everything crashes and burns. And of course we need to get the deployment done ASAP.
Is trawling through the audit log the only way to see what Brian changed, or is there a better way?
My best crazy idea was to use the API to regularly take a JSON snapshot of the process, then compare the daily snapshots to see what's changed. But that feels a bit like taking a sledgehammer to a nail.
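For what it's worth, the snapshot-comparison part of that idea is fairly small. The sketch below assumes you have already fetched two daily snapshots of a deployment process as JSON (how you call the Octopus REST API is left out, and the step/property names are hypothetical); it just walks both structures and reports every path that differs.

```python
def diff_json(old, new, path=""):
    """Recursively compare two JSON-like structures and return a list
    of (path, old_value, new_value) tuples, one per difference."""
    changes = []
    if isinstance(old, dict) and isinstance(new, dict):
        for key in sorted(set(old) | set(new)):
            sub = f"{path}.{key}" if path else key
            if key not in old:
                changes.append((sub, None, new[key]))
            elif key not in new:
                changes.append((sub, old[key], None))
            else:
                changes.extend(diff_json(old[key], new[key], sub))
    elif isinstance(old, list) and isinstance(new, list):
        for i in range(max(len(old), len(new))):
            sub = f"{path}[{i}]"
            if i >= len(old):
                changes.append((sub, None, new[i]))
            elif i >= len(new):
                changes.append((sub, old[i], None))
            else:
                changes.extend(diff_json(old[i], new[i], sub))
    elif old != new:
        changes.append((path, old, new))
    return changes

# Two hypothetical daily snapshots of the same deployment process.
yesterday = {"Steps": [{"Name": "Deploy", "Properties": {"Timeout": "30"}}]}
today     = {"Steps": [{"Name": "Deploy", "Properties": {"Timeout": "300"}}]}

for path, old, new in diff_json(yesterday, today):
    print(f"{path}: {old!r} -> {new!r}")
# -> Steps[0].Properties.Timeout: '30' -> '300'
```

Run nightly against stored snapshots, this narrows "what did Brian change" down to a handful of lines.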
Thanks for getting in touch. I can think of two main ways to handle this.
The first would be to lock down the process steps of your “live” projects. Then, whenever a change is needed, clone the entire project, make the necessary changes, and perform testing. Once you're satisfied that the new changes work, unlock the “live” project and make the same changes there.
The second option would be to use the Subscriptions feature to create an email or webhook notification logging any changes made. You could, for example, have a webhook that posts any changes made to a project to a Slack channel. Then, if a deployment begins to fail, it would just be a case of working backwards through the Slack channel, reverting each change.
Useful filters for this would be:
Event Category: Document Modified
Document Types: Deployment Process, Variable Set
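To make that Slack feed readable, you would typically reshape the subscription webhook body before posting it. This is only a sketch: the field names used here (`Payload` → `Event` → `Username`/`Occurred`/`Message`) are assumptions about the payload shape, so check a real webhook body before relying on them.

```python
def format_event(payload):
    """Turn a subscription webhook payload into a one-line Slack message.
    NOTE: the nested field names below are assumed, not confirmed --
    inspect an actual Octopus webhook body to verify them."""
    event = payload.get("Payload", {}).get("Event", {})
    return "{occurred} | {user}: {message}".format(
        occurred=event.get("Occurred", "unknown time"),
        user=event.get("Username", "unknown user"),
        message=event.get("Message", "no message"),
    )

# Hypothetical webhook body for a modified deployment process.
body = {
    "Payload": {
        "Event": {
            "Category": "Modified",
            "Username": "brian",
            "Occurred": "2020-05-01T09:30:00+00:00",
            "Message": "Deployment process for Projects-1 modified",
        }
    }
}

print(format_event(body))
# -> 2020-05-01T09:30:00+00:00 | brian: Deployment process for Projects-1 modified
```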
Please let me know if this helps, or if there is anything else I can do.
It sounds like this is a trade-off between using a CD tool where your deployment process is written as a script versus using a more graphical tool like Octopus.
The issue with sending a notification using the subscription feature is that nobody will look at it until the next deployment, when something fails. And then we will be piecing together ten changes to see which one broke the deployment.
What we might do is keep a separate space for production-type work and have a sandbox space for testing new deployment processes. If we have any issues, we will just export both processes to JSON and compare them.
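The two-space comparison can be done with a plain text diff once both exports are in hand. The sketch below assumes you already have the two processes as parsed JSON (fetching them from each space's API is left out, and the step names and action types are illustrative); pretty-printing with sorted keys first keeps the diff stable.

```python
import difflib
import json

def diff_exports(prod, sandbox):
    """Pretty-print both exported processes with stable key ordering,
    then return a unified diff between them (empty string if identical)."""
    a = json.dumps(prod, indent=2, sort_keys=True).splitlines()
    b = json.dumps(sandbox, indent=2, sort_keys=True).splitlines()
    return "\n".join(difflib.unified_diff(a, b, "production", "sandbox", lineterm=""))

# Hypothetical exports of the same process from the two spaces.
production = {"Steps": [
    {"Name": "Deploy website", "ActionType": "Octopus.TentaclePackage"},
]}
sandbox = {"Steps": [
    {"Name": "Deploy website", "ActionType": "Octopus.TentaclePackage"},
    {"Name": "Warm up site", "ActionType": "Octopus.Script"},
]}

print(diff_exports(production, sandbox))
```

Since identical exports produce an empty diff, the same script doubles as a cheap "has anything drifted between spaces?" check.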
That is similar to how I ran Octopus in a production environment. I treated it the same way I would a production system and no changes were made to the deployment process without final testing to ensure deployments continued smoothly.
We have a UserVoice page if you want to suggest a better way of handling this.