In our current model for SQL releases, deployments are executed by a special tentacle with the role name ‘OctopusProxySQL’. During production releases we have numerous projects running simultaneously, and at any one moment there may be multiple SQL deployments executing. We’re using the ‘Octopus.Acquire.MaxParallelism’ and ‘Octopus.Action.MaxParallelism’ variables to increase the number of simultaneous executions and it works great; however, we’re running into an issue where several projects run retention policies at the same time on the ‘OctopusProxy’ (and it looks like the concurrency variables aren’t honored during that process). This creates a queue of server tasks that must complete before the overall deployments are considered complete. During some of our larger production releases this can add 10-15 minutes to the deployment times for projects unlucky enough to be at the end of the queue.
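To make the arithmetic behind that delay concrete (illustrative numbers only, not measurements from our instance): if retention tasks run one at a time against the same server, the last project in the queue waits for every task ahead of it.

```python
# Illustration of the serialized-retention queue (hypothetical numbers):
# N retention tasks that each take t minutes, forced to run one at a
# time, make the last project wait roughly (N - 1) * t minutes before
# its own retention task even starts.

def last_in_queue_wait(num_tasks: int, minutes_per_task: float) -> float:
    """Minutes the last project waits before its retention task begins."""
    return (num_tasks - 1) * minutes_per_task

# e.g. 8 projects finishing together, ~2 minutes of cleanup each:
print(last_in_queue_wait(8, 2.0))  # 14.0 minutes of queueing for the last one
```

With a handful of concurrent releases and a couple of minutes of cleanup per task, that queueing alone accounts for the 10-15 minute tail we see.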
As a potential workaround I started to explore the possibility of using workers to deploy SQL scripts; however, for some reason I can’t change the execution location in my test project.
After combing through the step config and template it’s not clear why this isn’t allowed. Ideally I’d like to change the target to a 4 node worker pool dedicated to deploying SQL changes to all our environments. Is this by design or am I missing something?
Bummer. Thanks for the quick reply though. There isn’t much appetite for converting our portfolio to use a new template but it’s good to know there’s an option.
Does the retention policy process respect the ‘Octopus.Acquire.MaxParallelism’ and ‘Octopus.Action.MaxParallelism’ variables during deployment? Thanks.
So we define the parallelism variables in a library set and scope different values depending on server role. When projects inheriting the variable target the same server role, everything works as expected (concurrent deployments) until the retention policy runs. Once the retention policies start, we see a bottleneck of server tasks that extends the overall deployment time.
I’m not sure if this is by design or not, so I would need to talk to our developers. Can I ask what types of tasks it was waiting on? Also, what version of Octopus are you currently on?
We have ‘Octopus.Acquire.MaxParallelism’, ‘Octopus.Action.MaxParallelism’ & ‘OctopusBypassDeploymentMutex’ defined in a library set available to all our projects, and we scope the values differently depending on the target environment, role, etc. Based on my observations the issue isn’t with the actual execution of the process step; it’s with the retention policy that runs at the end of the release deployment. If more than one project runs a retention policy against the same server, they execute serially (regardless of the parallelism & mutex vars), which causes a queue of server tasks that blocks deployments from finishing.
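For anyone following along, the scoping behavior we rely on can be sketched like this (a simplified model for illustration, not Octopus’s actual variable-resolution algorithm; the role name and values are examples from our setup):

```python
# Simplified model of scoped variable resolution: a value whose scope
# matches the deployment context wins over an unscoped default.
# This is an illustration, not Octopus's real resolution logic.
variables = [
    {"name": "Octopus.Action.MaxParallelism", "value": "10", "scope": {}},
    {"name": "Octopus.Action.MaxParallelism", "value": "4",
     "scope": {"role": "OctopusProxySQL"}},
]

def resolve(name, context):
    """Return the most specifically scoped value matching the context."""
    scoped = [v for v in variables
              if v["name"] == name and v["scope"]
              and all(context.get(k) == val for k, val in v["scope"].items())]
    if scoped:
        return scoped[0]["value"]
    default = [v for v in variables if v["name"] == name and not v["scope"]]
    return default[0]["value"] if default else None

print(resolve("Octopus.Action.MaxParallelism", {"role": "OctopusProxySQL"}))  # 4
print(resolve("Octopus.Action.MaxParallelism", {"role": "web"}))              # 10
```

The point is that the step itself picks up the scoped values fine; the retention run at the end is what ignores them.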
As a workaround I’m doing a pilot to rework our SQL deployment process to use workers vs dedicated tentacles. So for now I think we’re good.
One more quick question about how workers use the ‘Octopus.Acquire.MaxParallelism’, ‘Octopus.Action.MaxParallelism’ & ‘OctopusBypassDeploymentMutex’ vars. What values are used when the parallelism and mutex vars don’t have a “default” scope defined?
Thanks for the update. I had some discussions with others, and it makes sense that the retention policy requires a mutex: every run has to modify the DeploymentJournal, so by nature they can’t all do it at the same time. Retention, in general, should be very fast though, so I’m surprised you noticed it. Were the other tasks it was waiting on not retention?
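The serialization you’re seeing can be sketched as a lock guarding a shared journal (a deliberate simplification; the journal and lock here are stand-ins for illustration, not Octopus internals):

```python
import threading

journal = []                      # stand-in for the shared deployment journal
journal_lock = threading.Lock()   # the per-machine mutex each retention run takes

def run_retention(project: str) -> None:
    # Every retention run mutates the shared journal, so it must hold the
    # mutex first; concurrent runs against the same target therefore
    # execute one at a time, however many are queued.
    with journal_lock:
        journal.append(f"{project}: old releases pruned")

threads = [threading.Thread(target=run_retention, args=(f"proj-{i}",))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(journal))  # 5 -- every run completed, just never simultaneously
```

Normally each run is quick enough that the serialization is invisible; it only becomes noticeable when individual runs are slow, as in your case.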
I’m more curious for myself and other potential users that hit this in the future.
Thanks. In our environment we have dedicated tentacles for SQL deployments, so the work folders contain a large number of files. Given that we have hundreds of projects that deploy SQL changes and auditing requires that we keep releases for one year, the retention policy takes a while to execute. The pilot I’m working on (moving deployments to worker pools) should help.
Just to be sure I’m understanding the parallelism and mutex variables correctly, the default for workers is 10 regardless of the scoping? Is that configurable and if so how?
Yeah, that makes sense why your retention may be clashing then. If you need any advice on anything specific while converting your process to workers, I can have our solutions team take a look.
This section in our docs calls out the parallelism in workers. The default is 10, and you can configure it in a very similar way to tentacles. Workers - Octopus Deploy