Tentacle upgrade is inefficient for multiple environments on the same physical machine

I’ve recently updated the server from 1.6 to 2.0 and just done my first Tentacle upgrade, and I noticed some strange behaviour.

We use Octopus in an unusual way in that our environments map to clients, so each client has their own instance of the application. In 1.6 I was able to give each machine on a single physical server the same name, and they shared the same IP address. In moving to 2.0 I needed to name the machines individually, but they’re still the same physical server.

What seems to happen in the Tentacle upgrade push is that the Octopus server doesn’t notice that a large number of the machines are actually the same Tentacle, with the same thumbprint and the same IP address. It therefore schedules 20+ Tentacle upgrades, queuing the last 19 of them while the first one stops, upgrades and restarts, then the next one, and so on 18 more times. This obviously ties up the server for far longer than ideal. It would be quicker for me to do the upgrades manually, as I only have 5 physical servers but 50+ machines across a few projects.

I’m currently 28 minutes into the upgrade and it looks like about 1/6th of the ‘upgrades’ are done.


Paul D

Hi Paul,

Do you currently use an environment-per-customer setup, or something along those lines? In Octopus 2 it is easy to open Machine X > Settings and select several environments from the Environments list. This might help you get back to having only 5 actual machines in Octopus, which sounds like it would be more pleasant to manage.
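If you’d rather script the consolidation than click through the UI for each machine, the same change can be made through the Octopus REST API by updating each machine’s list of environment IDs. A minimal sketch, assuming a machine resource with an `EnvironmentIds` array that you’d fetch from and PUT back to an `/api/machines/{id}` endpoint (the IDs and endpoint shape here are illustrative, so check your server’s API docs):

```python
import json

def assign_environments(machine, environment_ids):
    """Return an updated copy of a machine resource with the given
    environments assigned. Only the EnvironmentIds field is changed;
    everything else is passed through untouched."""
    updated = dict(machine)
    # Merge the new environment IDs with any already assigned, preserving order.
    existing = updated.get("EnvironmentIds", [])
    updated["EnvironmentIds"] = existing + [
        e for e in environment_ids if e not in existing
    ]
    return updated

# Example: one physical server registered once, serving three client
# environments (all IDs are made up for illustration).
machine = {"Id": "Machines-1", "Name": "web-server-01",
           "EnvironmentIds": ["Environments-1"]}
payload = assign_environments(machine, ["Environments-2", "Environments-3"])
print(json.dumps(payload["EnvironmentIds"]))
# The resulting payload would then be PUT back to the machine's API URL.
```

With 5 physical servers and 50+ environments, looping this over each server’s single registration is much less tedious than re-registering machines per environment.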

I’ve created an issue here to track the upgrade problem: https://github.com/OctopusDeploy/Issues/issues/491

Thanks for getting in touch, hope the upgrade has finished by now :wink:


Yes, one environment per client. I did as you suggested and got the machine count down to 5, which is much simpler. Thanks.

I cancelled the upgrade task once I could see it had completed the upgrade for at least one machine on each of the physical servers.

Great, thanks for the info.