Scaling planning


We’ve just passed 65 servers being managed under Octopus Deploy, and this is likely to go past 100 soon. That’s split across 6 projects and 5 NuGet feeds, and all of them have a CI environment deploying automatically from TFS builds.

At the moment it seems to hum along very well and we haven’t had any performance problems. We’re only using the built-in NuGet server for some configuration packages, with a separate ProGet server for the main repository.

What is likely to be the bottleneck, and what scales and scenarios have people tested? Does anyone have any performance tips?


Hi Anthony,

Thanks for getting in touch! And sorry about the delay in responding to this.

We have customers who not only have 700 machines in a single environment, but also run deployments that target 400 machines. One customer recently told us they run multiple Octopus Server installations to manage their hundreds of projects and machines.

For customers who push these limits in Octopus, we try to act on any feedback about slowness or UI issues (for example, the recent changes to the Tasks page that added searching and better pagination). We are also working on better test scenarios for installations at this scale, because we want them to work well. Customers with really big numbers often make heavy use of the API to get around slow-loading UI pages, or simply to manage their systems, since scripts are much easier to work with for templating purposes.
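As a rough sketch of the kind of API scripting mentioned above: Octopus collection endpoints (such as `/api/machines`) are paged with `skip`/`take` query parameters and authenticated with the `X-Octopus-ApiKey` header. The server URL and API key below are placeholders, and the exact response shape (`Items`, `TotalResults`) is assumed from the public REST API, so treat this as a starting point rather than a finished tool.

```python
import json
import urllib.request

OCTOPUS_URL = "https://octopus.example.com"  # placeholder server URL
API_KEY = "API-XXXXXXXX"                     # placeholder API key

def paged_url(base_url, path, skip, take=30):
    """Build a paged collection URL (Octopus collections page with skip/take)."""
    return f"{base_url}{path}?skip={skip}&take={take}"

def fetch_page(url, api_key=API_KEY):
    """GET one page of a collection, authenticating via the API key header."""
    req = urllib.request.Request(url, headers={"X-Octopus-ApiKey": api_key})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def all_items(base_url, path):
    """Walk every page of a collection and yield each item."""
    skip = 0
    while True:
        page = fetch_page(paged_url(base_url, path, skip))
        for item in page["Items"]:
            yield item
        if not page["Items"]:
            break
        skip += len(page["Items"])
        if skip >= page["TotalResults"]:
            break

# No network call here: just show the first page URL the script would request.
print(paged_url(OCTOPUS_URL, "/api/machines", skip=0))
# → https://octopus.example.com/api/machines?skip=0&take=30
```

Against a real server you would simply iterate `all_items(OCTOPUS_URL, "/api/machines")` and print each machine’s `Name`, which scales much better than paging through the UI by hand.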