We recently started using Octopus Deploy with 3-4 projects so far, and we suddenly have major performance issues. The server peaks at 100% CPU over long periods of time. It hosts two NuGet feeds and the Octopus server, but it's the Octopus Deploy app pool that hogs the CPU (I thought it was the NuGet feed that was slow, but it wasn't). Initially we had about 1,500 .nupkg files, and I have reduced that to approx. 600, without luck. The machine is hosted on a VM with 4 GB RAM. Any suggestions? I've attached a screenshot of the server running only Octopus Deploy.
It's hosted using NuGet.Server, and I still haven't ruled out that it might be the feeds. Right now I'm trying to move the .nupkg files off a network share and onto the server itself. I've also just upgraded to the new version to see if that helps with the issue.
From past experience, this is usually a problem with the way NuGet.Server (and NuGet file shares) work: every package has to be loaded into memory so its hash can be calculated. The more packages you have, the more memory is used and the more hashing has to be done.
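The cost is easy to see in a sketch. This is a hypothetical illustration in Python, not NuGet.Server's actual code: a feed that hashes every package on initialization has to read every byte of every .nupkg, so CPU and I/O grow linearly with the total size of the feed, regardless of how many packages a client actually asks for.

```python
import hashlib
from pathlib import Path


def hash_feed(package_dir: str) -> dict[str, str]:
    """Hash every .nupkg in a folder, the way a naive file-based feed
    must on startup -- each package's full contents gets read and hashed."""
    hashes = {}
    for pkg in sorted(Path(package_dir).glob("*.nupkg")):
        h = hashlib.sha512()  # NuGet package hashes are SHA-512
        with open(pkg, "rb") as f:
            # Read in chunks; total work scales with feed size in bytes.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        hashes[pkg.name] = h.hexdigest()
    return hashes
```

With 1,500 packages of even modest size, this adds up to gigabytes of reading and hashing, which matches the symptom of sustained 100% CPU.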
Another customer had a similar problem scaling their NuGet feeds and created a fork of NuGet.Server that uses Lucene - you could try this option:
We also found that using a local file-based NuGet feed with Octopus doesn’t scale very well, and switching to NuGet Server provided no benefit either. This inspired us to create a custom fork of NuGet Server that uses Lucene.Net and Lucene.Net.Linq to provide a scalable, lightning fast internal feed for Octopus.
In the meantime, you could try clearing out old packages to see if that solves the issue.
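Clearing out old packages can be scripted. Here is a minimal sketch (hypothetical, not an official tool) that keeps only the newest few versions of each package, assuming filenames follow the common `<id>.<major>.<minor>.<patch>.nupkg` pattern; run it with `dry_run=True` first to review what would be deleted.

```python
import re
from collections import defaultdict
from pathlib import Path

# Assumes three-part numeric versions; prerelease suffixes won't match.
NAME_RE = re.compile(r"^(?P<id>.+?)\.(?P<ver>\d+\.\d+\.\d+)\.nupkg$")


def prune_feed(package_dir: str, keep: int = 3, dry_run: bool = True) -> list[str]:
    """Return the filenames of all but the newest `keep` versions of each
    package id; delete them unless dry_run is set."""
    by_id = defaultdict(list)
    for pkg in Path(package_dir).glob("*.nupkg"):
        m = NAME_RE.match(pkg.name)
        if m:
            version = tuple(int(p) for p in m.group("ver").split("."))
            by_id[m.group("id")].append((version, pkg))
    removed = []
    for versions in by_id.values():
        versions.sort(reverse=True)  # newest first
        for _, pkg in versions[keep:]:
            removed.append(pkg.name)
            if not dry_run:
                pkg.unlink()
    return removed
```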
That seemed to do the trick. It seems that NuGet.Server just killed the CPU to the point where I couldn't even reach the machine over RDP.
Thanks for the fast response!