Frequent hangs when deploying a large Azure web app

Four logs attached, covering a combination of hung and successful deployments across dev and staging servers.

Some facts:

  • Octopus Deploy server 3.2.8
  • Deployed app size: 240MB and 3300 files
  • A successful deployment takes 4-6 minutes, of which timestamp checking accounts for about 2-3 minutes
  • Azure web app S1 Standard tier app service plan with two web apps hosted on it, total load is very low
  • Upload speed of a large binary to the web app over FTP is stable at 1.5-1.7 MiB/s, so a full upload of the web app should theoretically complete in about 3.5 minutes; in practice a clean upload (no timestamp checking) has also taken about 4-6 minutes.
  • Frequency varies, but we almost always have to retry once or twice per deployment; sometimes it fails several times in a row.
  • A clean upload (delete the deployment slot first) always succeeds, or at least it did in the 4-5 times we tried it, which suggests timestamp checking is one of the culprits here.
  • Deploying from Visual Studio seems to work well, at least in the two times we have tried it so far. We might need to test more to be certain.
  • A hung deployment’s last log output is always timestamp checking.

ServerTasks-3261.log- (1 MB)

ServerTasks-3263.log- (3 MB)

ServerTasks-3264.log- (1 MB)

ServerTasks-3266.log- (3 MB)

I see the same behavior with an app of 184 MB; it takes four retries to deploy it to the Azure Web App.

I also frequently see hangs that look exactly like the files shared earlier. Our app will deploy and then at some point just stop updating the log files; these deployments can sit there for an hour or more and never complete or produce new log messages.

We had another issue today with a hang where it looked like it got past the deploy step but then never proceeded to the next step. I have attached that log.

Details on our setup:

  • Octopus Deploy - 3.2.17
  • Server and tentacle (same machine) running on an Azure VM (North Central US)
  • Uploading to an Azure Web App (also North Central US)
  • The app with issues is 150 MB packaged, 250 MB unpacked, ~5,600 files.
  • It hangs more than 50% of the time; sometimes it takes 4-5 attempts to deploy one build.
  • So far, retrying after restarting the web slot in the Azure management portal (following a failure) has been fairly successful.

ServerTasks-3799.log.txt (880 KB)

@andrew @Joost Any updates here?
We are experiencing the same behavior.

We see the same error messages as reported in this ticket.

We have 4 apps being deployed, and often a retry is required because of a “hang” or an error.

FYI: getting rid of parallel processes fixed it. We also decided to bump up our Octopus VM, as deployments were maxing out our CPU. We will update as we do more testing.
We are still seeing what appears to be a file check for every file, but at least the deployment didn’t fail on the Octopus Server.

Any update here, please?

Hi Anjdreas,

Thanks for getting back in touch. I’ll try and summarise what we’ve done so far to address this issue. I can see you’ve been participating in a few threads about this issue, so I apologise if I’m repeating anything you already know.

The core issue

The examples we have seen so far look like threading deadlocks in WebDeploy. Microsoft.Web.Deployment has reportedly had issues with locking when using Checksum for file comparisons.

The solution

You can now use Timestamp instead of Checksum for file comparisons and see if this fixes the issue for your deployments. Other customers have reported success. This requires Octopus 3.3.3 or later.

  • You will need to modify existing steps to use Timestamp (we do not automatically switch them over)
  • Any new Azure Web App steps you create will use Timestamp by default.

A note about timestamps

To get the most benefit from using timestamps for WebDeploy sync, start using zip packages instead of NuGet packages. We cannot maintain consistent timestamps when extracting NuGet packages - see this GitHub Issue for more information.
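As a hedged illustration of why zip packages help here: the zip format stores a last-modified timestamp for every entry, which is what a Timestamp-based WebDeploy sync compares against. A minimal PowerShell sketch (the package path is a hypothetical example) that inspects the per-entry timestamps of a package, assuming .NET 4.5+ for System.IO.Compression:

```powershell
# Inspect the per-entry timestamps stored inside a zip package.
# The package path below is an illustrative assumption.
Add-Type -AssemblyName System.IO.Compression.FileSystem

$package = "C:\Packages\MyWebApp.1.0.0.zip"
$zip = [System.IO.Compression.ZipFile]::OpenRead($package)
try {
    foreach ($entry in $zip.Entries) {
        # LastWriteTime is preserved per entry in the zip format,
        # so extracted files can carry consistent timestamps.
        "{0}  {1}" -f $entry.LastWriteTime, $entry.FullName
    }
}
finally {
    $zip.Dispose()
}
```

If the timestamps shown here match your build output, a Timestamp-based sync should only re-upload files that actually changed.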

Plans for the future

We will continue to roll out further improvements to Azure Web App support, though we don’t have any fixed delivery timeframes for these:

  • ASP.NET Core 1.0 support
  • Improved Checksum support (bug fixes in WebDeploy)
  • Improved Proxy support (bug fixes in WebDeploy)

All of these rely upon us upgrading to Microsoft.Web.Deploy.3.6.0 (track this GitHub Issue). We are working with Microsoft to update the NuGet package; otherwise we will have to package and ship it ourselves. See the Release Notes for WebDeploy 3.6-beta.

Thank you for the detailed response. I will try the Timestamp solution and report back. I also look forward to the bug fixes you mention; could you please update this ticket when they ship? I’m sure others will find that info useful too.

We use OctoPack with TeamCity, and so far upgrading and setting the option to Timestamp hasn’t made a difference.

Hi Mark,

Thanks for getting back to me. So I understand better: are your deployments still consistently hanging even after changing to Timestamp instead of Checksum?

Forgive the extra questions, but I would like to get to the bottom of this:

  • Did you create a new release after changing to Timestamp? (Otherwise the deployment of an existing Release would still use Checksum)

Could you send through a failing and working deployment log as a new private post so I can analyse it further?


I just wanted to point out that this also happens with smaller web apps. My websites average around 35 MB and I also experience this problem.

Hey @Michael,

I stand corrected. I went through and logged the past few weeks’ worth of deployments. The change appears to have cut the time down by at least half, in most cases even more.

4/1/2016 1.1.578 6 Minutes
4/1/2016 1.1.577 6 Minutes
4/1/2016 1.1.576 9 Minutes
3/31/2016 1.1.575 5 Minutes
3/29/2016 1.1.568 14 Minutes
3/16/2016 1.1.545 13 Minutes
3/15/2016 1.1.544 15 Minutes

We made the Timestamp setting change in Octopus, then triggered new builds & deployments from TeamCity.

I was mistaken. Apologies and thank you for tackling this.


Hi Mark,

Thanks for your feedback. I’ve published a blog post to update others who may not be on these threads:

Hope that helps!

Hi Michael,

Thank you for the post.

One note: one of the options you recommend is deploying to a new slot and then swapping. A few questions blocked us when we considered this, but perhaps you have the answers.

  1. We (and I’m sure others) use NewRelic for server monitoring, because the built-in Azure monitoring is just not powerful enough. To do so, however, you have to install extensions, and you have to do this via the portal; we could not find a way to automate and configure these installations. Have you seen some other way?

  2. Application Settings: if we were to do this, we would need to keep all of our application settings/configs in Octopus, or manage them somewhere else and pull them in during deployment. What are you suggesting around this?

Hi Mark,

1. Stackify

APMs like Stackify and NewRelic can be installed in different ways. NewRelic, for example, can be installed as a NuGet package and included when you build, package, and eventually deploy your application.

Stackify offers a Web App Extension (which I think you’re using); you would need to add that Web App Extension to any new Site or Slot as part of the deployment, and it uses some AppSettings for your API key etc. In Octopus I would do this as follows:

  1. Azure Script Step: Create Web App (if required) adding the Web App Extension
  2. Azure Script Step: Use PowerShell to set your appSettings for Stackify for the Web App
  3. Azure Web App Step: As per normal

Sadly I haven’t been able to find a PowerShell CmdLet for adding Web App Extensions; that’s something you may find before I do, or you could raise an issue here.
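As an aside, site extensions are themselves ARM resources, so in the absence of a dedicated cmdlet one possible workaround is to create the resource directly with the generic ARM cmdlets. This is an untested sketch; the resource group, web app, and extension names below are illustrative assumptions:

```powershell
# Hypothetical sketch: install a site extension as an ARM resource
# via the generic New-AzureRmResource cmdlet, since no dedicated
# "add web app extension" cmdlet appears to exist.
New-AzureRmResource `
    -ResourceGroupName "MyResourceGroup" `
    -ResourceType "Microsoft.Web/sites/siteextensions" `
    -ResourceName "MyWebApp/StackifyHttpTracer" `
    -ApiVersion "2015-08-01" `
    -Properties @{} `
    -Force
```

If this works in your subscription, it would slot in as step 1 of the sequence above (create the Web App and add the extension) without any portal interaction.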

2. AppSettings

The PowerShell CmdLet for managing appSettings is Set-AzureRMWebApp.

The benefit of this is that you can replace Slots (or even Web Apps) any time without needing special handling - at least once you can add WebApp Extensions via PowerShell…
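To make that concrete, here is a minimal sketch of step 2 from the sequence above, run from an Azure Script Step. The resource group, app name, and setting keys are illustrative assumptions, not your real values:

```powershell
# Hedged sketch: push appSettings to a Web App before the deploy step.
# Resource group, app name, and setting keys are illustrative.
$settings = @{
    "Stackify.ApiKey"      = "your-api-key"
    "Stackify.AppName"     = "MyWebApp"
    "Stackify.Environment" = "Production"
}

Set-AzureRmWebApp `
    -ResourceGroupName "MyResourceGroup" `
    -Name "MyWebApp" `
    -AppSettings $settings
```

One caveat worth checking: -AppSettings replaces the full set of app settings on the site, so include every setting the app needs, not just the new ones.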

Hope that helps!

Thanks Mike,

Re: Stackify
The automated install of an extension is where we were struggling as well.

Re: App Settings
We are going to consider this, since manually editing them in Azure is not scalable.

Thanks again.