Package uploaded but not delivered to custom install directory

Hi,
We are running Octopus Deploy 2.0.13.1100.
We noticed a strange problem.

We have a process consisting of several steps. The first one is a “Deploy a NuGet package” step with a custom install directory defined.
When the process runs in a healthy environment and all Tentacles are OK, everything works fine. The deployed package files end up in the custom directory as expected.

Sometimes there is a problem on one of the Tentacles (low disk space, for example), and then something unexpected happens.
Octopus reports a failure to upload the package to one machine (the one with low disk space) and uploads successfully to the other (which is fine).
BUT the problem is that the .nupkg file only reaches Octopus\Applications.Tentacle\Packages and is never delivered to the custom install directory on the healthy machine.
See the attached install log:
10.100.104.13 - machine with low disk space - package upload failed.
10.100.101.14 - upload succeeded and the .nupkg was indeed uploaded, but no files appear in the custom install folder.

Again, this issue happens only when the package upload fails for one machine in the environment.

octo.txt (8 KB)

Hi,

If package uploads fail, Octopus doesn’t attempt to go any further in the deployment - it doesn’t extract the packages and it doesn’t copy them to the custom directory. This is by design; the assumption is that if you can’t upload all the packages you’re going to need, you probably don’t want us to start overwriting applications.
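To illustrate the ordering (a rough sketch with hypothetical helper names, not the actual Octopus code):

```python
# Illustrative sketch only: all uploads happen first, and a single failure
# aborts the run before any machine reaches the extract/copy phase.

class DeploymentAborted(Exception):
    pass

def upload_package(machine, package):
    # Stand-in for pushing the .nupkg to the Tentacle's Packages folder;
    # pretend the low-disk-space machine rejects the upload.
    return machine != "10.100.104.13"

def deploy_to_custom_directory(machine, package):
    # Stand-in for extracting the package and copying the files
    # to the custom install directory.
    print(f"{machine}: deployed {package} to the custom install directory")

def run_deployment(machines, package):
    # Phase 1: acquire - upload to every machine before deploying to any of them.
    for machine in machines:
        if not upload_package(machine, package):
            raise DeploymentAborted(f"Upload to {machine} failed; aborting the whole deployment")

    # Phase 2: deploy - only reached when every upload succeeded.
    for machine in machines:
        deploy_to_custom_directory(machine, package)

if __name__ == "__main__":
    try:
        run_deployment(["10.100.104.13", "10.100.101.14"], "MyApp.1.0.0.nupkg")
    except DeploymentAborted as err:
        print(err)  # 10.100.101.14 never reaches the deploy phase
```

That is why the healthy machine ends up with the .nupkg in its Packages folder but nothing in the custom install directory: the per-machine deploy phase never runs.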

Hope that helps,

Paul

Hi Paul,

That’s right, an upload failure should stop the process, BUT only on the machine where it failed. On that we agree.
The question is: why does it affect the second machine in the same environment?
From my point of view, the process should continue as usual on the healthy machine.

Igal

Hi,

When package uploads fail we abort the entire deployment, not just for that one machine.

As an example, imagine you have two machines, a web server and an application server, and the web server depends on the application server to function. If we can’t upload packages to the application server, would you really want us to proceed with the deployment to the web server? It would just result in a broken deployment.

The workaround is that if you know a particular machine is having problems, you can disable the machine and then try again.
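If you script your retries, something like this could disable the problem machine first (a sketch only: the server URL, API key and machine name are placeholders, and the /api/machines routes and IsDisabled field are assumptions to verify against your server’s /api):

```python
# Hedged sketch: disable a problem machine via the Octopus REST API before
# re-running the deployment. Assumes the /api/machines endpoints and an
# IsDisabled field on the machine resource; verify both against your version.

import requests

OCTOPUS_URL = "http://your-octopus-server"       # placeholder
HEADERS = {"X-Octopus-ApiKey": "API-XXXXXXXX"}   # placeholder API key

def disable_machine(machine_name):
    machines = requests.get(f"{OCTOPUS_URL}/api/machines/all", headers=HEADERS).json()
    machine = next(m for m in machines if m["Name"] == machine_name)
    machine["IsDisabled"] = True
    response = requests.put(f"{OCTOPUS_URL}/api/machines/{machine['Id']}",
                            headers=HEADERS, json=machine)
    response.raise_for_status()

if __name__ == "__main__":
    disable_machine("WEB-LOWDISK")  # hypothetical machine name
    # ...then re-run the deployment; disabled machines are skipped.
```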

Paul

Hi Paul,

Thanks for the answer.
Whether to proceed with the deployment on servers that have different roles is certainly debatable, but I accept that it’s a design decision.
In that case, when you decide to stop the deployment on all machines even if only one of them has a problem, the user would expect to see a failure status on all servers.
In fact, right now I get a failure status for the upload step on the “problematic” server and a success status for the upload step on the other one, even though the deployment process was actually interrupted there (files were not delivered to the custom install folder).

Is it possible to get a failure status in that situation too?

Thanks,
Igal

Hi Igal,

Copying of packages to the custom deployment folder occurs at a later step in the process than the upload, so this isn’t possible unfortunately: the upload is actually succeeding; it’s the later deployment step that isn’t being run.

Can you please let me know what you’d do with the failure information from the task, if it were possible to produce it? (We might have another trick up our sleeve that would help :))

Best regards,
Nick

Hi Nicholas,

I checked again and found that the whole step is reported as failed, even when the upload to one particular server succeeds. That’s enough for us: we can stop the next steps from executing and not continue when the “Acquire packages” step fails.

Marking the specific upload as failed in this case would be mostly informative; there is really nothing else we’d do with it.
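For anyone gating external automation on the same condition, here is a rough sketch of checking the overall result over the API before kicking off anything downstream (the State field name is assumed, so verify it against your instance; the URL, API key and task id are placeholders):

```python
# Hedged sketch: only run follow-up automation when the deployment task succeeded.
# Assumes GET /api/tasks/{id} returns a "State" field such as "Success" or "Failed".

import requests

OCTOPUS_URL = "http://your-octopus-server"       # placeholder
HEADERS = {"X-Octopus-ApiKey": "API-XXXXXXXX"}   # placeholder API key

def deployment_succeeded(task_id):
    task = requests.get(f"{OCTOPUS_URL}/api/tasks/{task_id}", headers=HEADERS).json()
    return task.get("State") == "Success"

if __name__ == "__main__":
    if deployment_succeeded("ServerTasks-123"):  # placeholder task id
        print("Acquire and deploy succeeded; continue with the next steps")
    else:
        print("Deployment failed (e.g. package acquisition); stop here")
```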

Thanks for your help.