Migration from a non-tenanted projects/deployment to a tenant-based deployment

We’re currently attempting to migrate our existing configuration (projects, variable sets, lifecycles, certificates, script modules, project variables, environments, etc.) from an existing Octopus server to a new Octopus server via a PowerShell script and Octopus’s REST API. The documentation at https://github.com/OctopusDeploy/OctopusDeploy-Api/wiki lets us access each of the resources in a fairly straightforward manner. However, we do get occasional errors when the API performs its “foreign key” checks.

Let's take this scenario as an example. Servers A and B are the existing and new Octopus servers, respectively. We typically start by copying the library variable sets: we issue a GET on http://A/api/libraryvariablesets/all, loop through each library variable set returned as a PSObject, and post that PSObject as the body of a subsequent POST request to http://B/api/libraryvariablesets. The sets are created successfully, but the Id values differ between servers A and B for the same variable set name. Is there a way to ensure that Id values are preserved during a POST request? I'm able to copy the variable sets without any problem other than the Id differences. However, when I follow up by copying projects from server A to B, I get an error indicating that the variable sets referenced by the project being created do not exist on server B.
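The copy loop described above can be sketched as follows. This is an illustrative Python version of the PowerShell loop's payload handling, not our actual script; the sample object shape and the `strip_server_fields` helper are assumptions based on the API responses described here. The key point is that server-generated fields such as `Id` and `Links` should be stripped before POSTing, since server B will assign its own:

```python
def strip_server_fields(resource):
    """Remove fields that server B generates itself.

    "Id" and "Links" are the server-assigned fields seen in the
    Octopus REST API responses described in the question.
    """
    return {k: v for k, v in resource.items() if k not in ("Id", "Links")}

# Hypothetical library variable set as returned by
# GET http://A/api/libraryvariablesets/all
old_set = {
    "Id": "LibraryVariableSets-42",
    "Name": "SharedConnectionStrings",
    "Description": "Connection strings shared across projects",
    "Links": {"Self": "/api/libraryvariablesets/LibraryVariableSets-42"},
}

# This payload is what would be POSTed to http://B/api/libraryvariablesets;
# server B then assigns a fresh Id, so keep a Name -> new Id lookup instead
# of relying on the old Id surviving the copy.
payload = strip_server_fields(old_set)
print(payload["Name"])  # SharedConnectionStrings
```

The same stripping applies to every resource type being copied; only the name is stable across the two servers.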

Is there any documentation we can use to find out the relative order in which we must copy resources from one server to another? How do we preserve the IDs of resources copied from server A to B? Are resource names always unique? We've also considered simply backing up the database and restoring it, but we can't, because we are performing configuration transformation/mapping along the way. Specifically, we are:
- Reducing the 43 environments on server A to just 6 (DEV, UAT, DEMO, STG, PRD, TEST/QA) by splitting each into a tenant and environment combination on server B. Currently, an environment exists for each unique (customer or team) + (DEV or STG or PRD or UAT) combination, and this has made the dashboard very difficult to use. For example, an environment called "UAT - A" will be mapped to environment UAT and tenant A on server B.
- When migrating the variable sets from server A to B, replacing the environment scope (as defined on server A) with the new tenant and environment scopes.
- When migrating projects, replacing the environment scope of any project step on server A with the corresponding environment and tenant scopes on server B.
- As the final step, copying machines from server A to B and updating their environment scope to the corresponding environment and tenant scopes on server B.
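The environment split described in the list above can be sketched like this. It is a minimal Python sketch assuming the "ENV - Tenant" naming convention from the "UAT - A" example; any real script would need to handle whatever other naming patterns exist among the 43 environments:

```python
# Target environments on server B, as listed in the question.
KNOWN_ENVS = ("DEV", "UAT", "DEMO", "STG", "PRD", "TEST/QA")

def split_environment(old_name, known_envs=KNOWN_ENVS):
    """Map a server A environment name like 'UAT - A' to the
    (environment, tenant) pair it becomes on server B.

    Assumes the 'ENV - Tenant' convention from the question's example;
    adjust the parsing for other naming schemes.
    """
    env, sep, tenant = old_name.partition(" - ")
    if sep and env in known_envs and tenant:
        return env, tenant
    raise ValueError(f"Cannot map environment name: {old_name!r}")

print(split_environment("UAT - A"))  # ('UAT', 'A')
```

The same mapping function can then drive the scope rewrites for variable sets, project steps, and machines, so all four bullet points stay consistent.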

Is there a better/quicker way to go about this process? How can we pair up the existing agents running on the machines so they register themselves with server B with the same roles as before, but assigned to the new environment and tenant values on server B? Has anybody done this via PowerShell remoting or Sysinternals PsExec?
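One way to approach the agent question is to generate a `Tentacle.exe register-with` command per machine and push it out via PowerShell remoting or PsExec. The sketch below builds such a command line in Python; the flag names are based on Tentacle's `register-with` command and should be verified against your Tentacle version, and the machine object shape and "ENV - Tenant" split are assumptions carried over from the earlier example:

```python
def build_register_command(machine, server_b_url, api_key):
    """Build a Tentacle.exe register-with command line for one machine.

    Flag names (--server, --apiKey, --environment, --tenant, --role)
    are assumed from Tentacle.exe's register-with command and should be
    checked against the installed Tentacle version. The environment/
    tenant split follows the 'ENV - Tenant' naming from the question.
    """
    env, _, tenant = machine["EnvironmentName"].partition(" - ")
    roles = " ".join(f'--role "{r}"' for r in machine["Roles"])
    return (
        f'Tentacle.exe register-with --instance "Tentacle" '
        f'--server "{server_b_url}" --apiKey "{api_key}" '
        f'--environment "{env}" --tenant "{tenant}" {roles} --console'
    )

cmd = build_register_command(
    {"EnvironmentName": "UAT - A", "Roles": ["web-server"]},
    "http://B",
    "API-XXXXXXXX",
)
print(cmd)
```

The generated string could then be executed remotely on each machine, e.g. via `Invoke-Command` or PsExec, so every agent re-registers with its old roles but the new environment/tenant pair.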


Hi Tristan,

I’m following up on your previous ticket here, as this forum is public and the answers will be searchable by others. Here is the code we started for this problem when the feature was introduced: https://github.com/OctopusDeploy/TenantMigratorSample
One of the major issues our team sees with this process is trying to reuse the same IDs on the new server: you will run into mapping problems, as we think you have seen. When we write migrations that move data across servers, the best approach is to use names as the identifiers. If your script can be rewritten with this in mind, it will be less error prone. Hopefully our start on the migration script will assist with this, or help you determine the order you are after.
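The name-based approach described above can be sketched as follows. The resource shapes are simplified, and the lookup helper is illustrative rather than taken from the TenantMigratorSample code; `IncludedLibraryVariableSetIds` is the field on the project resource that holds the variable set references that were failing the "foreign key" checks:

```python
def build_id_map(old_resources, new_resources):
    """Map server A IDs to server B IDs by matching on Name.

    Relies on names being unique within a resource type, which is
    exactly why names make better cross-server identifiers than IDs.
    """
    new_id_by_name = {r["Name"]: r["Id"] for r in new_resources}
    return {r["Id"]: new_id_by_name[r["Name"]] for r in old_resources}

# Hypothetical variable sets: one fetched from server A, its copy on B.
old_sets = [{"Id": "LibraryVariableSets-1", "Name": "Shared"}]
new_sets = [{"Id": "LibraryVariableSets-61", "Name": "Shared"}]
id_map = build_id_map(old_sets, new_sets)

# Rewrite a project's references before POSTing it to server B, so it
# points at server B's IDs rather than server A's.
project = {"Name": "Web", "IncludedLibraryVariableSetIds": ["LibraryVariableSets-1"]}
project["IncludedLibraryVariableSetIds"] = [
    id_map[i] for i in project["IncludedLibraryVariableSetIds"]
]
print(project["IncludedLibraryVariableSetIds"])  # ['LibraryVariableSets-61']
```

Copying dependencies first (environments, tenants, variable sets), recording each name-to-new-ID pair, and then rewriting references in dependent resources (projects, machines) before posting them gives the ordering the question asks about.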

Our migration process uses names and in-memory mapping; unfortunately it isn’t something that is public, and co-opting that process while also transforming the data won’t really work.
Moving the entire data set across to make use of the new hardware/VMs, then manually creating the tenants and reassigning machines and variable sets, might be a simpler approach and will ensure you don't miss anything.