Thursday, 25 August 2016

Upgrading Octopus Deploy from v2.6 - Give it some memory!

I've been tasked with upgrading Octopus Deploy to the latest version. There are many reasons for this, but mainly it's to look at channels, the replacement for snapshotting.

To test the upgrade before it is applied to live, I have been using a test environment running the same version of Octopus that we have in live, 2.6.

The first thing to say about Octopus 3.x is that it no longer uses a NoSQL database (RavenDB); it now uses SQL Server.  This has been widely blogged about, but from what I've seen the SQL data structure it uses is still similar to a NoSQL database, with an NVarChar(Max) column filled with JSON.
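If you're curious, once you have a 3.x instance up you can see this structure for yourself. A quick query along the lines of the one below (run from PowerShell on the SQL Server; the database, table and column names are assumptions, so check your own schema) shows the size of the JSON blobs sitting in the relational tables:

# Database, table and column names here are assumptions - adjust for your own schema
sqlcmd -S localhost -d OctopusDeploy -E -Q "SELECT TOP 5 Id, DATALENGTH([JSON]) / 1024 AS JsonKB FROM dbo.[Release] ORDER BY JsonKB DESC"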

The installation of Octopus 3.3.24 is straightforward and not really noteworthy; the wizard that runs after installation will create the database and provide an empty installation of Octopus Deploy.
After this has been installed, the next step is to migrate your existing database using a backup (with the master key).

Clicking on "Import Data" brings up a wizard that allows you to select the Octopus backup file and enter the master key.

The preview option will simulate the process, but unfortunately the "Task logs" option does not work in preview mode.

Our production backup file is 750MB, and we use Octopus for all the deployments in our CD pipeline, so we have a fair amount of deployment data.

The process to import the task logs takes a long time.  I had the memory on our test server increased to 16GB and ran the process, and it had not completed after 17 hours.  It had consumed all of the memory but not much of the processor.  It is the step of upgrading the documents that appears to take the time.

Upon Googling I found that there is a parameter that can be passed to the upgrade process to limit the history that is brought over: -maxage=

This made the command line:
"C:\Program Files\Octopus Deploy\Octopus\Octopus.Migrator.exe" migrate --instance "OctopusServer" --file "C:\Octopus\20160729-140413.octobak" --master-key "abcdefghijklmnopqrstuvwxyz" --include-tasklogs -maxage=60

After checking the migration log file, I found the last entry was:
2016-08-13 20:57:07.7216      1  INFO  Step took 00:00:00s
2016-08-13 20:57:07.7216      1  INFO  
2016-08-13 20:57:07.7216      1  INFO  Convert documents
This didn't get updated, and all of the memory on the machine (16GB) was pretty quickly consumed.
As this was running on a VM, I left the machine running for a few days, but the log file still didn't get updated.

After raising a call with Octopus Support, we found that the process requires quite a lot of memory.
Giving my VM 32GB of RAM seemed to allow the migration to complete (in 20 minutes), although it still came very close to maxing out the memory.

In short, if you have a large Octopus Raven database (ours was approximately 3GB, going by the size Windows reported for the Raven data folder), you'll need a lot of memory to upgrade, maybe more than 32GB!
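If you want the same rough figure for your own installation before you start, PowerShell will total up the Raven data folder for you (the path below is an assumption for a default 2.6 install, point it at wherever your instance keeps its RavenDB data):

# Total size of the Raven data folder in GB (the path is an assumption, adjust for your install)
$raven = Get-ChildItem "C:\Octopus\OctopusServer\Repository" -Recurse -File
[math]::Round(($raven | Measure-Object -Property Length -Sum).Sum / 1GB, 2)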

Thanks to Vanessa Love (@fly401) for all the help!

1 comment:

  1. Good Info.

    Did you restart migration after upgrading your VM's RAM to 32GB and then it took just 20 minutes from start to finish? Or did you upgrade the RAM midway during your migration process?

    Thanks,
    Abi
