Cloudy With A Chance Of Tintri - Part I

  • Posted on: 30 April 2015
  • By: David La Motta

We have been working with our friends at Tintri on a really interesting use case: using Integra to upload a VM running in a Tintri VMstore to an AWS S3 bucket.  In this blog we are going to show you precisely how we accomplish that.  Everything that goes up must come down, so in this 2-part series we are going to show you the first half: going up to the cloud.  The second half of the series will show you how to pull the VM down from S3 and bring it back to life in a Tintri VMstore.

For those of you who do not know Tintri, I highly recommend you check them out if you have any sort of virtualization (and who doesn't, in this day and age?).  Tintri works with VMware, Hyper-V and KVM, but the beauty doesn't lie only in its multi-hypervisor abilities.  My personal favorite feature is the storage abstraction layer that Tintri places on top of the hypervisors.  We exploit that capability in this very blog, as you will see below.  Sure, Tintri has a ton of other cool features such as VM-level QoS guarantees, seamless VM replication between VMstores, an awesome dashboard and more, yet the simplicity of the storage layer is what makes my watch tick.

Being an automation and integration platform, Integra also benefits from other goodies that Tintri brings to the table.  We were able to take advantage of Tintri's PowerShell Toolkit to drive the VMstore side of the upload to AWS S3.  Tintri also has a very rich REST API, so if there is something you'd like to do through those means, you certainly can.
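
To give you a flavor of what the PowerShell provider executes on the Windows VM, here is a minimal sketch.  The module and cmdlet names reflect our recollection of the Tintri Automation Toolkit and may differ by version, and the VMstore hostname is a placeholder for your own.

    # Minimal sketch: load the Tintri PS Toolkit and open a VMstore session.
    # Module/cmdlet names may differ by toolkit version; the hostname is a
    # placeholder for your own VMstore.
    Import-Module TintriPSToolkit

    $cred = Get-Credential   # VMstore admin credentials
    Connect-TintriServer -TintriServer 'vmstore01.example.com' -Credential $cred

    # Sanity check: list a few of the VMs the VMstore knows about.
    Get-TintriVM | Select-Object -First 5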

Enough said.  Let's roll up our sleeves, crack our knuckles, and get to it.

Architecture

Our setup is relatively straightforward, as you can see below.  All of the Integra components are running as VMs inside a Tintri VMstore; namely, we have 3 core VMs:

  1. CentOS 6.6 - runs the Integra Reactor and the AWS provider
  2. Windows Server 2012 - runs the Integra PowerShell provider.  It loads the Tintri PS Toolkit to communicate with the Tintri VMstore.
  3. vCenter Server Appliance 5.5 - hosts the Integra UI

The diagram shows the overall communication paths.  The UI communicates with the Reactor, which acts as a proxy to the AWS and PS providers.  Each provider communicates directly with its endpoint: the AWS provider is responsible for performing the upload to S3, and the PS provider is responsible for operating on Tintri.  The Tintri VMstore filesystem abstraction is exposed to the AWS provider so the upload can take place (though it is not shown in the diagram).

The Tintri filesystem is extremely powerful.  Having this abstraction over the hypervisor means you don't have to deal with datastores, LUNs, or volumes to access files.  Instead, you simply create a read-only NFS mount on Linux or an SMB share on Windows to access the pieces that make up a virtual machine.  In our case, we have an NFS mount on the CentOS VM where the AWS provider is running, which is how the files are accessed and uploaded to S3.
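
For reference, the Windows flavor of that access is just as simple.  Here is a hypothetical sketch using the built-in SmbShare cmdlets; the remote path is a placeholder for whatever your VMstore exposes, and remember that our actual setup uses the NFS mount on the CentOS VM instead.

    # Hypothetical: map the VMstore's SMB share on Windows.  The remote
    # path is a placeholder; the VMstore serves the VM files read-only.
    New-SmbMapping -LocalPath 'T:' -RemotePath '\\vmstore01.example.com\share'

    # The pieces that make up a VM are now just plain files.
    Get-ChildItem 'T:\' -Recurse | Select-Object -First 10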



Providers

We talked about AWS and PowerShell, so those are the providers we need for this blog.  In the image below you may notice the Azure provider lurking around.  Yes, you guessed right: if your cloud storage of choice is Azure, only one step in the workflow below would have to change in order to push the VM to Microsoft's cloud.  Or, if you are feeling adventurous and would like to keep copies of your data in S3 as well as Azure, that is something you can certainly achieve with Integra, too.
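
To make the "only one step changes" point concrete: the upload step ultimately boils down to a single cmdlet call, and its Azure counterpart looks like the sketch below.  This uses the classic Azure PowerShell storage cmdlets; the storage account, key, container, blob and file names are all placeholders.

    # Sketch of the Azure variant of the upload step (Azure.Storage module).
    # Storage account, key, container, blob and file names are placeholders.
    $accountKey = '<storage-account-key>'
    $ctx = New-AzureStorageContext -StorageAccountName 'mystorageacct' `
                                   -StorageAccountKey $accountKey

    Set-AzureStorageBlobContent -File 'T:\my-app-vm-clone\my-app-vm.vmdk' `
                                -Container 'vm-backups' `
                                -Blob 'txn-1234/my-app-vm.vmdk' `
                                -Context $ctx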

Regardless, for the workflow in this blog we are going to leverage the PowerShell provider (aptly named Tintri below) and the AWS provider.



Actions

In typical Integra fashion, your duties as an Automation Architect are to configure the actions that your workflows will consume.  There are plenty of actions configured below, which you can run independently or as part of a workflow.  Just as a developer would do when writing software, you should test individual actions to make sure they will work when run as part of a workflow.  The Integra UI allows you to do this, so configure your actions and test them until you are satisfied with the result.

The observant reader will notice that there are more actions configured than there are steps in the workflow.  As the common adage goes, "there are many ways to skin a cat".  Out of all the actions configured, the set that ended up in the workflow is the one that does the job best.  In a production environment you will more than likely remove actions that are not used in any workflow, or tag them as such.

The image below shows the results of connecting to the Tintri VMstore, for example.



Workflow

The Protect to S3 workflow is made up of 10 easy steps, ranging from loading the Tintri PS Toolkit to connecting to (and disconnecting from) the VMstore, but the core of the operation lies in the following steps, sketched in code right after the list:

  1. Snapshot the desired VM
  2. Create a new clone from the snapshot
  3. Upload the clone to S3
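
Here is a hedged sketch of those three core steps, condensed into one illustrative script.  The Tintri cmdlet names and parameters reflect our recollection of the Tintri Automation Toolkit and may differ by version (the clone cmdlet in particular is an assumption); Write-S3Object comes from the AWS Tools for PowerShell; and the VM name, share path, bucket and transaction ID are all placeholders.

    # Illustrative condensation of the core workflow steps.

    # 1. Snapshot the desired VM (Tintri Automation Toolkit; names and
    #    parameters are assumptions and may differ by toolkit version).
    $vm = Get-TintriVM -Name 'my-app-vm'
    New-TintriVMSnapshot -VM $vm -SnapshotDescription 'Protect to S3'

    # 2. Create a new clone from the snapshot (cmdlet name is an assumption).
    New-TintriVMClone -VM $vm -NewName 'my-app-vm-clone'

    # 3. Upload the clone's files to S3 (AWS Tools for PowerShell); the
    #    provider walks the mount where the clone's files are exposed.
    Get-ChildItem '\\vmstore01.example.com\share\my-app-vm-clone' |
        ForEach-Object {
            Write-S3Object -BucketName 'integra-vm-backups' `
                           -File $_.FullName `
                           -Key  "txn-1234/$($_.Name)"
        }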


10 steps and you have a Tintri VM uploaded to the cloud.  See below.

Results

We didn't create a schedule for this operation.  Instead, we manually executed the workflow directly from the Workflows tab (seen above).  This same workflow could have been executed from Integra's mobile self-service portal.  In any case, once the workflow finishes, you can see the components safely stored in the cloud.



The results in Integra obviously show a successful run; however, the purpose of the image below is to show you how the transaction ID is used.  Notice the object key in the S3 console: it is a transaction ID followed by the full path of the file that was uploaded.  That path corresponds to the NFS mount we spoke about moments ago.  Now check the transaction ID in the results, below.  Everything that is uploaded to S3 or Azure carries the transaction ID, which is one of the ways Integra helps with auditing.
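
In other words, every object key has the shape <transaction ID> + <full path under the mount>.  A tiny hypothetical illustration of that naming scheme (the transaction ID value and paths are made up):

    # Hypothetical illustration of the key layout: the workflow's
    # transaction ID prefixes the full path under the NFS mount.
    $txnId = '20150430-1234'                         # assigned per workflow run
    $path  = '/tintri/my-app-vm-clone/my-app-vm.vmdk'
    $key   = "$txnId$path"
    $key    # -> 20150430-1234/tintri/my-app-vm-clone/my-app-vm.vmdk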




Conclusion

And there you have it, folks: this is how you upload a VM hosted in a Tintri VMstore to AWS S3 using Integra.  As promised earlier, this first part of the series focuses on uploading a VM to S3; in the second part we are going to focus on restoring the VM from the files stored in S3, and bringing that VM back to life in another Tintri VMstore.  Stay tuned for that.

We want to expressly thank the good folks at Tintri for working with us on this really cool example.  There is much more to come!

Happy Tintri'ing  ;-)

--