We have been evaluating vCloud Connector 2.0 for a couple of days now, and we used the previous versions, too. We offer public/hybrid cloud services and need a solution to migrate existing workloads from our customers' vCenter servers. For us, there are some issues with vCloud Connector that prevent reasonable use. At least that is the result of our tests; maybe we are totally wrong with our assumptions.
1. Concept of moving workloads from vSphere to catalogs
While it surely makes sense to have the possibility to move existing vSphere templates to vCloud catalogs - because that is the place where they belong - I don't understand why workloads are moved like that, too.
If I want to move - or migrate - an existing workload, like a file server, mail server, etc., from vSphere to vCloud Director, I want it to be available directly and unchanged as a VM/vApp. That is what migration means.
But instead I get it as a template in a catalog. If fast provisioning is enabled, deploying is fast; if not, it takes some more time. In most cases there is no need to keep the template once the workload is running, so it has to be deleted manually. And even though I have heard nothing bad about linked clones so far, I just don't think there is a need for unnecessary chaining of VMs.
With vCloud Connector 2.0 it is possible to deploy the workload immediately and delete the template right away. Besides the fact that the vApp is configured in fenced mode, there is another - more important - catch. In our tests, a VM that was automatically deployed was configured with an IP address from the pool, and guest customization was activated (e.g. password reset). So instead of having an identical workload, I have something different.
2. Inefficient transport of workload data
Consider the following scenario (which is quite common for us). The organizational network is an external network that is connected directly (bridged) to the customer's network, with a bandwidth of up to 1 Gbit/s. So the customer can run all his applications in our cloud while accessing them from his office without a difference in network performance. Access to vCloud Director (GUI/API) is public, i.e. over the Internet, with a bandwidth of e.g. 20 Mbit/s.
So the customer deploys a vCC server, a vCC node in vSphere, and a vCC node in vCloud Director. Since it is a direct connection, all systems are within one subnet, or at least connected without NAT and at wire speed. When performing a workload copy, the following happens:
1. Export the OVF to the vCC node in vCenter
2. Copy the OVF to the vCC node in vCloud at high speed
3. The vCC node in vCloud starts the import into vCloud Director and transfers the data over the vCloud API (Internet).
That means all the data that has been sent from one node to the other is sent "back" over the network, out of the company's "slow" Internet access, and back to vCloud Director. Consider that our customers have VMs as big as 500-700 GB. That takes ages.
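To put "ages" in numbers, here is a back-of-the-envelope estimate for the step-3 upload; the 600 GB VM size is an assumption picked from the 500-700 GB range above, and the 20 Mbit/s uplink is the example figure from the scenario:

```shell
# Rough transfer-time estimate (assumed figures, see text above)
SIZE_GB=600        # VM size in decimal gigabytes (assumed, from the 500-700 GB range)
UPLINK_MBITS=20    # Internet uplink in Mbit/s (example figure from the scenario)
# 1 GB = 8000 Mbit, so seconds = GB * 8000 / (Mbit/s)
XFER_SECONDS=$(( SIZE_GB * 8000 / UPLINK_MBITS ))
XFER_HOURS=$(( XFER_SECONDS / 3600 ))
echo "${XFER_HOURS} hours"   # roughly 66 hours, i.e. almost three days
```

And that is the theoretical best case, ignoring protocol overhead and any other traffic on the line.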
Right now we are thinking about developing a solution that exports the VM as an OVA/OVF and sends it to a "service VM" in our cloud. From this service VM the OVA is deployed to the resource vCenter and then imported into vCloud Director. We only have to think about a way to connect the service VM to both the customer's network and the resource vCenter in a secure way. Exporting/importing could be done with ovftool and partly automated with vCenter Orchestrator.
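A rough sketch of what that ovftool-based path could look like. All hostnames, inventory paths, and credentials below are hypothetical placeholders, and the exact vcloud:// locator syntax should be checked against the ovftool version in use:

```shell
# 1. Export the source VM from the customer's vCenter to an OVA
#    (vi:// locator; "customer-vcenter", "DC", "fileserver01" are placeholders):
ovftool "vi://admin@customer-vcenter/DC/vm/fileserver01" /staging/fileserver01.ova

# 2. Move the OVA to the service VM over the fast bridged link,
#    e.g. with rsync, instead of pushing it out over the slow Internet uplink:
rsync -avP /staging/fileserver01.ova service-vm:/import/

# 3. From the service VM, import into vCloud Director; newer ovftool
#    releases accept a vcloud:// target locator (syntax assumed here):
ovftool /import/fileserver01.ova \
  "vcloud://user@vcd.example.com:443?org=CustomerOrg&vdc=OrgVDC&vapp=fileserver01"
```

This keeps the bulk data on the 1 Gbit/s bridged connection end to end; only the API-level import happens on our side, not across the customer's 20 Mbit/s uplink.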