Step-by-Step HCX Deployment & Configuration
VMware HCX is a workload mobility platform that lets you migrate workloads across clouds without changing IP addresses, which makes migration a largely seamless experience. HCX is included at no additional cost with a VMware Cloud on AWS subscription.
In this blog, I want to focus on deploying HCX and configuring it for VMware Cloud on AWS without too much wording. Before starting the implementation, let’s look at the prerequisites for the HCX deployment (keep in mind that multiple service meshes or multiple NE appliances need additional IP addresses; for details, click here)
Prerequisites:
- VMware Cloud on AWS SDDC already deployed, with vCenter accessible using the cloudadmin@vmc.local credentials
- Administrator account for On-Prem vCenter
- 3 IP addresses from the Management network (HCX-IX, HCX-WO & HCX-NE)
- 2 IP addresses for the Uplink network (HCX-IX & HCX-NE)
- 1 IP address from the vMotion Network (HCX-IX)
- Distributed Switch and vDS port-group/VLAN to extend L2 networks (HCX-NE)
- The management network that the HCX appliances are deployed on can’t be L2-extended
- If you are connected via Direct Connect: a non-overlapping private xx.xx.xx.xx/29 network for the HCX uplink (directConnectionNetwork1) profile
- Enough compute and storage resources to deploy the HCX appliances (HCX-IX, HCX-WO & HCX-NE)
- Open the following ports on the firewall:

| Port | Protocol | Source | Destination |
|------|----------|--------|-------------|
| 4500 | UDP | Network Extension (HCX-NE On-Prem) | Network Extension (HCX-NE in VMware Cloud on AWS) |
| 4500 | UDP | Interconnect (HCX-IX On-Prem) | Interconnect (HCX-IX in VMware Cloud on AWS) |
| 443 | TCP | HCX Connector (On-Prem) | HCX Cloud (VMware Cloud on AWS) |
| 443 | TCP | HCX Connector (On-Prem) | connect.hcx.vmware.com |
| 443 | TCP | HCX Connector (On-Prem) | hybridity-depot.vmware.com |
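If you want to sanity-check these firewall paths before running the HCX wizards, a quick reachability test from the on-prem side can save troubleshooting later. Below is a minimal Python sketch under stated assumptions: the HCX Cloud FQDN is a placeholder for your own SDDC, and the UDP 4500 tunnel ports are connectionless, so they cannot be validated with a simple connect test like this.

```python
# TCP reachability pre-check for the HCX firewall requirements.
# Assumption: the HCX Cloud FQDN below is a placeholder - replace it with the
# value shown on your SDDC's "HCX Information" page. UDP 4500 (IX/NE tunnels)
# is not covered by this simple test.
import socket

TARGETS = [
    ("hcx-cloud.example.vmwarevmc.com", 443),  # hypothetical HCX Cloud endpoint
    ("connect.hcx.vmware.com", 443),           # HCX activation service
    ("hybridity-depot.vmware.com", 443),       # HCX depot/updates
]

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in TARGETS:
    state = "open" if tcp_reachable(host, port) else "blocked/unreachable"
    print(f"{host}:{port} -> {state}")
```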
To simplify deployment and configuration, I’ll break the whole deployment down into 3 sections
1st Section: Deploy HCX-Cloud in VMware Cloud on AWS (Target Site)
2nd Section: Deploy HCX at On-Prem vCenter (Source Site) and activate the license
3rd Section: Configure HCX (Site Pairing, Compute & Network Profile setup & Service Mesh creation), extend the L2 network and perform the migration.
I will configure the on-prem Network Profiles for the HCX deployment as below:
Management-Network-Profile: (Same as the ESXi mgmt. segment; used by the HCX appliances for their management interfaces and by vSphere Replication for bulk migration)
vMotion-Network-Profile: (Same as the vMotion segment; used by the HCX Interconnect for VM mobility/vMotion to the cloud)
Uplink-Network-Profile: (Separate L3-routed network; used by the HCX Interconnect and HCX-NE for IX pairing and tunneling)
1st Section: Deploy HCX-Cloud in VMware Cloud on AWS (Target Site)
Log in to the VMC Console at https://vmc.vmware.com, click “Inventory” – “SDDCs”, then click “VIEW DETAILS” on the SDDC that you want to deploy HCX into
Go to “Add Ons” and click on “OPEN HCX”
A new tab will open; navigate to “SDDCs” and click “DEPLOY HCX”
A confirmation wizard pops up; click on “CONFIRM”
Now you can see the deployment in progress; it takes 15-20 minutes or more
While the deployment is in progress, if you go to the VMware Cloud on AWS SDDC vCenter, you will see the HCX-Cloud appliance being uploaded
Once the deployment completes, click on “OPEN”
To access the HCX console, you need firewall rules in place; you can configure them from the VMware Cloud on AWS SDDC console. Navigate to “Networking & Security”, and under Gateway Firewall click on “Management Gateway”, click “ADD RULE”, and publish the rule as per the screenshot below (the HTTPS port needs to be opened).
Log in to HCX Cloud using the cloudadmin@vmc.local credentials and navigate to “Interconnect” under the Infrastructure section; you should see the auto-created Compute Profile and Network Profiles
In the “externalNetwork” profile, 2 public IP addresses are already allocated.
Go to the Support section and click on “REQUEST DOWNLOAD LINK”. Now you should be able to download the HCX Connector OVA to deploy at the on-prem vCenter
2nd Section: Deploy HCX at On-Prem vCenter (Source Site) and activate the license
Log in to your on-prem vCenter, go to the cluster/resource pool where you want to deploy the HCX Connector, right-click, and select “Deploy OVF Template”
Click on “Local file” and select OVA
Input the name of HCX-Connector, choose a location where you want to deploy HCX-Connector
Select a compute resource and click on “NEXT”
Review Details and click on “NEXT”
Accept license agreement and click on “NEXT”
Select the storage where you want to deploy HCX-Connector and click on “NEXT”
Select the management network for HCX-Connector and click on “NEXT”
On the customize template section, input the hostname of HCX-Connector, IP address, prefix length, default gateway (if you scroll down you can see options for DNS, NTP & enable SSH service) and click on “NEXT”
Review all and click on “FINISH”
“Power On” the HCX Connector. It will take 5-20 minutes to finish initializing
Open a new browser tab, connect to https://<HCX-ConnectorIP>:9443, and supply the admin username and password you set during the HCX OVF deployment
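Since the appliance can take a while to come up, you may prefer to poll the admin UI port rather than refreshing the browser. A minimal sketch, assuming a placeholder appliance IP address:

```python
# Poll the HCX Connector admin UI (TCP 9443) until the appliance finishes
# initializing. Assumption: the IP address below is a placeholder for the one
# you assigned during the OVA deployment.
import socket
import time

HCX_CONNECTOR_IP = "192.168.10.50"  # placeholder
PORT = 9443

deadline = time.time() + 30 * 60  # give up after 30 minutes
while time.time() < deadline:
    try:
        with socket.create_connection((HCX_CONNECTOR_IP, PORT), timeout=5):
            print(f"Admin UI is up: https://{HCX_CONNECTOR_IP}:{PORT}")
            break
    except OSError:
        print("Not ready yet, retrying in 60 seconds...")
        time.sleep(60)
else:
    print("Timed out waiting for the HCX Connector to initialize")
```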
For the license/activation key, go back to your VMware Cloud on AWS portal, navigate to your SDDC – VIEW DETAILS – Add Ons – OPEN HCX, select the “Activation Keys” tab, click on “Create Activation Key”, select the Subscription and System Type, and click on “CONFIRM”
Once the activation key is generated, copy it to activate the on-prem HCX
Now go back to your on-prem HCX, enter the activation key that you generated in the previous step, and click on “ACTIVATE”
Note: If activation fails, make sure https://connect.hcx.vmware.com and https://hybridity-depot.vmware.com are reachable over HTTPS (TCP 443). If your environment connects via a proxy, go to the Administration tab and set the proxy before activating the license key
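For a quick way to confirm those two activation endpoints are reachable (optionally through a proxy), here is a minimal sketch; the proxy URL is hypothetical and only needed if your environment requires one:

```python
# Reachability check for the HCX activation endpoints, with optional proxy.
# Assumption: the PROXY value is hypothetical - set it only if your
# environment requires an HTTP(S) proxy.
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://connect.hcx.vmware.com",
    "https://hybridity-depot.vmware.com",
]
PROXY = None  # e.g. "http://proxy.corp.example:3128" (hypothetical)

handlers = [urllib.request.ProxyHandler({"https": PROXY})] if PROXY else []
opener = urllib.request.build_opener(*handlers)

for url in ENDPOINTS:
    try:
        # Any HTTP response (even a 4xx) proves the endpoint is reachable.
        opener.open(url, timeout=10)
        print(f"{url}: reachable")
    except urllib.error.HTTPError as err:
        print(f"{url}: reachable (HTTP {err.code})")
    except Exception as err:
        print(f"{url}: NOT reachable ({err})")
```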
Supply your on-prem location and click on “CONTINUE”
Supply HCX System name and click on “CONTINUE”
It will display a successful activation message; click on “YES, CONTINUE” to proceed with the HCX configuration
Supply your on-prem vCenter URL, username, and password, and click on “CONTINUE”
Supply your PSC information (in my case, the PSC is embedded, so it’s the same as my vCenter)
Click on “RESTART” for your changes to take effect
3rd Section: Configure HCX (Site Pairing, Compute & Network Profile & Service Mesh creation), extend L2 network and perform the migration
After the HCX restart, log out of your on-prem vCenter and log back in; the HCX plug-in should now be installed. In the vSphere Client, click on the “HCX” plug-in
It lands on the HCX Dashboard section; navigate to “Site Pairing” and click on “CONNECT TO REMOTE SITE”
To find your target (cloud) side HCX details, go back to your VMware Cloud on AWS SDDC portal, click on “Settings”, go to the “HCX Information” section, and note down or copy the HCX FQDN and Public IP (if you are connecting via Direct Connect, note down the Private IP as well). In this blog, I am using the public address for the pairing
Come back to your on-prem HCX console, supply the target (cloud) HCX URL (if you are connected via Direct Connect, use the Private IP) and the cloudadmin credentials, and click on “CONNECT”
Note: If the FQDN can’t be resolved, use the IP address, and make sure firewall rules are in place so your on-prem HCX Connector can talk to the remote-site HCX Cloud via HTTPS (port 443).
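A quick way to test both conditions (name resolution and HTTPS reachability) from a machine on the on-prem management network is sketched below; the FQDN is a placeholder for the value shown on your SDDC’s HCX Information page:

```python
# DNS and HTTPS reachability check for the remote HCX Cloud.
# Assumption: the FQDN below is a placeholder - use the value from the
# SDDC "HCX Information" page.
import socket

HCX_CLOUD_FQDN = "hcx.sddc-xx-xx-xx-xx.vmwarevmc.com"  # placeholder

try:
    ip = socket.gethostbyname(HCX_CLOUD_FQDN)
    print(f"{HCX_CLOUD_FQDN} resolves to {ip}")
except socket.gaierror:
    print(f"{HCX_CLOUD_FQDN} does not resolve - pair using the IP address instead")
    ip = None

if ip:
    try:
        with socket.create_connection((ip, 443), timeout=5):
            print("TCP 443 to HCX Cloud is open")
    except OSError:
        print("TCP 443 to HCX Cloud is blocked - check your firewall rules")
```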
Once the pairing succeeds, you will see the Site Pairing status.
In my case, the on-prem HCX Connector is in Tokyo, Japan, and HCX Cloud is in the US West 2 (Oregon) region
A Network Profile defines a range of IP addresses that the HCX Connector can assign to its virtual appliances (HCX-IX, HCX-WO & HCX-NE)
In my case, I am creating 3 Network Profiles as I mentioned earlier.
MGMT-Network-Profile
vMotion-Network-Profile
Uplink-Network-Profile
In the HCX vCenter plugin, navigate “Interconnect”, select “Network Profile”, click on “CREATE NETWORK PROFILE”
Select “Distributed Port Groups”, select the management network from the list (ex: m01-vds01-pg-mgmt), and input a name for the network profile (ex: MGMT-Network-Profile)
Set the range of available IP addresses. These IP addresses will be assigned from the management network to the Interconnect and Network Extension appliances (you will need a minimum of 2 private IPs). Select the prefix length, gateway IP, and DNS, and click on “CREATE”
Next, create the vMotion Network Profile: select the vMotion network from the list and input a name for the network profile (ex: vMotion-Network-Profile). These IP addresses will be assigned from the vMotion network to the Interconnect (HCX Mobility Proxy); you need a minimum of 1 vMotion IP address. Set the prefix length and click on “CREATE”
Create the Uplink Network Profile: select the uplink network from the list and input a name for the network profile (ex: Uplink-Network-Profile). These IP addresses will be assigned from the uplink network to the Interconnect and Network Extension appliances; you need a minimum of 2 uplink IP addresses (for an NE HA deployment you need more IPs). Set the prefix length and gateway and click on “CREATE”
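Before typing the ranges into HCX, it can help to sanity-check that each planned range sits inside the right subnet and has enough addresses. A minimal sketch; all subnets and ranges below are example values, substitute your own:

```python
# Sanity-check the IP ranges planned for the three network profiles.
# Assumption: every subnet/range below is an example value, not from any real
# environment - replace them with your own before running.
import ipaddress

profiles = {
    # name: (subnet, first_ip, last_ip, minimum IPs needed)
    "MGMT-Network-Profile":    ("10.10.10.0/24", "10.10.10.50", "10.10.10.52", 3),  # IX, WO, NE
    "vMotion-Network-Profile": ("10.10.20.0/24", "10.10.20.50", "10.10.20.50", 1),  # IX only
    "Uplink-Network-Profile":  ("10.10.30.0/24", "10.10.30.50", "10.10.30.51", 2),  # IX + NE
}

for name, (subnet, first, last, needed) in profiles.items():
    net = ipaddress.ip_network(subnet)
    start, end = ipaddress.ip_address(first), ipaddress.ip_address(last)
    count = int(end) - int(start) + 1
    ok = start in net and end in net and count >= needed
    print(f"{name}: {first}-{last} ({count} IPs) in {subnet} -> "
          f"{'OK' if ok else 'CHECK: outside subnet or too few IPs'}")
```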
A Compute Profile describes which HCX services run and how they are deployed when the Service Mesh is created.
Navigate to the “Interconnect” section, select “Compute Profile”, and click on “CREATE COMPUTE PROFILE”
Supply the Compute Profile Name and click on “CONTINUE”
Select the services to be activated and click on “CONTINUE”
Select Service Resources from the drop-down menu and click on “CONTINUE”
Select Deployment Resources (cluster/resource pool, DataStore, folder) to deploy HCX appliances (HCX-IX, HCX-WO & HCX-NE) and click on “CONTINUE”
Set any Interconnect Appliance Reservation Settings if required (it’s optional; I keep the default)
Select the Management Network Profile that you created in the Network Profile section and click on “CONTINUE”
Select the vMotion Network Profile that you created in the Network Profile section and click on “CONTINUE”
Select the Uplink Network Profile that you created in the Network Profile section and click on “CONTINUE”
Select the vSphere Replication Network Profile that you created in the Network Profile section and click on “CONTINUE”. In my case, I am using the same MGMT-Network-Profile for vSphere Replication as well
From the Network Containers drop-down menu, select the networks that are eligible for HCX Network Extension operations (only vDS and NSX-T are supported) and click on “CONTINUE”
Review the firewall rules, make sure they are in place before creating the Service Mesh, and click on “CONTINUE”
Click on “FINISH”
You have now created your Compute Profile; it should look similar to the below
A Service Mesh specifies a local and remote Compute & Network Profile pair. When a Service Mesh is created, the HCX service appliances (HCX-IX, HCX-WO & HCX-NE) are deployed on both the source and destination sites and automatically configured by HCX to create a secure, optimized transport fabric.
Select “Service Mesh” tab under Interconnect section and click on “CREATE SERVICE MESH”
Select the local site and remote site between which the Mesh needs to be created and click on “CONTINUE”
From the source Compute Profile drop-down menu, select the Compute Profile that you created manually on the on-prem site in the earlier steps. Select the remote Compute Profile (this was auto-created on the VMware Cloud on AWS side) and click on “CONTINUE”
Select the services that you need to enable on the Service Mesh and click on “CONTINUE”
Select the source and destination Uplink Network mapping (if you are using Direct Connect, select directConnectionNetwork1 for the uplink)
For the NE appliance, choose the vDS and appliance count and click on “CONTINUE”
Check the “Application Path Resiliency” and “TCP Flow Conditioning” boxes and click on “CONTINUE”
Review the configuration and click on “CONTINUE”
Provide a name for the Service Mesh and click on “FINISH”
To view the progress, click on “Tasks” in the Service Mesh that is being deployed; it should take 20-60 minutes to deploy.
Once it is successfully deployed, you will see all HCX services green under the “Service Mesh” section
You can also see those appliances in the on-prem vCenter inventory
Go to “Network Extension” section and click on “CREATE A NETWORK EXTENSION”
Select the Service Mesh, check the “Source Port-group/VLAN” box, and click on “NEXT”
Select “Compute Gateway” as the Destination First Hop Route. In the Source Network section, input the default gateway IP with the prefix for that VLAN, choose the Compute Gateway, and click on “SUBMIT”
Note: If you want MON enabled, check the “Mobility Optimized Networking” box.
Review the Progress
Make sure L2-EXT is successful
In this section, I am migrating an on-prem workload to the VMware Cloud on AWS SDDC
Select “Migration” and click on “MIGRATE”
Verify the source and destination connection, input a “Group name”, select the VM(s) you want to migrate, and click on “ADD”
Select the destination site, resource pool, folder, storage, and migration option (vMotion, Bulk, or RAV)
Verify the network mapping and click on “VALIDATE”; once it shows a successful status, click on “GO” to start the migration
Verify the Migration progress
During the vMotion migration, it is expected that a few pings will drop
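If you want to quantify that brief loss during the cutover, a simple ping monitor works well. A minimal sketch for a Linux jump host; the VM IP is a placeholder:

```python
# Count dropped pings to a migrating VM during the HCX cutover window.
# Assumptions: Linux-style ping flags; the VM IP below is a placeholder.
import subprocess
import time

VM_IP = "192.168.100.25"  # placeholder for the VM being migrated
DURATION = 300            # seconds to monitor
sent = lost = 0

end = time.time() + DURATION
while time.time() < end:
    sent += 1
    # Single ICMP echo with a 1-second timeout (Linux: -c count, -W timeout).
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", VM_IP],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    if result.returncode != 0:
        lost += 1
        print(f"{time.strftime('%H:%M:%S')}: ping lost")
    time.sleep(1)

print(f"Sent {sent}, lost {lost} ({100 * lost / sent:.1f}% loss) during the window")
```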