Objective 4.2 – Deploy VMware NSX Components

Principles

  1. Install/Register NSX Manager
  2. Prepare ESXi Hosts
  3. Deploy NSX Controllers
  4. Understand assignment of Segment ID Pool and appropriate need for Multicast addresses
  5. Install Guest Introspection
  6. Understand when to use IP Pools vs DHCP for VTEP configuration

References

  • NSX Administration Guide

https://pubs.vmware.com/NSX-62/topic/com.vmware.ICbase/PDF/nsx_62_admin.pdf

  • NSX Installation Guide

https://pubs.vmware.com/NSX-62/topic/com.vmware.ICbase/PDF/nsx_62_install.pdf

General Workflow

From installation guide:

Install/Register NSX Manager

  • Deploy the NSX Manager appliance to the management vCenter and register it with vCenter.
  • Hosts must be connected to a vDS before VXLAN can be configured.

System requirements

Component            Size                RAM     vCPU   Disk
NSX Manager          < 256 hypervisors   4GB     4      60GB
NSX Manager          > 256 hypervisors   8GB     8      60GB
NSX Controller       –                   4GB     4      20GB
NSX Edge             Compact             512MB   1      500MB
NSX Edge             Large               1GB     2      500MB + 512MB
NSX Edge             Quad Large          1GB     4      500MB + 512MB
NSX Edge             X-Large             8GB     6      500MB + 1GB
Guest Introspection  –                   1GB     2      4GB
NSX Data Security    –                   512MB   1      6GB per ESXi host

Registration

Log in to the NSX Manager admin console, go to “Manage vCenter Registration”, and configure the Lookup Service URL and the vCenter connection. The Lookup Service user must be a vCenter SSO administrator.
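
Registration can also be verified over the NSX REST API. The following is a minimal sketch, assuming the /api/2.0/services/vcconfig endpoint and field names from the NSX 6.2 API guide; the hostname and credentials are placeholders for your environment.

# Minimal sketch: query NSX Manager's vCenter registration via the REST API.
# Endpoint and field names follow the NSX 6.2 API guide; adjust for your version.
import requests
import xml.etree.ElementTree as ET

NSX_MGR = "nsxmgr.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "changeme")   # placeholder credentials

# NSX Manager ships with a self-signed certificate, hence verify=False
resp = requests.get(f"https://{NSX_MGR}/api/2.0/services/vcconfig",
                    auth=AUTH, verify=False)
resp.raise_for_status()

vc = ET.fromstring(resp.text)
print("Registered vCenter:", vc.findtext("ipAddress"))
print("Registration user: ", vc.findtext("userName"))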

Prepare ESXi Hosts

Host preparation is conducted at the cluster level and installs the NSX VIBs on each ESXi host.

Note: on stateless (Auto Deploy) ESXi hosts, the VIBs must be installed manually and added to the appropriate image profile.

The path to the host VIBs can be obtained from https://<NSX_MANAGER_IP>/bin/vdn/nwfabric.properties.

Always check the path as it can change across NSX versions.

e.g.

# 6.0 VDN EAM Info
VDN_VIB_PATH.1=/bin/vdn/vibs-6.4.0/6.0-7563456/vxlan.zip
VDN_VIB_VERSION.1=7563456
VDN_HOST_PRODUCT_LINE.1=embeddedEsx
VDN_HOST_VERSION.1=6.0.*

Note: the VIB files must be downloaded with a web browser or an scp client.
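
As a scripted alternative, the sketch below pulls nwfabric.properties and downloads the VIB bundle with Python and the requests library. The hostname and credentials are placeholders; the key names follow the example above.

# Minimal sketch: fetch nwfabric.properties from NSX Manager, extract the VIB
# path, and download the bundle. NSX_MGR and AUTH are placeholders.
import requests

NSX_MGR = "nsxmgr.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "changeme")   # placeholder credentials

resp = requests.get(f"https://{NSX_MGR}/bin/vdn/nwfabric.properties",
                    auth=AUTH, verify=False)
resp.raise_for_status()

# The file is simple key=value pairs, e.g.
# VDN_VIB_PATH.1=/bin/vdn/vibs-6.4.0/6.0-7563456/vxlan.zip
props = dict(line.split("=", 1) for line in resp.text.splitlines()
             if "=" in line and not line.startswith("#"))

vib_path = props["VDN_VIB_PATH.1"]
print("VIB path:", vib_path)

# Download the VIB bundle itself
vib = requests.get(f"https://{NSX_MGR}{vib_path}", auth=AUTH, verify=False)
with open("vxlan.zip", "wb") as f:
    f.write(vib.content)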

Pre-requisites:

  1. Register NSX Manager with vCenter
  2. Forward and reverse name resolution for NSX Manager; the reverse lookup should return the NSX Manager FQDN (a quick check is sketched after this list)
  3. ESXi hosts can resolve the NSX Manager FQDN
  4. Port 80 is open from ESXi hosts to vCenter
  5. vCenter and ESXi hosts clocks match
  6. ESXi cluster hosts are attached to a common vDS i.e. all hosts within a given cluster must be connected to the same vDS or set of vDS switches.
  7. Disable vSphere Update Manager (VUM)
  8. Resolve any issues reported in the Host Preparation tab:
    1. Networking and Security -> Installation and Upgrade -> Host Preparation
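
Pre-requisites 2 and 3 can be checked quickly with the Python standard library. A minimal sketch, assuming the placeholder FQDN below and that it is run from a host using the same DNS servers as vCenter and the ESXi hosts:

# Quick forward/reverse DNS sanity check for pre-requisites 2 and 3.
import socket

NSX_FQDN = "nsxmgr.lab.local"  # placeholder NSX Manager FQDN

# Forward lookup: FQDN -> IP
ip = socket.gethostbyname(NSX_FQDN)
print(f"Forward: {NSX_FQDN} -> {ip}")

# Reverse lookup: IP -> FQDN (should return the NSX Manager FQDN)
name, _, _ = socket.gethostbyaddr(ip)
print(f"Reverse: {ip} -> {name}")
assert name.lower() == NSX_FQDN.lower(), "Reverse lookup does not match FQDN"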

Installation

  1. Navigate to: Networking and Security -> Installation and Upgrade -> Host Preparation
  2. Click “Actions -> Install” against each cluster to be prepared for NSX
  3. When deployment is complete, the “Installation Status” column shows the version of NSX deployed on that cluster
  • When a new host is added to an NSX prepared cluster, the required VIBs are automatically installed on it
  • When a host is removed from a cluster, the VIBs are automatically removed
  • Note: hosts must be rebooted following VIB removal to complete the process
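
Preparation status can also be polled over the REST API. A minimal sketch, assuming the /api/2.0/nwfabric/status endpoint from the NSX 6.2 API guide; the cluster MoRef ID is a placeholder from my lab:

# Poll host-preparation (network fabric) status for a cluster via the NSX API.
import requests

NSX_MGR = "nsxmgr.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "changeme")   # placeholder credentials
CLUSTER_MOID = "domain-c7"     # placeholder vCenter cluster managed-object ID

resp = requests.get(f"https://{NSX_MGR}/api/2.0/nwfabric/status",
                    params={"resource": CLUSTER_MOID},
                    auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.text)  # XML listing each feature's installed version and status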

Deploy NSX Controllers

Pre-requisites

  • Controllers must always be deployed as a cluster of three
  • The Controller datastore must deliver peak write latency below 300ms and mean write latency below 100ms
  • NSX Manager deployed and registered with vCenter
  • Determine IP Pool settings for Control Cluster

Deployment Procedure

  • Navigate to Home > Networking & Security > Installation and select the Management tab
  • Click to add a new Controller
  • Controllers should be attached to a non-VXLAN port group that has access to the NSX Manager and ESXi hosts
  • The IP Pool can be pre-configured or added as part of the Controller deployment process
  • Configure a DRS Anti-Affinity rule to prevent Controllers from running on the same host
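
Controllers can alternatively be deployed through the REST API. The sketch below assumes the /api/2.0/vdn/controller endpoint and controllerSpec schema from the NSX 6.2 API guide; every ID and the password are lab placeholders.

# Deploy one NSX Controller via the REST API instead of the Web Client.
import requests

NSX_MGR = "nsxmgr.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "changeme")   # placeholder credentials

# All IDs below are placeholders; look them up in your own vCenter/NSX
controller_spec = """
<controllerSpec>
  <name>nsx-controller-1</name>
  <ipPoolId>ipaddresspool-1</ipPoolId>
  <resourcePoolId>domain-c7</resourcePoolId>
  <datastoreId>datastore-10</datastoreId>
  <networkId>dvportgroup-20</networkId>
  <password>VMware1!VMware1!</password>
</controllerSpec>
"""

resp = requests.post(f"https://{NSX_MGR}/api/2.0/vdn/controller",
                     data=controller_spec,
                     headers={"Content-Type": "application/xml"},
                     auth=AUTH, verify=False)
resp.raise_for_status()
print("Deployment job:", resp.text)  # returns a job ID that can be polled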

Understand assignment of Segment ID Pool and appropriate need for Multicast addresses

  • VXLAN segments are built between VTEP endpoints, e.g. on ESXi hosts
  • Each segment has a unique ID (VNI), assigned from the Segment ID Pool

Pre-requisites

  • The size of the Segment ID Pool determines the number of VXLANs that can be configured
  • Range = 5000-16777215 (approx. 16M)
  • Maximum pool size per vCenter = 10,000 (the maximum number of dvPortGroups)
  • Ensure VNIs do not overlap with existing NSX installations
  • Unicast and Multicast ranges can be added

Configuration

  • Segment ID Pools are configured from:

Home > Networking & Security > Installation -> Logical Network Preparation -> Segment ID

  • Click Edit and configure the range of Unicast or Multicast pools. The latter is needed if the Transport Zones will use Multicast or Hybrid replication mode
  • A range of Multicast addresses prevents a single multicast address from being overloaded and better contains BUM replication
  • When using Multicast or Hybrid mode, multicast traffic is only sent to hosts that have sent IGMP Join messages – otherwise it’s broadcast to all hosts
  • Multicast/Hybrid requires the following to operate correctly:
    • Transport Network MTU >= 1600
    • Enable IGMP Snooping
    • Configure an IGMP Querier on the Transport VLAN
    • Use the recommended multicast address range on the transport zone
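
The Segment ID Pool can also be created over the REST API. A minimal sketch, assuming the /api/2.0/vdn/config/segments endpoint from the NSX 6.2 API guide; the pool name and 5000-5999 range are arbitrary examples, and multicast address ranges have an equivalent endpoint (/api/2.0/vdn/config/multicasts):

# Create a Segment ID (VNI) pool via the NSX REST API.
import requests

NSX_MGR = "nsxmgr.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "changeme")   # placeholder credentials

# Example range only; must not overlap other NSX installations
segment_range = """
<segmentRange>
  <name>Primary-Segment-Pool</name>
  <begin>5000</begin>
  <end>5999</end>
</segmentRange>
"""

resp = requests.post(f"https://{NSX_MGR}/api/2.0/vdn/config/segments",
                     data=segment_range,
                     headers={"Content-Type": "application/xml"},
                     auth=AUTH, verify=False)
resp.raise_for_status()
print("Segment ID pool created, HTTP", resp.status_code)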

Install Guest Introspection

  • Installing the Guest Introspection service deploys:
    • A new VIB on each host in the cluster
    • A service VM on each host in the cluster
  • Required for Activity Monitoring and some 3rd-party security solutions
  • Note: Service VMs cannot be vMotioned to another host

Pre-requisites

  • Supported versions of vCenter and ESXi deployed
  • Host clusters prepared for NSX (not required if GI is used only for anti-virus offload)
  • NSX Manager and the hosts are synchronized to the same NTP source
  • (Optional) Configure IP Pool for services VMs

Installation

  • Go to: Networking & Security > Installation and Upgrade > Service Deployment and click Add

Step 1: Select services and schedule: select Guest Introspection and specify a schedule

Step 2: Select Clusters

Step 3: Select Storage and Management Network

There are two options for Datastore and Network:

  1. Pick a Datastore/Network from the drop-down
  2. Select “Specified on Host”

If “Specified on Host” is selected, the datastore and network must be configured manually on each host after the deployment completes.

Understand when to use IP Pools vs DHCP for VTEP configuration

IP addresses for VTEPs (assigned during host preparation) may come from either an IP Pool or DHCP.

Where clusters are “striped” across racks (e.g. two hosts per rack) and all inter-rack connectivity is routed (e.g. a Spine/Leaf fabric), the recommended way to assign VTEP addresses is DHCP. This is because the IP allocation method is defined at the cluster level in NSX. With the DHCP option, each host in the cluster obtains an IP from the DHCP server serving its rack (typically via an IP helper address on the top-of-rack switches). An alternative is to let DHCP time out and then manually apply the VTEP IPs through the vSphere Web Client or a scripting approach, e.g. PowerShell.

For the more common topology where the ToR switch operates at L2 only, a single transport VLAN can be presented to all racks, allowing all hosts within a cluster to operate on the same subnet. In this case, IP address assignment may be done through either DHCP or an IP Pool.
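
Where the IP Pool option is chosen, the pool can be pre-created via the REST API. A minimal sketch, assuming the ipam pools endpoint and ipamAddressPool schema from the NSX 6.2 API guide; the subnet values and pool name are lab placeholders:

# Pre-create a VTEP IP pool via the NSX REST API (the "IP Pool" alternative
# to DHCP discussed above). All addresses below are placeholders.
import requests

NSX_MGR = "nsxmgr.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "changeme")   # placeholder credentials

ip_pool = """
<ipamAddressPool>
  <name>VTEP-Pool-Rack1</name>
  <prefixLength>24</prefixLength>
  <gateway>172.16.10.1</gateway>
  <ipRanges>
    <ipRangeDto>
      <startAddress>172.16.10.11</startAddress>
      <endAddress>172.16.10.50</endAddress>
    </ipRangeDto>
  </ipRanges>
</ipamAddressPool>
"""

resp = requests.post(
    f"https://{NSX_MGR}/api/2.0/services/ipam/pools/scope/globalroot-0",
    data=ip_pool,
    headers={"Content-Type": "application/xml"},
    auth=AUTH, verify=False)
resp.raise_for_status()
print("Created IP pool:", resp.text)  # returns the new pool ID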