- Install/Register NSX Manager
- Prepare ESXi Hosts
- Deploy NSX Controllers
- Understand assignment of Segment ID Pool and appropriate need for Multicast addresses
- Install Guest Introspection
- Understand when to use IP Pools vs DHCP for VTEP configuration
- NSX Administration Guide
- NSX Installation Guide
From installation guide:
Install/Register NSX Manager
- Add the NSX Manager to the management vCenter and register with it.
- Hosts must be connected to a vDS to configure VXLANs
| Appliance | Memory | vCPU | Disk |
|---|---|---|---|
| NSX Manager (< 256 hypervisors) | 4GB | 4 | 60GB |
| NSX Manager (> 256 hypervisors) | 8GB | 8 | 60GB |
| NSX Edge (Large) | 1GB | 2 | 500MB + 512MB |
| NSX Edge (Quad Large) | 1GB | 4 | 500MB + 512MB |
| NSX Edge (X-Large) | 8GB | 6 | 500MB + 1GB |
| NSX Data Security | 512MB | 1 | 6GB per ESXi |
Log in to the admin console, go to “Manage vCenter Registration” and configure the Lookup Service URL + vCenter. The Lookup Service user must be a vCenter SSO Administrator.
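The Lookup Service URL format depends on the vSphere version: vSphere 5.5 serves it on port 7444, while vSphere 6.0+ serves it on 443 via the Platform Services Controller. A minimal helper for building the URL (hostnames hypothetical):

```python
def lookup_service_url(host, vsphere_major=6):
    """Build the SSO Lookup Service URL entered on the
    'Manage vCenter Registration' page.

    vSphere 5.5 exposes the Lookup Service on port 7444; from
    vSphere 6.0 onward it is served on the default HTTPS port (443).
    """
    if vsphere_major >= 6:
        return "https://%s/lookupservice/sdk" % host
    return "https://%s:7444/lookupservice/sdk" % host
```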
Prepare ESXi Hosts
Host preparation is conducted on a Cluster level and installs NSX VIBs on ESXi hosts.
Note: VIBs must be installed manually on stateless ESXi hosts and added to the appropriate image.
Path for host VIBs can be obtained from https://<NSX_MANAGER_IP>/bin/vdn/nwfabric.properties.
Always check the path as it can change across NSX versions.
The file begins with a comment line such as `# 6.0 VDN EAM Info`, followed by the per-version VIB entries.
Note: files must be downloaded with a web browser or scp client.
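Since nwfabric.properties is plain key=value text, the VIB paths can be pulled out programmatically once downloaded. A sketch assuming an illustrative file layout (the real keys, versions and paths vary by NSX release, so always verify against the file itself):

```python
def parse_vib_paths(properties_text):
    """Extract VIB bundle paths from nwfabric.properties content.

    The file is Java-properties style key=value text; keys of the
    form VDN_VIB_PATH.<n> carry the per-version VIB bundle path.
    """
    paths = {}
    for line in properties_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        key, _, value = line.partition("=")
        if key.startswith("VDN_VIB_PATH"):
            paths[key] = value
    return paths

# Illustrative content only -- versions and paths are hypothetical.
sample = """\
# 6.0 VDN EAM Info
VDN_VIB_PATH.3=/bin/vdn/vibs-6.4.4/6.0-12345678/vxlan.zip
VDN_VIB_VERSION.3=12345678
"""
```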
- Register NSX Manager with vCenter
- Forward and reverse name resolution for NSX Manager. The reverse lookup should return the NSX Manager FQDN.
- ESXi hosts can resolve the NSX Manager
- Port 80 is open from ESXi hosts to vCenter
- vCenter and ESXi hosts clocks match
- All hosts within a given cluster must be attached to the same vDS (or set of vDS switches)
- Disable VUM
- Resolve any issues reported in the Host Preparation tab (Networking and Security -> Installation and Upgrade -> Host Preparation)
- Navigate to: Networking and Security -> Installation and Upgrade -> Host Preparation
- Click “Actions -> Install” against each cluster to be prepared for NSX
- When deployment is complete, the “Installation Status” column shows the version of NSX deployed on that cluster
- When a new host is added to an NSX prepared cluster, the required VIBs are automatically installed on it
- When a host is removed from a cluster, the VIBs are automatically removed
- Note: hosts must be rebooted following VIB removal to complete the process
Deploy NSX Controllers
- The Controller cluster must always consist of three nodes
- Controller datastore requirements: peak disk write latency under 300ms, mean write latency under 100ms
- NSX Manager deployed and registered with vCenter
- Determine IP Pool settings for Control Cluster
- Navigate to Home > Networking & Security > Installation and select the Management tab
- Click to add a new Controller
- Controllers should be attached to a non-VXLAN port group that has access to NSX Manager and the ESXi hosts
- The IP Pool can be pre-configured or added as part of the Controller deployment process
- Configure a DRS Anti-Affinity rule to prevent Controllers from running on the same host
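Controller deployment can also be driven through the NSX REST API (POST to `/api/2.0/vdn/controller`). A hedged sketch of assembling the XML request body — the element names follow the NSX-v API, and all inventory IDs and the password below are placeholders:

```python
import xml.etree.ElementTree as ET

def controller_spec(name, ip_pool_id, resource_pool_id,
                    datastore_id, network_id, password):
    """Assemble a controllerSpec XML body for the NSX-v controller
    deployment API. The managed-object IDs (resgroup-*, datastore-*,
    dvportgroup-*) come from the vSphere inventory."""
    spec = ET.Element("controllerSpec")
    for tag, value in [("name", name),
                       ("ipPoolId", ip_pool_id),
                       ("resourcePoolId", resource_pool_id),
                       ("datastoreId", datastore_id),
                       ("networkId", network_id),
                       ("password", password)]:
        ET.SubElement(spec, tag).text = value
    return ET.tostring(spec, encoding="unicode")
```

Deploying the three-node cluster means repeating the POST three times, one controller at a time.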
Understand assignment of Segment ID Pool and appropriate need for Multicast addresses
- VXLAN segments are built between VTEP endpoints e.g. on an ESXi Host
- Each segment has a unique ID (VNI) assigned from a pool
- The size of the Segment ID Pool determines the number of VXLANs that can be configured
- Range = 5000-16777215 (approx. 16M)
- Max pool size per vCenter = 10,000 (the maximum number of dvPortGroups)
- Ensure VNIs do not overlap with existing NSX installations
- Unicast and Multicast ranges can be added
- Segment ID Pools are configured from:
Home > Networking & Security > Installation -> Logical Network Preparation -> Segment ID
- Click Edit and configure the range of Unicast or Multicast Pools. The latter is needed if the Transport Zones will use Multicast or Hybrid as replication mode
- Using a range of multicast addresses prevents a single multicast address from being overloaded and better contains BUM replication
- When using Multicast or Hybrid mode, multicast traffic is only sent to hosts that have sent IGMP Join messages – otherwise it’s broadcast to all hosts
- Multicast/Hybrid requires the following to operate correctly:
- Transport Network MTU >= 1600
- Enable IGMP Snooping
- Configure an IGMP Querier on the Transport VLAN
- Use a multicast range from the administratively scoped 239.0.0.0/8 block on the transport zone
- Do not use 239.0.0.0/24 or 239.128.0.0/24, as these map to MAC addresses that are flooded by the physical switches. See https://tools.ietf.org/html/draft-ietf-mboned-ipv4-mcast-unusable-01
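The segment ID and multicast constraints above can be captured as simple validation checks. A sketch, with the limits taken from the bullets in this section:

```python
import ipaddress

VNI_MIN, VNI_MAX = 5000, 16777215   # valid segment ID range
MAX_VNIS_PER_VC = 10000             # bounded by the dvPortGroup maximum
# /24 blocks that map to MAC addresses flooded by physical switches
FLOODED = [ipaddress.ip_network("239.0.0.0/24"),
           ipaddress.ip_network("239.128.0.0/24")]

def validate_segment_pool(start, end):
    """Check a segment ID pool against the NSX and vCenter limits."""
    if not (VNI_MIN <= start <= end <= VNI_MAX):
        return False
    return (end - start + 1) <= MAX_VNIS_PER_VC

def usable_multicast(addr):
    """True if addr sits in the admin-scoped 239.0.0.0/8 block and
    avoids the ranges that are always flooded."""
    ip = ipaddress.ip_address(addr)
    if ip not in ipaddress.ip_network("239.0.0.0/8"):
        return False
    return not any(ip in net for net in FLOODED)
```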
Install Guest Introspection
- Installing the Guest Introspection service deploys:
- A new VIB on each host in the cluster
- A service VM on each host in the cluster
- Required for Activity Monitoring & some 3rd-party security solutions
- Note: Service VMs cannot be vMotioned to another host
- Supported versions of vCenter and ESXi deployed
- NSX prepared host clusters unless only using GI for Anti-Virus
- NSX Manager and hosts are synchronized to the same NTP source
- (Optional) Configure IP Pool for services VMs
- Go to: Networking & Security > Installation and Upgrade > Service Deployment and click Add
Step 1: Select Services and schedule: Select Guest Introspection & specify schedule
Step 2: Select Clusters
Step 3: Select Storage and Management Network
There are two options for Datastore and Network:
- Pick a Datastore/Network from the drop down
- Select “Specified on host”
If “Specified on host” is chosen, the datastore and network must be configured manually on each host after the deployment completes.
Understand when to use IP Pools vs DHCP for VTEP configuration
IP Addresses for Host preparation (VTEP) may be assigned either from an IP Pool or DHCP.
Where clusters are “striped” across racks (e.g. two hosts per rack) and all inter-rack connectivity is routed (e.g. a Spine/Leaf fabric), the recommended way to assign VTEPs is DHCP, because the IP allocation method is defined at the cluster level in NSX. With the DHCP option, each host in the cluster obtains an IP from the DHCP server configured for its rack (typically via an IP helper address on the top-of-rack switches). An alternative approach is to let DHCP time out and then manually apply the VTEP IPs through the vSphere Web Client or a scripting approach, e.g. PowerShell.
In the more common topology where the ToR switch operates at L2 only, a single transport VLAN can be presented to all racks, allowing all hosts within a cluster to operate on the same subnet. In this case IP address assignment may be done through either DHCP or an IP Pool.
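Where an IP Pool is used, it must be large enough for every VTEP the cluster will create. A quick sizing check (addresses hypothetical; note that multi-uplink teaming policies can create more than one VTEP per host):

```python
import ipaddress

def pool_covers_cluster(pool_start, pool_end, hosts, vteps_per_host=1):
    """Check that a static IP pool range is large enough for a
    cluster's VTEPs.

    With multiple uplinks and a load-balancing teaming policy a host
    may create more than one VTEP, so size for hosts * vteps_per_host.
    """
    start = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    available = int(end) - int(start) + 1  # inclusive address count
    return available >= hosts * vteps_per_host
```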