Objective 5.2 – Configure VXLAN

Principles

  • Determine areas where VXLANs should be configured
  • Understand physical network requirements for virtual topologies with VXLANs
  • Understand how to prepare a vSphere cluster for VXLAN
  • Determine the appropriate teaming policy for a given implementation
  • Understand how to configure and modify the options of a Transport Zone
  • Understand how to prepare VXLAN Tunnel End Points (VTEPs) on vSphere clusters

References

  1. NSX Administration Guide

http://pubs.vmware.com/NSX-62/topic/com.vmware.ICbase/PDF/nsx_62_admin.pdf

  2. NSX Installation Guide

https://pubs.vmware.com/NSX-62/topic/com.vmware.ICbase/PDF/nsx_62_install.pdf

  3. NSX Cross-vCenter Installation Guide

http://pubs.vmware.com/NSX-62/topic/com.vmware.ICbase/PDF/nsx_62_cross_vc_install.pdf

Determine areas where VXLANs should be configured

VXLANs should be configured where the L2 boundary of existing networks needs to be stretched without making changes to the underlying physical network infrastructure. For example, a VXLAN can be used to stretch an L2 segment across multiple vCenter clusters without the need to provision additional VLANs. Similarly, VXLANs can be used in a Cross-vCenter setup to stretch an L2 segment across geographical boundaries, providing full workload mobility.

Understand physical network requirements for virtual topologies with VXLANs

  • MTU >= 1600
  • For Hybrid Mode: Enable IGMP Snooping and configure an IGMP querier on the transport network
  • For Multicast Mode: configure the transport network to support PIM
  • Ensure appropriate firewall rules are in place to permit traffic flow between the various NSX components and the Physical ESXi Hosts.
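The MTU >= 1600 requirement follows from the VXLAN encapsulation overhead: each encapsulated guest frame carries its inner Ethernet header plus outer IPv4, UDP, and VXLAN headers, roughly 50 additional bytes. A minimal sketch of the arithmetic:

```python
# Headers added between the underlay IP MTU and the guest's payload:
INNER_ETHERNET = 14  # encapsulated (inner) Ethernet header
OUTER_IPV4     = 20  # outer IPv4 header
OUTER_UDP      = 8   # outer UDP header
VXLAN_HEADER   = 8   # VXLAN header (flags + 24-bit VNI)

def required_underlay_mtu(guest_mtu=1500, inner_vlan_tag=False):
    """Minimum transport-network MTU to carry a guest frame unfragmented."""
    overhead = INNER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
    if inner_vlan_tag:
        overhead += 4  # 802.1Q tag carried inside the encapsulated frame
    return guest_mtu + overhead

print(required_underlay_mtu())  # 1550 for a standard 1500-byte guest MTU
```

A 1500-byte guest MTU therefore needs at least 1550 bytes on the underlay; NSX's 1600-byte requirement leaves headroom for inner VLAN tags and future header options.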

Understand how to prepare a vSphere cluster for VXLAN

  • Ensure all ESXi hosts are resolvable in DNS by NSX Manager and Controller (forward and reverse)
  • Configure Transport Zone and add clusters to it
  • Configure a Segment ID Pool
  • From NSX -> Installation -> Host Preparation: select “Install” for each cluster that requires VXLAN
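Host preparation can also be driven through the NSX Manager REST API instead of the vSphere Web Client. The sketch below builds the XML body for the `POST /api/2.0/nwfabric/configure` call as documented for NSX 6.2; the cluster MoRef `domain-c7` is a placeholder for your own cluster ID:

```python
import xml.etree.ElementTree as ET

def host_prep_body(cluster_moref):
    """Build the nwFabricFeatureConfig payload that installs the NSX VIBs
    on every host in the given vCenter cluster (MoRef, e.g. 'domain-c7')."""
    root = ET.Element("nwFabricFeatureConfig")
    resource = ET.SubElement(root, "resourceConfig")
    ET.SubElement(resource, "resourceId").text = cluster_moref
    return ET.tostring(root, encoding="unicode")

# 'domain-c7' is a hypothetical cluster MoRef for illustration
print(host_prep_body("domain-c7"))
```

The same endpoint is reused later (with a VXLAN feature spec) to configure VTEPs, so keeping the payload construction in one helper is convenient.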

Determine the appropriate teaming policy for a given implementation

The following table shows the teaming options available to NSX and level of support.

| Teaming and Failover Mode                    | NSX Support | Multi-VTEP Support | Uplink Behaviour (2 x 10G) |
|----------------------------------------------|-------------|--------------------|----------------------------|
| Route Based on Originating Port              | Y           | Y                  | Both active                |
| Route Based on Source MAC Hash               | Y           | Y                  | Both active                |
| LACP                                         | Y           | N                  | Flow based                 |
| Route Based on IP Hash (Static EtherChannel) | Y           | N                  | Flow based                 |
| Explicit Failover Order                      | Y           | N                  | Only one link active       |
| Route Based on Physical NIC Load (LBT)       | N           | N                  | N/A                        |

  • LBT is the only unsupported option
  • The selected teaming option must be the same for all hosts connected to a given vDS
  • The teaming option selected for the transport network must be the same for all clusters
  • LACP is discouraged because it tightly couples the host teaming configuration to the physical network and complicates Edge routing (e.g. peering over a vPC)
  • Explicit Failover is simple to configure and operate, at the expense of leaving unused links in standby mode
  • Explicit Failover mode results in 1 x VTEP per Uplink Port
  • vPC/MLAG results in a single VTEP because port channel is seen as a single entity by the host

  • NSX Manager automatically creates one VTEP per available physical NIC when “Route Based on Originating Port” or “Route Based on Source MAC Hash” is chosen as the VMKNic teaming policy

  • For Edge Clusters, the recommended teaming policy is “Route based on originating port”
  • Selecting LACP or EtherChannel imposes design restrictions on routing adjacency design

  • Peering over vPC only supported in certain cases e.g. Cisco Nexus 7K r7.2 or Nexus 3K (BGP)
  • Peering over non-vPC or static EtherChannel does not impose this restriction
  • “Route Based on Originating Port” is therefore the recommended teaming policy for all clusters in an NSX environment – Compute/Workload and Edge
  • vPC/MLAG restricts design choices; if it is used, the teaming policy should be “Static EtherChannel”
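The teaming guidance above can be condensed into a small lookup that mirrors the table: given a vDS teaming mode, it records whether NSX supports it and whether it yields one VTEP per uplink. A sketch:

```python
# (nsx_supported, multi_vtep) per vDS teaming mode, mirroring the table above
TEAMING = {
    "Route Based on Originating Port":  (True,  True),
    "Route Based on Source MAC Hash":   (True,  True),
    "LACP":                             (True,  False),
    "Route Based on IP Hash":           (True,  False),  # static EtherChannel
    "Explicit Failover Order":          (True,  False),
    "Route Based on Physical NIC Load": (False, False),  # LBT: unsupported
}

def vtep_count(mode, physical_nics):
    """VTEPs NSX Manager creates: one per uplink for multi-VTEP modes, else one."""
    supported, multi = TEAMING[mode]
    if not supported:
        raise ValueError(f"{mode} is not supported for VXLAN transport")
    return physical_nics if multi else 1

print(vtep_count("Route Based on Originating Port", 2))  # 2
print(vtep_count("LACP", 2))                             # 1
```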

Understand how to configure and modify the options of a Transport Zone

See objective 4.4

Understand how to prepare VXLAN Tunnel End Points (VTEPs) on vSphere clusters

  1. All hosts in a cluster must be connected to a common vDS
  2. NSX Manager is installed
  3. NSX Controllers are installed unless using Multicast as the control plane
  4. Decide on uplink NIC teaming policy – use the same teaming policy throughout
  5. Plan IP Addressing – DHCP or IP Pool
    • To assign specific IPs, use DHCP with static MAC address reservations
    • Alternatively, edit the IP directly on the vmk port after it has been created
  6. Assign transport network VLAN ID. VLAN 0 means frames will go untagged
  7. After hosts have been prepared, select Installation -> Host Preparation and click “Not Configured” in the VXLAN column
  8. Select the appropriate vSwitch, VLAN, MTU (default=1600, min=1572), IP Addressing scheme and Teaming Policy
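The steps above can also be performed via the REST API with a VXLAN feature spec on the same `POST /api/2.0/nwfabric/configure` endpoint. The sketch below follows the `clusterMappingSpec` element names from the NSX 6.2 API guide; all object IDs (`domain-c7`, `dvs-26`, `ipaddresspool-1`) are placeholders, and the exact schema should be checked against your NSX version:

```python
import xml.etree.ElementTree as ET

def vxlan_config_body(cluster, vds, vlan_id=0, vmknic_count=1, ip_pool=None):
    """Payload that configures VTEPs on an already-prepared cluster.
    cluster/vds/ip_pool are vCenter/NSX object IDs (placeholders here)."""
    root = ET.Element("nwFabricFeatureConfig")
    ET.SubElement(root, "featureId").text = "com.vmware.vshield.vsm.vxlan"
    res = ET.SubElement(root, "resourceConfig")
    ET.SubElement(res, "resourceId").text = cluster
    spec = ET.SubElement(res, "configSpec", {"class": "clusterMappingSpec"})
    switch = ET.SubElement(spec, "switch")
    ET.SubElement(switch, "objectId").text = vds
    ET.SubElement(spec, "vlanId").text = str(vlan_id)        # 0 = untagged
    ET.SubElement(spec, "vmknicCount").text = str(vmknic_count)
    if ip_pool:                                              # omit to use DHCP
        ET.SubElement(spec, "ipPoolId").text = ip_pool
    return ET.tostring(root, encoding="unicode")

print(vxlan_config_body("domain-c7", "dvs-26", ip_pool="ipaddresspool-1"))
```

Note how the choices from steps 5 and 6 map directly onto the payload: omitting the IP pool falls back to DHCP addressing, and `vlanId=0` sends VTEP traffic untagged.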