Objective 8.1 – Differentiate single and Cross-vCenter NSX deployments


  1. Understand the benefits/use cases for Cross-vCenter NSX
  2. Contrast single and Cross-vCenter deployment models
  3. Determine the appropriate NSX topology for a given use case
  4. Understand options for ingress and egress traffic flows in a multi-site topology
  5. Describe and differentiate Universal components
    1. Universal Firewall rules
    2. Universal Network and Security objects
    3. Universal Logical Switches
    4. Universal Distributed Logical Routers


  1. NSX Administration Guide
  2. NSX Cross-vCenter Installation Guide


Understand the benefits/use cases for Cross-vCenter NSX

  • Centrally manage multi-vCenter NSX environments
  • Multiple vCenters may be required to:
    • Overcome vCenter Server scaling limits
    • Separate environments, e.g. by business unit, tenant, organization, or environment type
    • Accommodate products that require dedicated or multiple vCenter Server systems, such as Horizon View or Site Recovery Manager
  • Cross-vCenter NSX available from NSX 6.2 onwards
  • Universal objects configured on primary NSX Manager are synchronised across all vCenters in the environment

Cross-vCenter NSX features:

  • Increased span of NSX logical networks

Logical Networks available on all vCenters in the environment

  • Centralized security policy management

Firewall rules are managed centrally and applied to VMs regardless of which vCenter manages them

  • Support for long-distance vMotion across logical switches
  • Enhanced multi-site environment support

Up to 150ms RTT between active-active and active-passive datacenters

Cross-vCenter NSX benefits:

  • Centralized management of universal objects
  • Workload mobility across sites
    • Enhanced disaster recovery capabilities

Contrast single and Cross-vCenter deployment models

  • Single Site:
    • NSX environments are completely separate
    • Logical Switches span a single vCenter only
    • Distributed Firewall Rules apply to a single vCenter
    • Failover between sites requires configurations to be duplicated manually
    • Separate controllers deployed at each site
  • Cross-vCenter:
    • Operates with 2 or more NSX Managers
    • A single (3 node) cluster of Universal Controllers is deployed
    • One NSX Manager is assigned primary and is the master for all Universal objects
    • All other NSX Managers assume the Secondary role
    • Local objects can still be configured on each NSX Manager for items that do not need to be mobile e.g. Perimeter ESGs
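
The primary/secondary relationship above can be sketched as a toy Python model (illustrative only; this is not the NSX API): universal objects are created only on the primary NSX Manager and replicated read-only to all secondaries by the universal sync service.

```python
# Toy model of primary/secondary NSX Manager roles in a Cross-vCenter
# deployment (illustrative only, not NSX code). Universal objects may
# only be created on the primary manager; secondaries hold read-only
# replicas pushed by the universal sync service.

class NsxManager:
    def __init__(self, name, role="secondary"):
        self.name = name
        self.role = role              # "primary" or "secondary"
        self.universal_objects = []   # replicated from the primary

def create_universal_object(manager, secondaries, obj):
    """Create a universal object on the primary and sync it to secondaries."""
    if manager.role != "primary":
        raise PermissionError(
            f"{manager.name} is secondary: universal objects are read-only")
    manager.universal_objects.append(obj)
    for s in secondaries:             # universal sync replicates the object
        s.universal_objects.append(obj)

primary = NsxManager("nsxmgr-site-a", role="primary")
secondary = NsxManager("nsxmgr-site-b")
create_universal_object(primary, [secondary], "universal-ls-web")
print(secondary.universal_objects)    # ['universal-ls-web']
```

Attempting the same call against the secondary raises an error, mirroring the read-only behaviour of universal objects on secondary NSX Managers.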

Determine the appropriate NSX topology for a given use case

  • Use Cross-Site NSX for multi-site resiliency
  • Can be deployed in a single or multiple site configuration
  • Max RTT = 150ms
  • Use a single SSO domain so vCenter can run in Enhanced Linked Mode (ELM), avoiding separate logins to each vCenter
    • NSX Managers can be managed centrally when ELM is configured
  • Can mix Universal and Local objects e.g. Universal for workloads requiring mobility + Local for static workloads
  • Use Universal Logical Routers (ULRs) for Universal VXLANs
  • NSX Edges are always local to a site
  • Only a single Universal Transport Zone can be configured
  • Separate “Global” transport zones can be deployed for local workloads on each NSX Manager separately
  • Universal Security Objects can be used in the Distributed Firewall
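
The transport-zone rules above can be expressed as a small validator (a minimal sketch, not the NSX API): at most one Universal Transport Zone exists per Cross-vCenter environment, alongside any number of local "Global" transport zones.

```python
# Minimal sketch (not NSX code) of the transport zone constraint above:
# only one Universal Transport Zone is allowed per environment, while
# each NSX Manager may have its own local ("Global") transport zones.

def validate_transport_zones(zones):
    """zones: list of (name, is_universal) tuples; raise if >1 universal TZ."""
    universal = [name for name, is_universal in zones if is_universal]
    if len(universal) > 1:
        raise ValueError(
            f"Only one Universal Transport Zone is allowed, got {universal}")
    return universal

zones = [
    ("Universal-TZ", True),
    ("Global-TZ-SiteA", False),   # local TZ on Site-A's NSX Manager
    ("Global-TZ-SiteB", False),   # local TZ on Site-B's NSX Manager
]
print(validate_transport_zones(zones))  # ['Universal-TZ']
```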

Understand options for ingress and egress traffic flows in a multi-site topology

Active-Active with Local Egress

  • North/South traffic flows from either site
  • A separate UDLR appliance is deployed at each site; the Control VM at Site-B is deployed manually

  • Routes learned by each UDLR appliance are associated with the Locale ID associated with that site
  • Locale ID = NSX Manager UUID but can also be set at the cluster level (NSX -> Installation -> Host Preparation)
  • Hosts in Site-A use the egress routes provided by the Site-A UDLR Appliance
  • Hosts in Site-B use the egress routes provided by the Site-B UDLR Appliance

  • Can be used for Active/Active or Active/Passive configurations
    • Usually used for Active/Active
    • For Active/Passive, requires manual intervention to set the workload cluster Locale ID at Site-B to match that of Site-A to force egress from Site-A
    • Upon failover, the Locale of Site-B clusters needs to be updated to match Site-B to force traffic to egress from Site-B
  • Locale ID can be set at:
    • Site Level

Default behaviour

Hosts at a site inherit the Locale ID of that site's NSX Manager

    • Cluster Level

Useful in DR scenarios

    • Host Level

Useful for single vCenter designs where clusters are stretched across two sites

    • UDLR Level

For inheritance of locale ID for static routes
Locale ID must be changed in NSX Manager on the UDLR

    • Static Route Level

Only supported in scenarios with no control VM

Locale ID must be changed in NSX Manager on the specific static route
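
The Local Egress mechanism above can be illustrated with a toy route-filtering model (not NSX code): each host receives only the routes whose Locale ID matches its own, so Site-A hosts egress via the Site-A UDLR appliance and Site-B hosts via the Site-B appliance.

```python
# Toy model (not an NSX implementation) of Local Egress route filtering:
# the controller pushes to each host only the routes tagged with that
# host's Locale ID, which defaults to the local NSX Manager's UUID.

SITE_A = "uuid-site-a"   # hypothetical Locale IDs for illustration
SITE_B = "uuid-site-b"

routes = [
    {"prefix": "0.0.0.0/0", "next_hop": "esg-site-a", "locale_id": SITE_A},
    {"prefix": "0.0.0.0/0", "next_hop": "esg-site-b", "locale_id": SITE_B},
]

def routes_for_host(host_locale_id, routes):
    """Return only the routes whose Locale ID matches the host's."""
    return [r for r in routes if r["locale_id"] == host_locale_id]

print(routes_for_host(SITE_A, routes)[0]["next_hop"])  # esg-site-a
print(routes_for_host(SITE_B, routes)[0]["next_hop"])  # esg-site-b
```

Re-tagging the Site-B clusters with Site-A's Locale ID (the manual failover step described above) would make `routes_for_host` hand Site-B hosts the Site-A egress routes instead.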


Active-Passive (Single-Site Egress)

  • North/South traffic flows from a single site at a time
  • Upon failover (or failback) N/S traffic is switched along with workloads
  • Can be achieved either with Local Egress or standard routing metrics
  • A single UDLR Control VM is deployed at Site-A
    • UDLR status at Site-A = Deployed
    • UDLR status at Site-B = Active
  • Egress traffic is controlled with BGP Weights
    • A higher weight is applied to Site-A, thereby forcing all egress traffic out of Site-A
    • Upon failover, the weights can be reversed to force traffic out of Site-B
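
The weight-based steering above can be sketched as a toy best-path function (standard BGP behaviour, not NSX-specific code): weight is locally significant and the highest weight wins, so giving Site-A a higher weight forces all egress out of Site-A until the weights are swapped.

```python
# Toy model of BGP weight-based egress steering (illustrative only):
# the path with the highest weight is preferred, so Site-A's higher
# weight forces all egress traffic out of Site-A.

def best_path(paths):
    """Pick the path with the highest BGP weight (higher is preferred)."""
    return max(paths, key=lambda p: p["weight"])

paths = [
    {"neighbor": "esg-site-a", "weight": 200},
    {"neighbor": "esg-site-b", "weight": 100},
]
print(best_path(paths)["neighbor"])   # esg-site-a

# Failover: reverse the weights to force egress out of Site-B.
paths[0]["weight"], paths[1]["weight"] = 100, 200
print(best_path(paths)["neighbor"])   # esg-site-b
```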

  • Ingress traffic can be controlled in multiple ways:
    • Configure AS Path Prepend on the physical network to prefer routes from Site-A

When Site-A fails, traffic is automatically re-routed to Site-B

    • Apply BGP filters to ESGs

Requires manual or scripted configuration of the ESGs to prevent Site-B from advertising workload networks

Upon failover, the filtering must be reversed, i.e. routes are advertised from Site-B and filtered at Site-A. This is only needed if the Site-A ESGs remain online

    • Disable Route-Redistribution on the standby Site

Prevents routes from being advertised from Site-B

Results in slightly higher re-convergence time

    • Disconnect standby site ESGs

Disconnect the Transit Network interfaces on the ESGs to prevent routes from being advertised
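
The AS Path Prepend option above can be illustrated with a toy upstream-router model (standard BGP path selection, not NSX-specific): shorter AS paths are preferred, so prepending the local AS to Site-B's advertisements steers ingress to Site-A while Site-A is healthy, with automatic re-routing when Site-A's advertisement is withdrawn.

```python
# Toy model of ingress steering via AS-path prepend (illustrative only):
# an upstream router prefers the advertisement with the shortest AS path,
# so prepending at Site-B makes Site-A the preferred ingress point.

def best_ingress(adverts):
    """Upstream router picks the advertisement with the shortest AS path."""
    return min(adverts, key=lambda a: len(a["as_path"]))

adverts = [
    {"site": "A", "as_path": [65001]},                  # hypothetical local AS
    {"site": "B", "as_path": [65001, 65001, 65001]},    # prepended at Site-B
]
print(best_ingress(adverts)["site"])  # A

# Site-A failure: its advertisement is withdrawn and ingress traffic
# automatically re-routes to Site-B with no reconfiguration.
adverts = [a for a in adverts if a["site"] != "A"]
print(best_ingress(adverts)["site"])  # B
```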

Describe and differentiate Universal components

Universal Firewall rules

  • Only one L2 and one L3 Universal Firewall Rule section permitted
  • Universal rules are automatically synchronised to all NSX Managers
  • Universal rules can be viewed but not modified in Secondary NSX Managers
  • Network and security objects for use in Universal Firewall rules
    • Universal IP Sets
    • Universal MAC Sets
    • Universal Security Groups
      • Membership limited to universal IP sets, universal MAC sets, and other Universal Security Groups
      • No dynamic membership
      • Cannot be created from Service Composer
    • Universal Services
    • Universal Service Groups
  • Distributed Firewall features not supported in cross-vCenter NSX
    • Exclude list
    • SpoofGuard
    • Flow monitoring for aggregate flows
    • Network service insertion
    • Edge Firewall
    • Service Composer
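
The universal section constraint above (one L2 and one L3 universal section) can be sketched as a small validator (illustrative only; not the DFW API):

```python
# Minimal sketch (not the NSX DFW API) of the universal firewall
# section rule above: at most one universal section per layer is
# permitted, one for L2 and one for L3.

def add_universal_section(sections, layer):
    """sections: list of existing universal section layers ('L2'/'L3')."""
    if layer in sections:
        raise ValueError(f"Only one universal {layer} section is permitted")
    sections.append(layer)
    return sections

sections = []
add_universal_section(sections, "L3")
add_universal_section(sections, "L2")
print(sections)  # ['L3', 'L2']
```

Adding a second L2 or L3 universal section raises an error, matching the one-section-per-layer limit.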

Universal Network and Security objects

Objects supported for Universal Firewall rules

  • Source and Destination
    • Universal MAC Set
    • Universal IP Set
    • Universal Security Group

Can contain an IP set, MAC set, or Universal Security Group

    • Universal logical switch
  • Applied To
    • Universal logical switch
    • Distributed Firewall – applies rules on all clusters on which Distributed Firewall is installed
  • Services
    • Pre-created universal services and service groups
    • User created universal services and service groups
  • Universal Security Groups (USGs) can contain the following
    • Universal IP Sets
    • Universal MAC Sets
    • Universal Security Groups
    • Universal Security Tags
    • Dynamic Criteria

Universal Logical Switches

  • Span multiple sites
  • Created when a Logical Switch is added to a Universal Transport Zone
    • Universal transport zone can include any cluster in the environment
  • Universal Segment ID Pool must not overlap with any local Segment ID Pool
    • Note that local ID pools are configured separately in each NSX Manager
  • Use a UDLR to route between Universal Logical Switches
  • Universal -> Local Logical Switch routing must pass through an ESG

Universal Distributed Logical Routers

  • Provide centralized administration and routing configuration spanning all sites
  • Local Egress can only be enabled when the UDLR is created and cannot be changed afterwards
  • Local Egress Locale ID is inherited from the NSX Manager UUID but can be overridden at:
    • Universal logical router
    • Cluster
    • ESXi host
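
The create-time constraint on Local Egress can be sketched as a toy object model (illustrative only; not the NSX API): the flag is fixed when the UDLR is created, so changing it requires redeploying the UDLR.

```python
# Toy sketch (not NSX code) of the UDLR constraint above: Local Egress
# is set at creation time and exposed read-only thereafter; changing it
# would require redeploying the UDLR.

class Udlr:
    def __init__(self, name, local_egress=False):
        self.name = name
        self._local_egress = local_egress   # fixed at creation time

    @property
    def local_egress(self):
        return self._local_egress           # read-only after creation

udlr = Udlr("udlr-01", local_egress=True)
print(udlr.local_egress)  # True
# udlr.local_egress = False  # raises AttributeError: flag is immutable
```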