What is the Virtual Any Cloud Network?

The Virtual Any Cloud Network Vision

 

 


The network of the future is software-defined. A Virtual Cloud Network, built on VMware NSX technology, is a software layer spanning from the on-prem data center to any cloud, any branch, and any infrastructure.

VMware NSX is the foundation of the Virtual Cloud Network, which consists of: NSX Data Center, NSX Cloud, AppDefense, NSX SD-WAN by VeloCloud, and NSX Hybrid Connect.

 

In this blog, I will focus on the NSX Cloud piece to demonstrate its key features and benefits.

 

NSX Cloud

 

NSX Cloud is a secure, consistent foundation that simplifies managing the networking and security of your workloads living on virtually any cloud, including your on-prem private cloud and/or the hyper-scale public clouds (AWS, Azure, and GCP).

It solves the challenges Cloud Architects face when dealing with public clouds: lack of real-time visibility into cloud traffic flows, inconsistent security policy across hybrid clouds, compliance with enterprise security policies, and the inability to leverage existing operational tools.

It also opens a new line of revenue for Cloud Service Providers, who can use NSX Cloud to offer managed security services for tenant workloads living on any cloud.

 

 

So What Are The Benefits of NSX Cloud in The Virtual Cloud Network?

1. Consistent Security For Apps Running On Any Cloud:

The main benefit, in my opinion, is the ability to enforce firewall security rules from your on-prem NSX Manager on workloads living on-premises, in Azure, AWS, and/or Google Cloud Platform, with the same firewall rule applied across all clouds.

NSX Cloud brings networking and security capabilities to endpoints across multiple clouds. By integrating with NSX Data Center (deployed on-premises), it enables networking and security management across clouds and data center sites using the same NSX Manager deployed on-prem.

Security policy is automatically applied and enforced based on instance attributes and user-defined tags. Policies automatically follow instances as they move within and across clouds. Dynamic policies can be based on:

  • VM attributes in the cloud, e.g., VNET ID, Region, Name, Resource Group, etc.
  • Customer-defined cloud resource tags

 


 

Example:

A 3-tier application (Web VMs / App VMs / DB VMs) consists of:

  • Multiple Web servers spread across multiple hyperscale clouds (AWS/Azure/GCP). All of these servers will be tagged "Web" using their respective native cloud tags.
  • An App server running as a Kubernetes container on-premises, tagged "App".
  • A DB tier consisting of multiple VMs living on-premises, tagged "DB".

All of the above 3-tier app workloads will also carry a unique tag that differentiates the application (in this case the FE/APP/DB of website X). We will use this tag in the Applied To field of the DFW rules.

Web servers will need to communicate with the App servers on port TCP 8443.

App containers will need to communicate with the DB servers on MySQL's port, TCP 3306.

Anyone should be able to communicate with the Web servers on HTTPS, TCP 443.

Three firewall rules, created on the on-prem NSX Manager, will enforce the above security posture using security-tag constructs across ALL of the leveraged clouds.
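To make the tag-driven model concrete, here is a minimal sketch (not the NSX API, just plain data structures) expressing the three rules above and evaluating them the way a tag-based firewall would. The "WebsiteX" applied-to tag is a hypothetical placeholder for the unique application tag.

```python
# The three cross-cloud DFW rules from the example, expressed as data.
# Tag names ("Web", "App", "DB") come from the 3-tier example above;
# "WebsiteX" is an assumed placeholder for the unique application tag.
RULES = [
    {"name": "any-to-web", "source": "any",     "destination": "tag:Web",
     "service": "TCP/443",  "action": "allow",  "applied_to": "tag:WebsiteX"},
    {"name": "web-to-app", "source": "tag:Web", "destination": "tag:App",
     "service": "TCP/8443", "action": "allow",  "applied_to": "tag:WebsiteX"},
    {"name": "app-to-db",  "source": "tag:App", "destination": "tag:DB",
     "service": "TCP/3306", "action": "allow",  "applied_to": "tag:WebsiteX"},
]

def allowed(src_tag: str, dst_tag: str, service: str) -> bool:
    """Return True if some rule permits this (source tag, dest tag, service)."""
    for rule in RULES:
        if (rule["source"] in ("any", f"tag:{src_tag}")
                and rule["destination"] == f"tag:{dst_tag}"
                and rule["service"] == service):
            return rule["action"] == "allow"
    return False  # implicit deny

print(allowed("Web", "App", "TCP/8443"))  # True
print(allowed("Web", "DB", "TCP/3306"))   # False: web may not reach DB directly
```

Because the rules match on tags rather than IPs, the same three rules hold whether a given Web server lives in AWS, Azure, or GCP.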


 

2. Manageability and Visibility Across All Public Clouds

 

From the NSX Cloud Service Manager “CSM”, you will have the option to add all your Public Cloud accounts (AWS/Azure/GCP).

Once you add all your public cloud accounts, you will be able to view all of the accounts, regions, and VPCs/VNETs you own, along with their respective workloads, from the same user interface. Think of it as a single pane of glass across all accounts and all public clouds.


This will simplify the manageability of your workloads and save a lot of time in troubleshooting and capacity planning.

 


3. Real-Time Operations Monitoring With DFW Syslog and L3SPAN in Public Clouds

 

From an operational perspective, you want to make sure all of your security groups are active and enforcing. Since the firewall and security policies are now managed via the NSX Manager, you have the option to use syslog, with logs that look very similar to those created by the hypervisor.

You can collect the data and send it to your syslog collector, say Splunk, and run data analysis on it.

This will provide statistics on which VMs are talking to each other, which ports are permitted/denied, and so on.
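As a sketch of the kind of analysis you can run once the logs land in a collector, the snippet below counts permitted/denied flows per destination port. The log line format here is illustrative only, not the exact DFW packet-log format.

```python
# Hedged sketch: summarize DFW-style syslog lines before/after shipping
# them to a collector such as Splunk. The format below is illustrative.
import re
from collections import Counter

LOG_RE = re.compile(r"(?P<action>PASS|DROP)\s+(?P<proto>TCP|UDP)\s+"
                    r"(?P<src>[\d.]+)/\d+->(?P<dst>[\d.]+)/(?P<dport>\d+)")

def summarize(lines):
    """Count flows per (action, destination port) pair."""
    stats = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if m:
            stats[(m["action"], m["dport"])] += 1
    return stats

sample = [
    "dfw: PASS TCP 10.0.0.5/41000->10.0.1.9/443",
    "dfw: PASS TCP 10.0.0.6/41001->10.0.1.9/443",
    "dfw: DROP TCP 10.0.0.7/41002->10.0.1.9/3306",
]
print(summarize(sample))  # counts per (action, port)
```

A spike in DROP counts on a port you expect to be open is exactly the kind of signal this surfaces.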

 


 

 

Real-time operations visibility can also be complemented by using L3SPAN in public clouds, per the below flow:

  • Enable NSX L3 SPAN/mirror on a per-logical-port or per-logical-switch basis using the port-mirroring switching profile.
  • The NSX agent running inside the VM captures the traffic.
  • The mirrored traffic is forwarded over GRE to an IP destination (collector) within the VPC.
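Since the mirrored traffic arrives at the collector GRE-encapsulated, the collector (or any custom tooling in front of it) has to peel the GRE header first. A minimal sketch of decoding a standard GRE base header (RFC 2784/2890), assuming the mirrored frame is carried as transparent Ethernet bridging:

```python
# Decode the 4-byte GRE base header on mirrored packets: flag bits,
# then the protocol type of the encapsulated (mirrored) frame.
import struct

def parse_gre_header(payload: bytes):
    """Return (has_checksum, has_key, has_seq, protocol_type) from a GRE header."""
    flags_ver, proto = struct.unpack("!HH", payload[:4])
    return (
        bool(flags_ver & 0x8000),  # C bit: checksum present
        bool(flags_ver & 0x2000),  # K bit: key present
        bool(flags_ver & 0x1000),  # S bit: sequence number present
        proto,                     # 0x6558 = transparent Ethernet bridging
    )

# A base GRE header carrying a mirrored Ethernet frame (no optional fields):
hdr = struct.pack("!HH", 0x0000, 0x6558)
print(parse_gre_header(hdr))
```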

 


 

4. Full Security Compliance with the Quarantine Option

 

You will have the option of full policy-enforcement compliance, where VMs created in the public cloud that are not managed by NSX are quarantined.

This is done via multi-layered security, which provides two independent security perimeters: primary security through the NSX firewall, and a second layer through the native public cloud security groups, which NSX utilizes.

Note: this is optional. You can still have a dev environment with both NSX-managed and unmanaged workloads without enforcing quarantine.

 


 

Note: as of the publishing date of this blog, the NSX-T 2.2 GA version fully supports workloads on-prem and on Azure.

Upcoming NSX-T Cloud versions will support AWS and other public clouds.

In Summary

 

The virtual any cloud network is the next generation of cloud networking. Cloud admins can now utilize NSX Cloud to gain full visibility into cloud traffic flows, while security admins enforce consistent security policies across all clouds.

 

Cloud Service Providers can utilize NSX Cloud to offer managed services for their tenant workloads anywhere, creating a new line of revenue.

 

In my next post, I will discuss the CSM in further detail, along with a deep dive into the architecture of NSX Cloud.

 

8 Best Practices to Achieve Line-Rate Performance in NSX

Every architect and administrator would love to achieve maximum throughput, and hence optimum performance, out of their overlay and underlay networks.

Line-rate throughput is typically what each architect/admin aims for their workloads to achieve. In this blog, I will share the best practices architects/admins can follow in order to reach maximum throughput, resulting in optimum performance.

 

So what is line-rate?

Line rate is defined as the actual speed with which bits are sent onto the corresponding wire (the physical-layer gross bit rate).

Line rate, or wire speed, means that the workloads (in our case virtual machines) are supposed to be able to push traffic at the link speed of their respective ESXi hosts' physical NICs.

It's important to note that the maximum achievable throughput will be limited by the hypervisor's physical NIC throughput along with the VM vNIC throughput (E1000/VMXNET3), irrespective of how much throughput your underlay devices (switches/firewalls) support.

 

Best Practices To Achieve Line-rate:

 

Best Practice 1: Enable RSS on ESXi hosts (prior to ESXi 6.5)

RSS (Receive Side Scaling) looks at the outer packet headers to make queuing decisions. For traffic going between just two nodes, the only thing that differs in the outer headers is the source port, so this is not really optimal.

Note: Rx Filters, available since ESXi 6.5 as a replacement for RSS, look at the inner packet headers, so queuing is a lot more balanced.

Like RSS, NIC cards need to support Rx Filters in hardware and have a supporting driver for them to work. If available, Rx Filters are enabled by default. VMware is working on having Rx Filters listed on the I/O compatibility guide.

 

Without RSS

 

 

With RSS

Best Practice 2: Enable TSO

 

Using TSO (TCP Segmentation Offload) on the physical and virtual machine NICs improves the performance of ESX/ESXi hosts by reducing the CPU overhead of TCP/IP network operations, freeing the host to spend more CPU cycles running applications.

If TSO is enabled on the transmission path, the NIC divides larger data chunks into TCP segments; if TSO is disabled, the CPU performs the segmentation for TCP/IP.

TSO is enabled at the physical NIC. If you are using NSX, make sure to purchase NIC cards capable of VXLAN TSO offload.
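To get a feel for how much slicing work TSO takes off the CPU, here is a back-of-the-envelope calculation (assuming a simplified 40-byte IP+TCP header overhead) of how many segments a single 64 KB send becomes:

```python
# Back-of-the-envelope: how many TCP segments one 64 KB send becomes.
# With TSO, the NIC does this slicing; without it, the CPU does.
import math

def segments(chunk_bytes: int, mtu: int, ip_tcp_overhead: int = 40) -> int:
    """Number of TCP segments needed for one chunk at a given MTU."""
    mss = mtu - ip_tcp_overhead  # max segment size (payload per packet)
    return math.ceil(chunk_bytes / mss)

print(segments(65536, 1500))  # 45 segments at a standard 1500 MTU
print(segments(65536, 8900))  # 8 segments with jumbo frames
```

Every one of those segments the NIC produces is a segment the CPU did not have to build itself.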


 

Best Practice 3: Enable LRO

 

Enabling LRO (Large Receive Offload) reassembles incoming network packets into larger buffers and transfers the resulting larger but fewer packets to the network stack of the host or virtual machine. The CPU has to process fewer packets when LRO is enabled, which reduces its utilization for networking.

LRO is enabled at the physical NIC. If you are using NSX, make sure to purchase NIC cards capable of VXLAN LRO offload.

 


 

Best Practice 4: Use multiple 40Gbps NIC cards with multiple PCIe buses

 

The more NIC bandwidth you have, the fewer bottlenecks you create. Multiple PCIe buses help with higher maximum system-bus throughput, lower I/O pin count, and a smaller physical footprint, resulting in better performance.


Best Practice 5: Use MTU 9000 in the underlay

 

To achieve maximum throughput (whether on traditional VLAN or VXLAN), having the underlay support 9K-MTU jumbo frames has a huge impact on throughput. This is especially beneficial when the VM itself has a corresponding MTU of 8900.
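A quick way to see why jumbo frames help is to compute per-frame payload efficiency. The numbers below are a simplification (a flat 40-byte IP+TCP overhead and an assumed ~50-byte VXLAN encapsulation), but the trend is what matters:

```python
# Rough payload efficiency per frame: standard vs. jumbo MTU, with and
# without an assumed ~50-byte VXLAN encapsulation. Illustrative only.
def efficiency(mtu: int, ip_tcp: int = 40, vxlan: int = 0) -> float:
    """Fraction of each frame that is application payload."""
    payload = mtu - ip_tcp - vxlan
    return payload / mtu

print(f"1500 MTU: {efficiency(1500):.1%}")                    # ~97.3%
print(f"9000 MTU: {efficiency(9000):.1%}")                    # ~99.6%
print(f"9000 MTU + VXLAN: {efficiency(9000, vxlan=50):.1%}")  # ~99.0%
```

The per-frame gain looks small, but at 9000 MTU the host also processes roughly six times fewer frames for the same data, which is where the real CPU savings come from.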

 

Best Practice 6: Purchase >= 128 GB of physical memory per host

 

This is a useful best practice for those who have the NSX Distributed Firewall (DFW) configured. NSX DFW leverages six memory heaps for vSIP (VMware Internetworking Service Insertion Platform), and each of those heaps can scale more efficiently with more physical memory available to the host.

Note below that each heap serves a specific part of the DFW functionality.

  • Heap 1: Filters
  • Heap 2: States
  • Heap 3: Rules & Address Sets
  • Heap 4: Discovered IPs
  • Heap 5: Drop Flows
  • Heap 6: Attributes


 

Best Practice 7: Follow NSX maximum Guidelines

A good best practice is definitely to follow the maximum tested guidelines.

These guidelines are now publicly published by VMware, and you can find them via the link below:

NSX_Configuration_Maximums_64.pdf


 

 

Best Practice 8: Compatibility Matrix

 

Make sure to check the VMware compatibility matrix for supported NICs:

https://www.vmware.com/resources/compatibility/search.php?deviceCategory=io

The driver and firmware versions should be on the latest release, and the recipe should match.


You can also use that matrix to pick NICs that support the VXLAN offload features (TSO/LRO).

 

In Summary:

Here is a summary of the best practices to follow in order to achieve line-rate performance within a vSphere environment running NSX:


Fixed: vCloud Director 8.20 IPsec tunnel issues after upgrading to NSX 6.3.x from prior NSX versions


Some SPs reported losing IPsec tunnels on their vCD edges after upgrading from NSX 6.2.x to NSX 6.3.x+ with the below error:

“Sending notification NO_PROPOSAL_CHOSEN to “IP address” 500, Oakley Transform [OAKLEY AES CBC (256), OAKLEY SHA1, OAKLEY GROUP MODP1024] refused due to strict flag, no acceptable Oakley Transform, responding to Main Mode.”

 

After investigating this internally with Engineering, we found that the reason is that NSX changed the default Diffie-Hellman group from DH-2 to DH-14.

Diffie-Hellman is a key-exchange protocol used as part of the negotiation of a secure connection.

The change was made by the NSX team for obvious security reasons. However, it broke the IPsec tunnels on vCD edges that are not aware of the change, because vCD 8.20 keeps its own database.

 

The temporary workaround?

To work around the issue, the admin will have to change the DH group manually from DH-14 to DH-2 from either the NSX UI or the vCD UI. Note that each time you redeploy the vCD edge, you will have to change the DH group back to 2, as the config will be overridden by the service config in the vCD database.
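If you have many edges, the per-redeploy manual step can be scripted. Below is a hedged sketch that rewrites every dhGroup element in an edge IPsec config document; the XML shape is a simplified assumption of what the NSX-v edge IPsec config (GET/PUT on the edge's ipsec/config endpoint) returns, so verify element names against your NSX API guide before relying on it.

```python
# Hedged sketch: rewrite every <dhGroup> in an edge IPsec config document.
# The XML shape below is a simplified assumption, not the exact NSX schema.
import xml.etree.ElementTree as ET

def set_dh_group(ipsec_xml: str, group: str = "dh2") -> str:
    """Rewrite every <dhGroup> element in an edge IPsec config document."""
    cfg = ET.fromstring(ipsec_xml)
    for dh in cfg.iter("dhGroup"):  # one per IPsec site
        dh.text = group
    return ET.tostring(cfg, encoding="unicode")

# Simplified, assumed shape of the edge IPsec config:
sample = """
<ipsec>
  <sites>
    <site><name>site-1</name><dhGroup>dh14</dhGroup></site>
    <site><name>site-2</name><dhGroup>dh14</dhGroup></site>
  </sites>
</ipsec>
"""
fixed = set_dh_group(sample)
# The fixed document would then be PUT back to the edge's ipsec/config URL.
```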


 

 

The Permanent Fix:

The permanent fix is in vCD 9.0, as NSX becomes the source of truth in 9.0 even for non-advanced edges. With 9.0, the service config is no longer used, and a redeploy will maintain the state the edge had in NSX.

If you can’t upgrade to vCD 9.0, you can request a hot patch from GSS for 8.20 that will set the vCD edges to DH-2.

 

Important to note if you haven’t upgraded NSX yet:

 

If you are ready to upgrade NSX to 6.3+ and are still on vCD 8.2, requesting and applying this vCD hot patch prior to the NSX upgrade will reduce downtime and manual work.

 

Update From Engineering and GSS:

vCD version 8.20.0.3 will include the hot patch described above. The release is still being tested and will be launched after it gets the green light from QA.

Comparing Centralized Firewalls to the NSX Distributed Firewall (DFW) – The apples-to-oranges comparison

Solution architects, and often security engineers, design data centers to achieve the highest level of security with the highest performance possible. Often, firewalls are installed and configured to protect workloads from unauthorized access and to comply with security policies. VMware introduced the NSX distributed firewall concept, which changed the centralized mindset and raised the firewall component to a completely different level.

Although comparing centralized and distributed firewall architectures and capabilities is like comparing apples to oranges, architects and network admins often request such a comparison to help visualize the new mindset VMware NSX DFW brought into the game.

In the next series of blogs, I will show you how NSX DFW compares to traditional centralized firewalls (the apples-to-oranges comparison). I will also share the best practices for achieving line-rate performance/throughput when using NSX DFW, along with the results of the performance testing.

So how do Centralized and Distributed Firewalls compare?

Traditionally, firewalls were centralized: typically physical boxes that process packets and make allow/drop decisions based on pre-configured rules. Traffic is typically hair-pinned to those firewall boxes for processing.

The VMware NSX Distributed Firewall, often called DFW, introduced a new concept by distributing the firewall capability across all compute hypervisors, without the need for traffic to exit to another hop for allow/drop decision processing.

Traditional FWs often need the packets sourced/destined to be filtered via the firewall box itself. Hence, for large data centers, firewall throughput is a key concern with respect to bottlenecks in data processing. Scaling a centralized firewall is often challenging whenever data center traffic exceeds the box's limit: network/security admins need to purchase additional firewalls to cascade with the existing ones, or often a rip-and-replace is needed to accommodate the new throughput demands.

NSX DFW changes the centralized firewall concept and introduces a new perspective in the architectural design of firewalls. With NSX DFW, the security team can protect workloads at the virtual machine's vNIC level. With rules processed at the vNIC, the decision to allow or drop a packet sourced from a DFW-protected VM is made even before the packet exits the hypervisor the VM lives on.


Traditional FW technologies are fixed based on the initial purchase of technology (i.e., a 40Gbps FW)

Compared to…

NSX, which scales with the number of ESXi hosts that already exist in your environment running the VM workloads

Therefore, when we talk about scaling –

  • Traditional FW technologies will require a rip/replace or physical upgrade to mitigate performance bottlenecks/hair-pinning, along with a potential architecture redesign.
  • VMware NSX, by comparison, linearly adds performance as you add ESXi hosts to scale VM workloads; not to mention that the ESXi hosts already exist in your data center (lower CAPEX).

 


What is the most powerful differentiator? 

One of the most powerful features of NSX DFW, in my opinion, is the ability to create firewall rules based on static and dynamic membership criteria. The Security Group construct is introduced as an extremely efficient tool for implementing security policies or firewall rules based on the security groups defined. Security Groups can be leveraged to create either static or dynamic inclusion-based rules.

Static inclusion provides the capability to manually include particular objects in security groups, such as a specific Cluster, Logical Switch, vApp, Data Center, IP Set, Active Directory Group, MAC Set, Security Tag, vNIC, VM, Resource Pool, or DVS Port Group.


Dynamic inclusion criteria include Computer OS Name, Computer Name, VM Name, Security Tag, and Entity.


For instance, you can create a firewall rule that allows HTTPS access to all VMs that have the word “web” in their VM name. Or you can create firewall rules based on security tags, where a tag can be associated with a specific tenant's workloads in the service-provider world.
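The essence of dynamic inclusion can be sketched in a few lines: membership is computed from VM attributes at evaluation time, so the group automatically picks up renamed or newly created VMs. The inventory below is a hypothetical example, not NSX data structures.

```python
# Sketch of dynamic-inclusion semantics: group membership is recomputed
# from VM attributes, so rules follow renamed/new VMs automatically.
import fnmatch

def dynamic_members(vms, name_pattern=None, security_tag=None):
    """VMs matching a VM-name glob pattern and/or carrying a security tag."""
    members = []
    for vm in vms:
        if name_pattern and not fnmatch.fnmatch(vm["name"].lower(), name_pattern):
            continue
        if security_tag and security_tag not in vm.get("tags", ()):
            continue
        members.append(vm["name"])
    return members

inventory = [
    {"name": "web-01", "tags": ["TenantA"]},
    {"name": "web-02", "tags": ["TenantB"]},
    {"name": "db-01",  "tags": ["TenantA"]},
]
print(dynamic_members(inventory, name_pattern="*web*"))    # ['web-01', 'web-02']
print(dynamic_members(inventory, security_tag="TenantA"))  # ['web-01', 'db-01']
```

A rule applied to the "*web*" group would automatically cover a "web-03" VM the moment it is created, with no rule change required.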

 

Of course, the configured FW rules move with the VM as it vMotions across NSX-prepared hosts!

 

In Summary:

 

 


Traditional FW Technologies vs. VMware NSX DFW:

  • CLI-centric FW model vs. a firewall distributed at the hypervisor level
  • Hair-pinning vs. mitigation of hair-pinning thanks to in-kernel decision processing rather than a centralized model
  • Static configuration vs. dynamic, API-based configuration
  • IP address-based rules vs. static and dynamic firewall constructs, including VM name, vCenter objects, and identity-based rules
  • Fixed throughput per appliance (i.e., 40Gbps) vs. line rate: ~20 Gbps per host (with 2 x 10 Gbps pNICs) or ~80 Gbps per host (with 2 x 40 Gbps NIC cards and MTU 8900)
  • Lack of visibility into encapsulated traffic vs. full visibility into encapsulated traffic

 

 

 

In my next blogs, I will show you the testing done on NSX DFW throughput and the best practices to achieve LINE-RATE performance.

 

 

Big thank you to my peer Daniel Paluszek for motivating me to start blogging and for giving me feedback on this post. You can follow his amazing blog here.