Security Compliance – Pre-configure your vApp firewall rules inside vCloud Director using NSX DFW (Part 2 of 2)

This is a joint blog series with Daniel Paluszek.

We will be discussing how to achieve security compliance for vCloud Director's future workloads by pre-configuring native NSX DFW rules.

The use case came from a new cloud provider that wants to deploy a specific vApp when they onboard a new tenant (or organization). This vApp will need NSX Distributed Firewall rules in place – it will be the same for every tenant and will need to be secured accordingly to achieve compliance.

 

Daniel discussed a great method of creating an NSX Security Group with dynamic membership that uses the VM name as the dynamic construct. The caveat is that the provider/tenant MUST create the vApps with names that match the dynamic membership criteria created natively in NSX.

 

My method is based on using the Resource Pool ORG-ID instead of the VM name as the construct in the dynamic membership criteria, with the caveat that the provider will have to add each Org that needs such compliance to this security group.

Tenants can pick any name for their workload vApps/VMs and still comply with the pre-enforced firewall rules.

 

 

 

Overall Steps:

  1. Creation of a Security Group with Dynamic Membership
  2. Creation of a Security Policy
  3. Activating Security Policy against Security Group
  4. Creating a vApp that meets the dynamic membership criteria

 

Creation of a Security Group with Dynamic Membership

  1. Navigate to Menu -> Networking and Security -> Service Composer
  2. The first thing we are going to do is create a Security Group whose dynamic membership is based on entity criteria.

 


3. We want to apply this group to any VM inside the Org vDC. For that, we will use the Resource Pool (Org-ID) entity in the dynamic membership criteria.

ORG-IDs are generated for each Org vDC inside vCD. We can import them as vCenter objects into the dynamic Security Group membership by using "Resource Pool" as the entity. Hence, the Org vDC should pre-exist in order to pre-configure the native firewall rules that achieve the required compliance, or we can simply add newly created Org vDCs to the same security group on day 2.
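
If you prefer to script this step instead of (or in addition to) using the UI, the Security Group can also be created through the NSX-v REST API. Below is a minimal sketch in Python with the requests library; the NSX Manager address, credentials, group name, and the resource pool MoRef are placeholders. This variant adds the resource pool as an included member (the Entity-based dynamic criterion shown in the screenshots achieves the same result), and the exact securitygroup XML schema should be validated against the NSX API guide for your release.

```python
# Minimal sketch: create a Security Group that includes an Org vDC's
# resource pool as a member, via the NSX-v REST API.
# NSX Manager address, credentials, group name and the resource pool
# MoRef are placeholders -- look up the MoRef (e.g. "resgroup-123") in vCenter.
import requests

NSX_MGR = "https://nsx-manager.example.com"   # hypothetical NSX Manager address
AUTH = ("admin", "changeme")                  # NSX Manager credentials
ORG_VDC_RESOURCE_POOL = "resgroup-123"        # Org vDC's resource pool MoRef

payload = f"""
<securitygroup>
  <name>Standard-vApp-SG</name>
  <description>Pre-configured DFW compliance group for tenant vApps</description>
  <member>
    <objectId>{ORG_VDC_RESOURCE_POOL}</objectId>
  </member>
</securitygroup>
"""

resp = requests.post(
    f"{NSX_MGR}/api/2.0/services/securitygroup/bulk/globalroot-0",
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only; use proper certificates in production
)
resp.raise_for_status()
print("Created security group:", resp.text)  # returns the new securitygroup-### ID
```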

 

Note that you will have to include every Org vDC you create/created that will need this kind of security compliance. 

 


 

4. Click Finish, and we are off to the next step.

 

Creation of a Security Policy

  1. Let’s click on the Security Policies tab inside of Service Composer and create a new Security Policy –

2. Give it a name – we are using Standard vApp DFW Rules.

3. From here, we can click on Firewall Rules and create our rules. In our example, we are going to let HTTPS traffic in and block everything else. Typically, for micro-segmentation, we would create granular rules to secure all types of traffic; we are using these just as an example.

4. Creating DFW policies is fairly straightforward in the Service Composer –

Activating Security Policy against Security Group

  1. Now, we are ready to apply our newly created policy to our group. Click the Apply button while your newly created policy is selected – 
  2. From the pop-up window, we will select our Standard vApp Rule group as a Selected Object – 
  3. Success – now we can see it has been applied – 
  4. From the DFW view, we can see a new section created with associated DFW rules – this can also be verified through the NSX API, as sketched below –
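
For providers automating tenant onboarding, this result can also be checked programmatically. The sketch below is a minimal illustration in Python with the requests library against the NSX-v DFW API; the NSX Manager address and credentials are placeholders, and the filter assumes the generated section carries the policy name we used (Standard vApp DFW Rules).

```python
# Minimal sketch: confirm that Service Composer created a DFW section
# for our policy. NSX Manager address and credentials are placeholders.
import requests
import xml.etree.ElementTree as ET

NSX_MGR = "https://nsx-manager.example.com"   # hypothetical address
AUTH = ("admin", "changeme")

resp = requests.get(
    f"{NSX_MGR}/api/4.0/firewall/globalroot-0/config",
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()

# List every DFW section and flag the one Service Composer generated
root = ET.fromstring(resp.text)
for section in root.iter("section"):
    name = section.get("name", "")
    marker = "  <-- our policy" if "Standard vApp DFW Rules" in name else ""
    print(name, marker)
```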

 

 

There is no need to go through the specifics of creating a vApp that meets the criteria, as all vApps/VMs that are part of the pre-provisioned Org vDC will automatically inherit the security compliance. However, take note that you will need to add each Org vDC's resource pool to that security group to achieve compliance.
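
If you onboard new Org vDCs regularly, this day-2 step can be scripted as well. Below is a minimal sketch, assuming Python with requests; the security group ID, the new Org vDC's resource pool MoRef, and the NSX Manager details are placeholders, and the member-add endpoint should be double-checked against the NSX-v API guide for your release.

```python
# Minimal sketch: add a newly created Org vDC's resource pool to the
# existing compliance security group (day-2 operation).
# All identifiers below are placeholders.
import requests

NSX_MGR = "https://nsx-manager.example.com"
AUTH = ("admin", "changeme")
SECURITY_GROUP_ID = "securitygroup-10"   # the pre-created compliance group
NEW_ORG_VDC_RP = "resgroup-456"          # resource pool MoRef of the new Org vDC

resp = requests.put(
    f"{NSX_MGR}/api/2.0/services/securitygroup/{SECURITY_GROUP_ID}"
    f"/members/{NEW_ORG_VDC_RP}",
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
print("Added", NEW_ORG_VDC_RP, "to", SECURITY_GROUP_ID)
```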

 

This is not the only path to securing provider-managed VMs for a tenant – check out Daniel's blog post for his approach!

8 Best Practices to Achieve Line-Rate Performance in NSX

Every architect and administrator would love to achieve maximum throughput and, with it, optimum performance out of their overlay and underlay networks.

Line-rate throughput is often the target that architects/admins aim for their workloads to reach. In this blog, I will share the best practices that architects/admins can follow in order to achieve maximum throughput, resulting in optimum performance.

 

So what is line-rate?

Line rate is defined as the actual speed at which bits are sent onto the wire (the physical-layer gross bit rate).

Line rate, or wire speed, means that the workloads (in our case, virtual machines) should be able to push traffic at the link speeds of their respective ESXi hosts' physical NICs.

It is important to note that the maximum achievable throughput will be limited by the hypervisor's physical NIC throughput along with the VM vNIC throughput (E1000/VMXNET3), irrespective of how much throughput your underlay devices (switches/firewalls) support.
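
To put line rate in perspective, it helps to look at the packet rates involved. The short Python sketch below is purely illustrative: it computes how many packets per second a 10 GbE NIC must move to stay at line rate at two MTU sizes, which is also why the offload and jumbo-frame practices below matter.

```python
# Rough back-of-the-envelope: packets per second required to fill a link
# at a given MTU -- illustrates why jumbo frames and offloads matter.
def packets_per_second(link_gbps: float, mtu_bytes: int) -> float:
    bits_per_packet = mtu_bytes * 8
    return (link_gbps * 1e9) / bits_per_packet

for mtu in (1500, 9000):
    print(f"10 GbE at MTU {mtu}: {packets_per_second(10, mtu):,.0f} pkts/s")
# MTU 1500 -> ~833,333 pkts/s ; MTU 9000 -> ~138,889 pkts/s
```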

 

Best Practices To Achieve Line-rate:

 

Best Practice 1: Enable RSS on ESXi hosts (prior to ESXi 6.5)

RSS (Receive Side Scaling) looks at the outer packet headers to make queuing decisions. For traffic going between just two nodes, the only thing that differs in the outer headers is the source port, so this is not really optimal.

Note: Rx Filters, available since ESXi 6.5 as a replacement for RSS, look at the inner packet headers. Hence, queuing is a lot more balanced.

Like RSS, the NIC card needs to support Rx Filters in hardware, and a driver must be available for it to work. If available, Rx Filters are enabled by default. VMware is working on having Rx Filters listed in the I/O compatibility guide.
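
Enabling RSS is done per NIC driver module on each host, so it is worth scripting across a cluster. The sketch below is a rough illustration in Python using paramiko to run the relevant esxcli command over SSH; the host address, credentials, driver module name (ixgbe here) and the RSS value format are all assumptions that depend on your NIC vendor and driver, so check the driver documentation before applying anything.

```python
# Rough sketch: enable RSS on the NIC driver module of an ESXi host.
# Host, credentials, driver module ("ixgbe") and the parameter value
# format are assumptions -- they vary by NIC vendor and driver version.
import paramiko

HOST = "esxi01.example.com"      # hypothetical host
USER, PASSWORD = "root", "changeme"

# For Intel ixgbe-based NICs the parameter is typically "RSS=<queues>";
# other drivers (e.g. Mellanox, QLogic) use different parameter names.
CMD = 'esxcli system module parameters set -m ixgbe -p "RSS=4"'

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

stdin, stdout, stderr = client.exec_command(CMD)
print(stdout.read().decode(), stderr.read().decode())
client.close()

# A host reboot (or driver reload) is required for the module
# parameter change to take effect.
```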

 

Without RSS

 

 

With RSS

Best Practice 2: Enable TSO

 

Using TSO (TCP Segmentation Offload) on the physical and virtual machine NICs improves the performance of ESX/ESXi hosts by reducing the CPU overhead for TCP/IP network operations, leaving the host more CPU cycles to run applications.

If TSO is enabled on the transmission path, the NIC divides larger data chunks into TCP segments; if TSO is disabled, the CPU performs the segmentation for TCP/IP.

TSO is enabled at the physical NIC level. If you are using NSX, make sure to purchase NICs that are capable of VXLAN TSO offload.
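
TSO is usually on by default, but it is worth verifying on each host. The sketch below (again Python with paramiko, with placeholder host and credentials) checks TSO status per physical NIC and the host's hardware TSO advanced setting; the exact option names can vary between ESXi releases, so confirm them for your version.

```python
# Rough sketch: verify TSO per physical NIC and ensure hardware TSO is
# enabled at the host level. Host and credentials are placeholders.
import paramiko

HOST = "esxi01.example.com"
USER, PASSWORD = "root", "changeme"

CHECK_CMDS = [
    "esxcli network nic tso get",                            # TSO status per vmnic
    "esxcli system settings advanced list -o /Net/UseHwTSO"  # hardware TSO (IPv4)
]
ENABLE_CMD = "esxcli system settings advanced set -o /Net/UseHwTSO -i 1"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

for cmd in CHECK_CMDS:
    _, out, _ = client.exec_command(cmd)
    print(f"$ {cmd}\n{out.read().decode()}")

# Uncomment to (re-)enable hardware TSO if it was turned off:
# client.exec_command(ENABLE_CMD)
client.close()
```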


 

Best Practice 3: Enable LRO

 

Enabling LRO (Large Receive Offload) reassembles incoming network packets into larger buffers and transfers the resulting larger but fewer packets to the network stack of the host or virtual machine. The CPU has to process fewer packets when LRO is enabled, which reduces its utilization for networking.

LRO is enabled at the physical NIC level. If you are using NSX, make sure to purchase NICs that are capable of VXLAN LRO offload.
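
As with TSO, LRO can be verified per host and re-enabled if someone turned it off. The sketch below follows the same pattern (Python with paramiko, placeholder host and credentials); the advanced option names shown are the common ones for the host TCP/IP stack and the VMXNET3 path, but confirm them against your ESXi release before changing anything.

```python
# Rough sketch: check the LRO-related advanced settings for the VMXNET3
# path on an ESXi host. Host and credentials are placeholders.
import paramiko

HOST = "esxi01.example.com"
USER, PASSWORD = "root", "changeme"

LRO_OPTIONS = [
    "/Net/TcpipDefLROEnabled",   # LRO for the host TCP/IP stack
    "/Net/Vmxnet3HwLRO",         # hardware LRO for VMXNET3 vNICs
    "/Net/Vmxnet3SwLRO",         # software LRO fallback for VMXNET3 vNICs
]

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

for opt in LRO_OPTIONS:
    _, out, _ = client.exec_command(
        f"esxcli system settings advanced list -o {opt}"
    )
    print(out.read().decode())
    # To enable: esxcli system settings advanced set -o <option> -i 1

client.close()
```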

 


 

Best Practice 4: Use multiple 40Gbps NIC cards on multiple PCIe buses

 

The more NIC bandwidth you have, the fewer bottlenecks you will create. Having multiple PCIe buses helps with higher maximum system bus throughput, lower I/O pin count, and a smaller physical footprint, resulting in better performance.


Best Practice 5: Use MTU 9000 in the underlay

 

To achieve maximum throughput (whether on traditional VLAN or VXLAN), having the underlay support 9K MTU jumbo frames has a huge impact on throughput. This is extremely beneficial when the MTU on the VM itself is set to a corresponding 8900 MTU.
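
On the vSphere side, the MTU has to be raised consistently on the virtual switch and on the VXLAN/VTEP vmkernel interfaces, in addition to the physical underlay. Below is a minimal sketch in the same Python-with-paramiko style; vSwitch0 and vmk1 are placeholder names, and if you run a Distributed Switch the MTU is changed once on the vDS in vCenter rather than per host.

```python
# Rough sketch: raise MTU to 9000 on a standard vSwitch and on the VTEP
# vmkernel interface of one host. Switch/interface names are placeholders;
# the physical underlay must support jumbo frames end to end as well.
import paramiko

HOST = "esxi01.example.com"
USER, PASSWORD = "root", "changeme"

CMDS = [
    "esxcli network vswitch standard set -v vSwitch0 -m 9000",  # vSwitch MTU
    "esxcli network ip interface set -i vmk1 -m 9000",          # VTEP vmkernel MTU
    "esxcli network ip interface list",                         # verify
]

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

for cmd in CMDS:
    _, out, err = client.exec_command(cmd)
    print(f"$ {cmd}\n{out.read().decode()}{err.read().decode()}")

client.close()
```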

 

Best Practice 6: Purchase >= 128 GB of physical memory per host

 

This is a useful best practice for folks who have the NSX Distributed Firewall (DFW) configured. NSX DFW leverages six memory heaps for vSIP (VMware Internetworking Service Insertion Platform), and each of those heaps has more headroom before saturating when more physical memory is available to the host.

Note below that each heap serves a specific part of the DFW functionality.

  • Heap 1: Filters
  • Heap 2: States
  • Heap 3: Rules & Address Sets
  • Heap 4: Discovered IPs
  • Heap 5: Drop Flows
  • Heap 6: Attributes


 

Best Practice 7: Follow the NSX Configuration Maximums guidelines

A good best practice is definitely to stay within the tested configuration maximums.

These guidelines are now publicly published by VMware, and you can find them at the link below:

NSX_Configuration_Maximums_64.pdf


 

 

Best Practice 8: Check the Compatibility Matrix

 

Make sure to check the VMware compatibility matrix for supported NICs:

https://www.vmware.com/resources/compatibility/search.php?deviceCategory=io

The driver and firmware versions should be at the latest release, and the driver/firmware combination should match the supported recipe.


You can also use that matrix to pick NICs that support VXLAN offload features (TSO/LRO).

 

In Summary:

Here is a summary of the best practices that need to be followed in order to achieve line-rate performance within a vSphere environment running NSX:

  1. Enable RSS on ESXi hosts (prior to ESXi 6.5)
  2. Enable TSO
  3. Enable LRO
  4. Use multiple 40Gbps NIC cards on multiple PCIe buses
  5. Use MTU 9000 in the underlay
  6. Purchase >= 128 GB of physical memory per host
  7. Follow the NSX Configuration Maximums guidelines
  8. Check the VMware compatibility matrix