
The release and support of VXLAN has generated a great deal of interest in the community. As one of the first to deliver content on VXLAN implementation examples, I've been approached by customers and colleagues with questions about VXLAN architecture designs and use cases. For the most part, I see a large audience that is not really up to speed on how VXLAN relates to vSphere and vCloud infrastructures and their supported implementation scenarios. The knowledge gap is not so much around the value of the VXLAN technology itself, but around the architectures and use cases where it can successfully be used today.

Based on my recent experience, VXLAN becomes the topic of conversation whenever the discussion turns to connecting multiple data centers where each site has its own separate vSphere/vCloud infrastructure. I can see how some folks have been misled into thinking VXLAN is the answer here, believing they can now leverage it to connect multiple vCenter Server and vCloud Director infrastructures together. The truth is plain and simple: VXLAN CANNOT be used as the technology to connect multiple vCenter Server and vCloud Director environments today.

The reality is that with the current release of VXLAN there is no supported way of connecting multiple vSphere or vCloud infrastructures together. Considering the platforms' architectures and components, both vSphere- and vCloud-based infrastructures have a lot of dependencies and moving parts, which makes that kind of integration very challenging today. From what I understand, this is something all of the partners involved in the development of the VXLAN technology are looking to address soon.

One of the publications I have found a lot of people reading and using as guidance for vCloud Director is the vCloud Architecture Toolkit (vCAT). The vCAT implementation examples include a couple of VXLAN scenarios, one of them built around disaster recovery, and based on conversations on this very topic I've noticed that some people have overlooked or missed a critical piece of the information discussed and illustrated in that example.

As one of the contributors to the vCloud Architecture Toolkit (vCAT), I have to come out and address some of the misconceptions forming around the VXLAN implementation example in vCAT. The VXLAN disaster recovery example is based on a stretched cluster scenario, not two separate infrastructures: two physical data centers in two different locations, with one logical data center (vSphere/vCloud) spanning both.


The new vCloud Director 5.1 delivers many new features and enhancements; one in particular is the introduction and support of Virtual Extensible LAN (VXLAN). VXLAN is a technology that enables the expansion of isolated vCloud network architectures across layer 2 domains, beyond the limits imposed by the IEEE 802.1Q standard. Using a MAC-in-UDP encapsulation technique, VXLAN adds a 24-bit identifier, which pushes the scale beyond the IEEE 802.1Q limit to roughly 16 million possible logical networks. Figure 1 below illustrates the changes VXLAN makes to the Ethernet frame.

Figure 1: Ethernet Frame with VXLAN Encapsulation

While the conventional IEEE 802.1Q standard works perfectly well, VXLAN surpasses its limitations when greater scalability is required, offering up to roughly 16 million possible networks. Because of these scalable and flexible capabilities, VXLAN is something to consider for large, scalable cloud (vCloud) networks. For a quick crash course on VXLAN, take a look at Duncan Epping's post "Understanding VXLAN and the value prop in just 4 Minutes…"
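
To put that scale difference in concrete numbers, here is a quick back-of-the-envelope comparison in plain Python (no vSphere dependencies), contrasting the 12-bit 802.1Q VLAN ID with the 24-bit VXLAN identifier:

```python
# Quick arithmetic: IEEE 802.1Q VLAN ID space vs. the VXLAN identifier space.

VLAN_ID_BITS = 12        # 802.1Q VLAN ID field
VXLAN_ID_BITS = 24       # identifier carried in the VXLAN header

usable_vlans = 2 ** VLAN_ID_BITS - 2     # VLAN 0 and 4095 are reserved
possible_vxlan_networks = 2 ** VXLAN_ID_BITS

print(f"Usable 802.1Q VLANs     : {usable_vlans:,}")              # 4,094
print(f"Possible VXLAN networks : {possible_vxlan_networks:,}")   # 16,777,216
print(f"Scale factor            : ~{possible_vxlan_networks // usable_vlans:,}x")
```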

Configuring VXLAN for vCloud Director 5.1 requires some initial steps that take place outside of the vCloud Director 5.1 management interface, which I want to illustrate here.

First, a couple of facts:

A VXLAN network pool is automatically created in vCloud Director 5.1 whenever a Provider vDC is created. If the hosts of the backing cluster have not been prepared for VXLAN first, the VXLAN network pool in vCloud Director will display an error. I would recommend identifying all of the prerequisites for the use of VXLAN, from a network as well as a software dependency perspective, before creating a new Provider vDC in vCloud Director 5.1.

To prepare the resource clusters (hosts) for VXLAN, log in to the vCloud Networking and Security appliance (previously known as vShield Manager). Preparing the networks and the hosts requires identifying and assigning the Segment ID pool and the multicast address range. Below are the steps necessary to prepare and configure VXLAN for vCloud Director 5.1.

Step 1: Log in to the vCloud Networking and Security appliance. Select the Datacenter, then select the Network Virtualization tab on the right side of the screen and click the Preparation hyperlink. This reveals the Connectivity and Segment ID screen, as illustrated in Figure 2.

Figure 2: Network Virtualization Settings

Step 2: Click the Edit button on the right side of the screen and enter the Segment ID pool and the multicast address range that the vCloud Networking and Security appliance will use. Segment IDs cannot be mapped one-to-one to individual multicast addresses; both are defined as ranges. Figure 3 illustrates the Segment ID and Multicast Address options.

Figure 3: Segment ID Pool and Multicast Address
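
For those who prefer to script this step, the same Segment ID pool and multicast range can be pushed through the appliance's REST API. The sketch below uses Python with the requests library; the endpoint paths, XML element names, appliance address, credentials, and example ranges are my assumptions based on the vShield/vCNS 5.x VDN API, so verify them against the API guide for your build before use.

```python
# Minimal sketch: submit a Segment ID pool and a multicast address range to the
# vCloud Networking and Security (vShield Manager) appliance over its REST API.
# NOTE: the endpoint paths, XML element names, appliance address, credentials,
# and ranges below are assumptions; verify against the API guide for your build.
import requests

VSM = "https://vsm.example.local"       # hypothetical appliance address
AUTH = ("admin", "default")             # replace with the real credentials
HEADERS = {"Content-Type": "application/xml"}

segment_pool = """
<segmentRange>
  <name>vCloud-SegmentPool</name>
  <begin>5000</begin>
  <end>5999</end>
</segmentRange>
"""

multicast_range = """
<multicastRange>
  <name>vCloud-MulticastPool</name>
  <begin>239.1.1.0</begin>
  <end>239.1.1.255</end>
</multicastRange>
"""

# verify=False only because the 5.1-era appliance ships with a self-signed certificate.
for path, payload in [("/api/2.0/vdn/config/segments", segment_pool),
                      ("/api/2.0/vdn/config/multicasts", multicast_range)]:
    response = requests.post(VSM + path, data=payload,
                             headers=HEADERS, auth=AUTH, verify=False)
    response.raise_for_status()
    print(f"Submitted {path}")
```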

Step 3: Click the Connectivity button in the Network Virtualization tab to prepare the resource clusters (hosts) to participate in VXLAN with vCloud Director. Choose the Distributed Switch to associate with the resource cluster, and enter the VLAN ID of the network segment that will carry the VXLAN traffic coming from the Distributed Switches. Figure 4 illustrates the configuration options.

Figure 4: Resource Cluster

Step 4: Specify the NIC teaming policy that applies to the respective Distributed Switch configuration, along with the MTU settings, and click Finish. The MTU for VXLAN defaults to 1600 bytes because the VXLAN encapsulation increases the size of each packet. This is similar to the configuration of vCDNI in vCloud Director, which required a minimum MTU of 1524 bytes. Overall, the important thing to understand here is the requirement to support these larger (jumbo) frames across all network devices in the transport path. Figure 5 illustrates the NIC teaming policies available as well as the default MTU setting.

Figure 5: VXLAN Attributes
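
If you are curious where the 1600-byte default comes from, one common way to account for the encapsulation overhead is sketched below; the header sizes are the standard VXLAN, UDP, and IPv4 values, and 1600 simply leaves headroom above the strict minimum.

```python
# One common way to account for the VXLAN encapsulation overhead and arrive
# at the 1600-byte default for the transport MTU.

inner_frame   = 1518   # original frame: 1500-byte payload + 14-byte Ethernet header + 4-byte 802.1Q tag
vxlan_header  = 8      # VXLAN header carrying the 24-bit identifier
outer_udp     = 8      # outer UDP header
outer_ipv4    = 20     # outer IPv4 header

minimum_transport_mtu = inner_frame + vxlan_header + outer_udp + outer_ipv4
print(f"Minimum transport MTU: {minimum_transport_mtu} bytes")   # 1554
print("The 1600-byte default simply leaves headroom above that minimum")
```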

After the Distributed Switch specification is completed, the VXLAN vmkernel modules are pushed to and enabled on all of the hosts in the selected cluster. New dvPort groups and vmknic interfaces are automatically created on the Distributed Switch associated with the VXLAN configuration. The new dvPort group can be identified by its unique naming convention, vxw-vmknicPg-dvs-xx-xx-xx-xx. Figure 6 offers an example of the adapter configuration.

Figure 6: VXLAN VMkernel Interfaces

A troublesome result of the automated network configuration process is that all of the new vmknic interfaces are assigned an IP address via DHCP. This can become a configuration management issue: unless there is a DHCP server on that network segment (normally the management network), all of the newly created interfaces will fall back to an IPv4 address within the 169.254/16 prefix, which is only valid for communication with other devices connected to the same physical link.

This configuration will not work, as IPv4 link-local addresses are not suitable for communication with devices that are not directly connected to the same physical or logical link; they are only used where stable, routable addresses are not available. As a result, the host preparation status is displayed as "Not ready" in the vCloud Networking and Security appliance interface. Figure 7 illustrates the issue.

Figure 7: vmknics IP Address Status
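
A quick way to flag vmknics that have fallen back to a 169.254/16 address is Python's standard ipaddress module. The interface names and addresses below are made-up examples; substitute the values reported for your vxw-vmknicPg-dvs-* interfaces.

```python
# Flag VXLAN vmknics that fell back to an IPv4 link-local (169.254/16) address
# because no DHCP server answered on the transport network segment.
import ipaddress

# Example values only; substitute the addresses reported for your environment.
vmknic_addresses = {
    "esx01 vmk1": "169.254.12.7",
    "esx02 vmk1": "169.254.44.19",
    "esx03 vmk1": "192.168.50.13",
}

for nic, addr in vmknic_addresses.items():
    if ipaddress.ip_address(addr).is_link_local:
        print(f"{nic}: {addr} -> NOT READY, assign a valid address")
    else:
        print(f"{nic}: {addr} -> OK")
```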

The solution to this issue is simple: update the automatically assigned vmknic interfaces with valid IP addresses. This can be done manually or in an automated fashion. Figure 8 illustrates the results of a successful configuration.

Figure 8: VXLAN Successful Preparation Results
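
For the automated route, one option is to drive the vSphere API, shown here as a minimal sketch with the open-source pyVmomi bindings. The vCenter address, credentials, and 192.168.50.x addressing plan are illustrative assumptions, and the portgroup match relies on the vxw-vmknicPg-dvs naming convention mentioned earlier, so treat this as a starting point rather than a finished script.

```python
# Sketch: reassign static IP addresses to the auto-created VXLAN vmknics via
# the vSphere API (pyVmomi). The vCenter address, credentials, and 192.168.50.x
# addressing plan are illustrative assumptions; test before using on real hosts.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip certificate validation
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

next_octet = 11                                 # example plan: 192.168.50.11, .12, ...
for host in hosts:
    # Find the auto-created VXLAN dvPort groups by their naming convention.
    vxlan_pg_keys = {pg.key for pg in host.network
                     if isinstance(pg, vim.dvs.DistributedVirtualPortgroup)
                     and pg.name.startswith("vxw-vmknicPg-dvs")}
    for vnic in host.config.network.vnic:
        dvport = vnic.spec.distributedVirtualPort
        if dvport and dvport.portgroupKey in vxlan_pg_keys:
            address = f"192.168.50.{next_octet}"
            spec = vnic.spec
            spec.ip = vim.host.IpConfig(dhcp=False, ipAddress=address,
                                        subnetMask="255.255.255.0")
            host.configManager.networkSystem.UpdateVirtualNic(vnic.device, spec)
            print(f"{host.name}: {vnic.device} -> {address}")
            next_octet += 1

Disconnect(si)
```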

Step 5: At this point, all of the network and host preparation required to use VXLAN with vCloud Director 5.1 has been completed. To start using the VXLAN feature in vCloud Director 5.1, create a Provider vDC; a VXLAN network pool is automatically created along with it. Figure 9 illustrates the VXLAN-capable network pool in the vCloud Director management interface.

Figure 9: VXLAN Network Pool in vCloud Director 5.1
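
If you want to confirm the pool from outside the UI, the vCloud Director 5.1 API exposes network pools through its query service. The sketch below is assumption-heavy: the query type name, API version string, cell address, and credentials should all be checked against the vCloud API reference for your environment.

```python
# Sketch: confirm the VXLAN-backed network pool is visible through the vCloud
# Director 5.1 API query service. The query type name, API version string,
# cell address, and credentials are assumptions to verify against the vCloud
# API reference for your environment.
import requests

VCD = "https://vcd.example.local"               # hypothetical cell address
ACCEPT = {"Accept": "application/*+xml;version=5.1"}

# Log in as a system administrator ("user@System" form) to obtain a session token.
session = requests.post(f"{VCD}/api/sessions", headers=ACCEPT,
                        auth=("administrator@System", "password"), verify=False)
session.raise_for_status()
headers = {"x-vcloud-authorization": session.headers["x-vcloud-authorization"], **ACCEPT}

# List the network pools; the VXLAN pool created with the Provider vDC should appear here.
pools = requests.get(f"{VCD}/api/query?type=networkPool", headers=headers, verify=False)
pools.raise_for_status()
print(pools.text)
```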

There you have it, folks. You can now proceed with the creation and configuration of Organization and vApp networks to harness the scalable features delivered by VXLAN in vCloud Director 5.1 infrastructures.

Enjoy!