
I recently posted an article about architecting storage offerings for vCloud Director 5.1, in which I discussed new architecture considerations for the latest version of vCloud Director.

The middle of the article focuses on the use of Storage Profiles, among other vSphere features that can now be leveraged by vCloud Director.

When I referenced the use of Storage Profiles I stated the following:

“The “*(Any)” storage profile is there by default, but it should not be included as part of any PVDC without considering the possible performance and operational risks.”

The reason for my statement is the risk any vCloud Director infrastructure is exposed to without a correct understanding and use of the new storage features and capabilities discussed in the article.

As I’ve said before, vCloud Director is now capable of leveraging some of the vSphere storage technologies. For the most part, the storage-related configuration is defined outside of the vCloud Director interface (e.g., VM Storage Profiles, Datastore Clusters). Cormac Hogan wrote an excellent article about the configuration and use of Storage Profiles. It’s a must read!

Storage Profiles are defined, organized, and configured in vSphere. The majority of the time, we tend to label them after precious metals, as illustrated in the figure below.


Prior to the release of the vCloud Suite and vCloud Director 5.1, discussions about architecting storage offerings for vCloud Director were based on tiered models focused on performance and capacity.

Access to tiered offerings was previously achieved by creating multiple provider virtual datacenters (PVDCs), each consisting of different storage characteristics that could be offered to different tenants.

Storage offerings revolve around disk types, protocols, capacity, performance, and other attributes, which are then bundled into a service-level grouping. The majority of the time they are packaged and labeled as a precious metal or tiered level (e.g., Gold, Silver, Bronze, or Tier 1, Tier 2, Tier 3).
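To make the tiering idea a bit more concrete, here is a purely illustrative sketch in Python (the disk types, protocols, and capacities are made-up examples, not recommendations) of how a provider might catalog such service-level groupings before exposing them through vCloud Director:

```python
# Purely illustrative tier catalog; disk types, protocols, and capacities are examples only.
storage_tiers = {
    "Gold":   {"disk": "SSD",       "protocol": "FC",  "raid": "RAID 10", "capacity_tb": 20},
    "Silver": {"disk": "15K SAS",   "protocol": "FC",  "raid": "RAID 5",  "capacity_tb": 40},
    "Bronze": {"disk": "7.2K SATA", "protocol": "NFS", "raid": "RAID 6",  "capacity_tb": 80},
}

for tier, attrs in storage_tiers.items():
    print(f"{tier}: {attrs['disk']} over {attrs['protocol']}, "
          f"{attrs['raid']}, {attrs['capacity_tb']} TB usable")
```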

Predominantly, the goal is to design storage offerings, comprised of one or more services, that satisfy any tenant's application requirements. In the previous version of vCloud Director, multi-tier storage offerings were designed and made accessible via separate PVDC constructs. The illustration below is an example of the approach utilized with the previous version of vCloud Director.

Because vCloud Director 5.1 is now able to leverage vSphere storage features such as Storage Profiles and Datastore Clusters, the approach for architecting storage offerings should be revised. Storage Profiles and Datastore Clusters are native features of the vSphere core platform, not of vCloud Director, which means that most decisions related to storage features and storage hardware design are made at the vSphere layer.

Multiple PVDC Design


By leveraging Storage Profiles and Datastore Clusters, it’s now possible to design a single vCloud Director PVDC capable of providing multi-tier storage offerings. This approach could reduce the complexity previously introduced by multiple-PVDC designs for multi-tier storage offerings, and it could also improve operational efficiency and deployment accuracy.

From a manageability standpoint, the use of both storage features could have an impact on the initial implementation effort. Storage array properties and capabilities can be identified automatically with the vSphere Storage APIs for Storage Awareness (VASA), which take care of classifying datastores from a performance standpoint. If VASA is not leveraged, the alternative is manually user-defined storage capabilities, which requires a bigger effort.
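If the manual route is taken, a quick inventory of what already exists in vSphere is a good starting point. The following is a minimal pyVmomi sketch, assuming a reachable vCenter Server and placeholder credentials, that lists each Datastore Cluster (StoragePod) and its member datastores so they can be mapped against the intended storage tiers:

```python
# Minimal inventory sketch; the vCenter hostname and credentials are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local", pwd="***")
content = si.RetrieveContent()

# Walk the inventory for Datastore Clusters (StoragePod objects) and list their members.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.StoragePod], True)
for pod in view.view:
    members = [ds.name for ds in pod.childEntity if isinstance(ds, vim.Datastore)]
    print(f"Datastore Cluster '{pod.name}': {', '.join(members)}")

Disconnect(si)
```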

Leveraging both features at the vSphere layer allows vCloud Director to simply consume that multi-tier storage design. Storage Profiles take care of the vApp/workload performance-related part of the design, while Datastore Clusters can be used to organize or group datastores based on their Storage Profiles or tiers. This approach can also serve as a risk mitigation strategy for the deployment of vApps and their respective workloads, because vApps/workloads are forced to stay on, or move to, a predefined datastore cluster and remain on compliant datastores. The image below illustrates the use of Storage Profiles and Datastore Clusters in the vSphere Web Client.

Storage Profiles, and Storage Clusters in vSphere Web Client

The three Storage Profiles and Datastore Clusters illustrated above will be available in vCloud Director during the creation of a PVDC. When creating a PVDC, you can select the options that are applicable to a specific service offering. The image below illustrates the options presented in vCloud Director during the creation of a PVDC.

Provider VDC Creation in vCloud Director 5.1


The “*(Any)” storage profile is there by default, but it should not be included as part of any PVDC without considering the possible performance and operational risks. This is something I will cover in a separate future blog post. Once the PVDCs have been created, utilization and capacity metrics can be tracked directly in vCloud Director as well as in vSphere. The images below illustrate the Storage Profile and Datastore Cluster views within vCloud Director 5.1.

vCloud Director Storage Profile View


vCloud Director Datastore Cluster View
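For those who prefer to pull this information programmatically, the same data is exposed through the vCloud Director 5.1 REST API. Below is a hedged Python sketch using the requests library; the hostname, credentials, and the query type name (providerVdcStorageProfile) are assumptions that should be verified against the vCloud API reference for your build:

```python
# Hedged sketch against the vCloud Director 5.1 REST API; host, credentials,
# and the query type name are assumptions to verify against the API reference.
import requests

VCD = "https://vcd.example.com"
headers = {"Accept": "application/*+xml;version=5.1"}

# Log in: vCloud Director returns the session token in the x-vcloud-authorization header.
login = requests.post(f"{VCD}/api/sessions",
                      auth=("administrator@System", "***"),
                      headers=headers, verify=False)
login.raise_for_status()
headers["x-vcloud-authorization"] = login.headers["x-vcloud-authorization"]

# Query the provider VDC storage profile records to review limits and usage.
records = requests.get(f"{VCD}/api/query",
                       params={"type": "providerVdcStorageProfile", "format": "records"},
                       headers=headers, verify=False)
print(records.text)
```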

One of the goals of any architecture design is to reduce the level of complexity whenever and wherever possible. It’s always good to weigh how the technology used will impact the design from an operations perspective. You can now explore the possibilities of streamlining storage offering designs by considering vCloud Director’s ability to leverage vSphere features, and end up with less complex solutions such as the one discussed here and illustrated below.

New PVDC Design

I hope some of you folks out there find this post helpful and useful in your cloud design journeys.

Enjoy!

Get notified of my blog postings by following me on Twitter: @PunchingClouds

The new vCloud Director 5.1 delivers many new features and enhancements; one in particular is the introduction and support of Virtual Extensible LAN (VXLAN). VXLAN is a technology that enables the expansion of isolated vCloud architectures across layer 2 domains beyond the limits imposed by the IEEE 802.1Q standard. By utilizing a new MAC-in-UDP encapsulation technique, VXLAN adds a 24-bit identifier, which allows networks to scale beyond the IEEE 802.1Q limit to roughly 16 million logical networks. Figure 1 below illustrates the changes added to the Ethernet frame by VXLAN.

Figure 1: Ethernet Frame with VXLAN Encapsulation
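The scalability claim follows directly from the size of the identifiers, as the quick comparison below shows:

```python
# 802.1Q carries a 12-bit VLAN ID; the VXLAN header carries a 24-bit network identifier.
vlan_ids = 2 ** 12      # 4,096 possible VLANs
vxlan_ids = 2 ** 24     # 16,777,216 possible logical networks

print(f"802.1Q VLANs: {vlan_ids:,}")
print(f"VXLAN logical networks: {vxlan_ids:,}")
```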

While the conventional IEEE 802.1Q standard works perfectly well, VXLAN surpasses its limitation when trying to meet greater scalability demands, offering up to 16 million possible networks. Because of the scalability and flexibility offered by VXLAN, this technology is something to consider for large, scalable cloud (vCloud) networks. For a quick crash course on VXLAN, take a look at Duncan Epping’s post “Understanding VXLAN and the value prop in just 4 Minutes…”

Configuring VXLAN in vCloud Director 5.1 requires some initial steps outside of the vCloud Director 5.1 management interface, which I want to illustrate here.

First a couple of facts:

A VXLAN network pool is automatically created in vCloud Director 5.1 whenever a Provider vDC is created. If the hosts of a given cluster have not been prepared to use VXLAN first, the VXLAN network pool in vCloud Director will display an error. I recommend identifying all of the prerequisites for the use of VXLAN, from both a network and a software dependency perspective, before creating a new Provider vDC in vCloud Director 5.1.

In order to prepare the resource clusters (hosts) to use VXLAN, log in to the vCloud Networking and Security appliance (previously known as vShield Manager). The preparation of the networks as well as the hosts requires the identification and assignment of the Segment ID pool and the multicast addresses. Below are the steps necessary to prepare and configure VXLAN for vCloud Director 5.1.

Step 1: Log in to the vCloud Networking and Security appliance. Select the Datacenter. Then, select the Network Virtualization tab on the right side of the screen and click the Preparation hyperlink. This will reveal the Connectivity and Segment ID screen, as illustrated in Figure 2.

Figure 2: Network Virtualization Settings


Step 2: Click the Edit button on the right side of the screen and enter the Segment ID pool and multicast addresses that will be used by the vCloud Networking and Security appliance. Segment IDs cannot be mapped directly to any one multicast address; one-to-one mapping is not possible, and the Segment ID and multicast address configuration is instead defined in ranges. Figure 3 illustrates the Segment ID and Multicast Address options.

Figure 3: Segment ID Pool and Multicast Address
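To illustrate why the configuration is defined in ranges rather than as a one-to-one mapping, here is a purely conceptual Python sketch; it is not the actual vCloud Networking and Security mapping algorithm, and the ranges are examples only. The point is simply that many segment IDs end up sharing a smaller pool of multicast groups:

```python
import ipaddress

# Example ranges only; use the Segment ID pool and multicast range planned for your environment.
segment_ids = range(5000, 6000)                   # example Segment ID pool
mcast_start = ipaddress.IPv4Address("239.0.0.0")  # example multicast range start
mcast_size = 256                                  # example multicast range size

def multicast_for(segment_id: int) -> ipaddress.IPv4Address:
    # Many-to-one: segment IDs wrap around the multicast range, so a one-to-one
    # segment-to-group mapping is not possible.
    return mcast_start + (segment_id - segment_ids.start) % mcast_size

print(multicast_for(5000), multicast_for(5256))   # both resolve to the same group
```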


Step 3: Click the Connectivity button in the Network Virtualization tab to prepare the resource clusters (hosts) to be part of the VXLAN fabric used with vCloud Director. Choose the Distributed Switch that is to be associated with the resource cluster, and enter the VLAN ID of the network segment that will carry the VXLAN traffic coming from the Distributed Switches. Figure 4 illustrates the configuration options.

Figure 4: Resource Cluster


Step 4: Specify the NIC teaming policy that applies to the respective Distributed Switch configuration, as well as the MTU setting, and click Finish. The MTU for VXLAN defaults to 1600 bytes because the VXLAN encapsulation increases the size of each packet. This is similar to the configuration of vCDNI in vCloud Director, which required a minimum MTU of 1524. Overall, the important thing to understand here is the requirement to use jumbo frames across all network devices. Figure 5 illustrates the NIC teaming policies available as well as the default MTU setting.

Figure 5: VXLAN Attributes
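The 1600-byte default comes straight from the encapsulation overhead, which is easy to sanity-check:

```python
# VXLAN encapsulation overhead per frame (add 4 bytes if the transport VLAN is tagged).
outer_ethernet = 14   # outer MAC header
outer_ipv4     = 20   # outer IP header
outer_udp      = 8    # outer UDP header
vxlan_header   = 8    # VXLAN header carrying the 24-bit network identifier

overhead = outer_ethernet + outer_ipv4 + outer_udp + vxlan_header
print(f"Encapsulation overhead: {overhead} bytes")                         # 50 bytes
print(f"Transport MTU needed for a 1500-byte payload: {1500 + overhead}")  # 1550 bytes
```

The 1600-byte default simply leaves comfortable headroom above that minimum.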


After choosing and completing the specification for the Distributed Switches, the VXLAN vmkernel modules are pushed to and enabled on all of the hosts that are part of the selected cluster. New dvPort groups and vmknic interfaces are automatically created on the Distributed Switch associated with the VXLAN. The new dvPort group can be identified by the unique naming convention vxw-vmknicPg-dvs-xx-xx-xx-xx. Figure 6 offers an example of the adapter configuration.

Figure 6: VXLAN VMkernel Interfaces


A troublesome result of the automated network configuration process for the vmknics is that all interfaces are set to obtain an IP address via DHCP. This behavior can become a configuration management issue: unless there is a DHCP server on that network segment (normally the management network), all of the newly created interfaces will receive an IPv4 address within the 169.254/16 prefix, which is only valid for communication with other devices connected to the same physical link.

This configuration will not work, as IPv4 link-local addresses are not suitable for communication with devices that are not directly connected to the same physical or logical link and are only used where stable, routable addresses are not available. As a result, the status of the host preparation will be displayed as “Not ready” in the vCloud Networking and Security appliance interface. Figure 7 illustrates the issue discussed above.

Figure 7: vmknics IP Address Status


The solution to this issue is simple: update the automatically assigned IP addresses on the vmknic interfaces with valid ones. This can be done manually or in an automated fashion. Figure 8 illustrates the results of a successful configuration.

Figure 8: VXLAN Successful Preparation Results
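For the automated route, something along the lines of the following pyVmomi sketch can do the job. The vCenter hostname, credentials, static addresses, and netmask are placeholders; the logic simply finds vmknics that picked up a 169.254.x.x address and reassigns them a static IP:

```python
# Hedged automation sketch: reassign static IPs to vmknics stuck on link-local addresses.
# The vCenter hostname, credentials, addresses, and netmask are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local", pwd="***")
content = si.RetrieveContent()

static_ips = iter(["10.10.50.11", "10.10.50.12", "10.10.50.13"])  # one per prepared host
netmask = "255.255.255.0"

hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    net_sys = host.configManager.networkSystem
    for vnic in host.config.network.vnic:
        ip = vnic.spec.ip
        if ip and ip.ipAddress and ip.ipAddress.startswith("169.254."):
            # Replace the DHCP/link-local configuration with a static address.
            vnic.spec.ip = vim.host.IpConfig(dhcp=False,
                                             ipAddress=next(static_ips),
                                             subnetMask=netmask)
            net_sys.UpdateVirtualNic(vnic.device, vnic.spec)

Disconnect(si)
```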


Step 5: At this point, all of the required network and host preparation for the use of VXLAN with vCloud Director 5.1 has been completed. To start using the VXLAN feature in vCloud Director 5.1, create a Provider vDC; a VXLAN network pool is created automatically. Figure 9 illustrates the VXLAN-capable network pool in the management interface of vCloud Director.

Figure 9: VXLAN Network Pool in vCloud Director 5.1


There you have it, folks. You can now proceed with the creation and configuration of Organization and vApp networks to harness the scalability delivered by VXLAN in vCloud Director 5.1 infrastructures.

Enjoy!


Those who attended VMworld 2012 last week and witnessed the launch of the new vCloud Suite may have heard about the changes and new technology features added to the products in the platform.

One of the features mentioned during the presentation “Architecting a Cloud Infrastructure”, which was delivered by @DuncanYB, @AidersD, @DaveHill99, @ccolotti, and myself, is the high availability setting for the vShield Edge devices.

Prior to the release of the vCloud Suite, the recommendation was to use FT to protect the vShield Manager appliance in order to mitigate a single point of failure in the vCloud network and security domain. The new release of vCloud Director 5.1 and vShield 5.1 provides capabilities to mitigate those risks, but only for the vShield Edge Gateway devices that are deployed within the cloud.

One of the many noticeable changes introduced with the new release of the vCloud Suite is that the vShield Manager appliance is now deployed with 2 vCPUs. While support of FT for workloads with multiple vCPUs is under technical review, FT is currently not supported for multi-vCPU workloads.

Because of this workload design change in the vShield appliance, the previous recommendation, which relied on FT to protect the vShield Manager appliance, no longer applies. Instead, there has to be an immediate focus on proper backup and maintenance procedures for this key component of the vCloud infrastructure. The vShield backup configuration options are illustrated in the screenshot below.

vShield Backup Configuration Options

The vShield Edge Gateway design now leverages an active/passive architecture for the deployment of vShield Edge Gateway devices within vCloud Director 5.1. This new design provides better scalability and flexibility for variably sized environments, and it reduces the risk of security service outages in vCloud environments. While leveraging the new vShield Edge Gateway HA capability addresses availability concerns, it is important to plan for the effects, which include an impact on capacity management, resource allocation, and overall infrastructure manageability. The screenshot below illustrates the wizard in vCloud Director with the option to deploy the vShield Edge Gateway device in HA mode.

vCloud Wizard: vShield Edge Gateway device HA Option

There are two different options for deploying vShield Edge Gateway devices within vCloud Director 5.1: Compact and Full.  While the use cases for these different options are outside the scope of this post, I do want to point out the effect this will have in terms of resource capacity consumption.

Each vShield Edge Gateway device carries a resource reservation, which should be considered when deciding to deploy vShield Edge Gateway devices in HA mode. A vShield Edge Gateway Compact instance is assigned a memory reservation of 128 MB and a CPU reservation of 64 MHz, while a vShield Edge Gateway Full instance is assigned a memory reservation of 512 MB and a CPU reservation of 128 MHz.

vShield Edge Gateway Devices Resource Reservation (Compact, and Full)

From a capacity perspective, vCloud architectures with large numbers of organizations and networks are impacted the most, because the usable capacity of a provider virtual datacenter is reduced by the resource reservation of each vShield Edge Gateway device. As an unplanned consequence of using vShield Edge Gateway devices in HA mode, a large amount of resource capacity can end up being used just to maintain the infrastructure, depleting the capacity available for workloads in the cloud.
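To put the reservation numbers from above into perspective, here is a quick back-of-the-envelope calculation in Python; the number of edge-backed networks is hypothetical:

```python
# Reservation figures per vShield Edge Gateway Full instance (from above).
full = {"mem_mb": 512, "cpu_mhz": 128}

edge_networks = 200       # hypothetical number of edge-backed organization networks
edges_per_network = 2     # HA mode deploys an active/passive pair per network

mem_gb = edge_networks * edges_per_network * full["mem_mb"] / 1024
cpu_ghz = edge_networks * edges_per_network * full["cpu_mhz"] / 1000
print(f"Reserved for Full edges in HA mode: {mem_gb:.0f} GB RAM, {cpu_ghz:.1f} GHz CPU")
# -> 200 GB of RAM and 51.2 GHz of CPU taken off the PVDC before any tenant workload runs.
```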

Keeping this consequence in mind, vCloud Provider vDCs and allocation models should be analyzed and adequately planned and designed to support vShield Edge Gateways in HA configurations. The resources allocated to all of the vShield Edge Gateway devices are assigned from the Provider vDCs, and they can be found in a Resource Pool construct named “System vDC” within each vCloud organization.

System vDC Resource Pool Construct

Capacity management is key for the success of every virtualized environment. It’s also crucial to understand the positive and negative effects of different configurations made to any environment. Because of the dynamic nature of resource allocation in vCloud, it’s important to understand how the use of some of the new features can affect the services provided to customers, as well as the provider’s resource procurement and management cycles.

Indeed, vShield Edge Gateway HA mode improves availability, but at the cost of capacity. In virtualized and cloud infrastructure architectures, everything is relational: the more we consider the relationships between different cloud designs and configurations, the better our solutions will be. Enjoy!