The release and support of VXLAN has generated a great deal of interest in the community. As one of the first to deliver content on VXLAN implementation examples, I’ve been approached by customers and colleagues with questions around VXLAN architecture designs and use cases. For the most part, I see a large audience that is not really up to speed on the logistics of VXLAN as it relates to supported implementation scenarios for vSphere and vCloud infrastructures. The knowledge gap is not so much around the value of the VXLAN technology itself, but around the architectures and use cases where it can successfully be used today.
Based on my recent experiences, VXLAN becomes the topic of conversation whenever connecting multiple data centers, each with its own separate vSphere/vCloud infrastructure, comes up. I can see how some folks have been misled into thinking VXLAN is the answer here, assuming they can now leverage VXLAN to connect multiple vCenter Server and vCloud Director infrastructures together. The truth, plain and simple, is that VXLAN CANNOT be used as the technology to connect multiple vCenter Server and vCloud Director environments today.
The reality is that with the current release of VXLAN there is no supported way of connecting multiple vSphere or vCloud infrastructures together. Considering the platforms’ architectures and components, both vSphere- and vCloud-based infrastructures have a lot of dependencies and moving parts, which makes it very challenging to integrate them in a way that would make that possible today. From what I understand, this is something that all of the partners involved with the development of the VXLAN technology are looking to address soon.
One of the publications I have found a lot of people reading and using as guidance for vCloud Director is the vCloud Architecture Toolkit (vCAT). As part of the vCAT implementation examples we included a couple of VXLAN scenarios, one of them around disaster recovery, and based on conversations about this very topic I’ve noticed that some people have overlooked or missed one critical piece of the information discussed and illustrated in that example.
As one of the contributors to the vCloud Architecture Toolkit (vCAT), I have to come out and address some of the misconceptions folks have gathered around the VXLAN implementation example in vCAT. The VXLAN disaster recovery example is based on a stretched cluster scenario, not two separate infrastructures: two physical data centers in two different locations, with one logical data center (vSphere/vCloud) spanning both physical sites.
Recently I’ve been writing and publishing a series of articles on the vSphere Storage Appliance on the VMware Storage blog. There is quite a bit of interest in this solution now that it supports centralized and decentralized management of multiple VSA clusters. One of the new features of VSA 5.1 is that it allows you to centrally manage multiple two-node or three-node clusters from a single instance of vCenter Server. I’ve covered the VSA centralized management for ROBO topic at length on the VMware Storage blog, so I recommend reading those articles to get up to speed on that.
One of the big topics around the use of VSA for ROBO is the correct use and deployment of the VSA Cluster Service (VSACS). It’s important to know that all scenarios based on two-node clusters require the deployment of the VSACS. The requirements for this solution demand that the VSACS reside outside of the resources provided by the VSA cluster in order to avoid VSA cluster service outages. The options available for the deployment of the VSACS range from using a virtual machine, to an existing shared physical system, to an inexpensive hardware appliance that meets the application’s requirements.
So in an effort to simplify implementation, and possibly even reduce the cost around the management and implementation of two-node VSA cluster scenarios, the use of the vMA appliance can be considered. The vMA appliance is now built on top of a Linux distribution (SLES) supported by the VSACS, and it presents the following benefits for this solution:
- Preconfigured and supported operating system
- Free of charge
- Small footprint
I recently posted an article about Architecting Storage Offering for vCloud Director 5.1. In the article I discussed new architecture considerations for the latest version of vCloud Director.
The middle of the article focuses on the use of Storage Profiles, among other vSphere features that can now be leveraged by vCloud Director.
When I referenced the use of Storage Profiles I stated the following:
“The “*(Any)” storage profile is there by default, but it should not be included as part of any PVDC without considering the possible performance and operational risks.”
The reason for my statement was the possible risks any vCloud Director infrastructure can be exposed to without the correct use and understanding of the new storage features and capabilities discussed in the article.
As I’ve said before, vCloud Director is now capable of leveraging some of the vSphere storage technologies. For the most part, a majority of the storage related configurations are defined outside of the vCloud Director interface i.e. VM Storage Profile, Storage Clusters, etc. Cormac Hogan wrote an excellent article about the configuration and use of Storage Profiles. It’s a Must Read!
Storage Profiles are defined, organized, and configured in vSphere. The majority of the time we tend to label them by referencing precious metals. An example of that is illustrated in the figure below.
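To make the risk of the “*(Any)” profile concrete, here is a minimal sketch. All profile labels and datastore names below are hypothetical, and the matching logic is a simplification of what vSphere actually does, but it illustrates the point: a tiered profile resolves only to its own datastores, while “*(Any)” matches everything, including the slowest tier.

```python
# Hypothetical datastores, labeled with precious-metal storage profiles.
datastores = {
    "ds-ssd-01": "Gold",
    "ds-sas-01": "Silver",
    "ds-sata-01": "Bronze",
}

def matching_datastores(profile):
    """Return the datastores a storage profile resolves to.

    "*(Any)" matches everything, which is exactly why exposing it in a
    PVDC risks placing workloads on an unintended (slow) tier.
    """
    if profile == "*(Any)":
        return sorted(datastores)
    return sorted(ds for ds, tier in datastores.items() if tier == profile)

print(matching_datastores("Gold"))     # only the Gold-tier datastore
print(matching_datastores("*(Any)"))   # every datastore, including Bronze
```

In other words, a vApp deployed against “*(Any)” can legitimately land on the Bronze-tier datastore, which is the performance and operational risk the quoted statement warns about.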
As a follow-up to my previous post on the VMworld 2012 Top 10 Most Popular Sessions, here is the second and last of my sessions nominated to this year’s top 10.
This session was called “VMware vSphere Cluster Resource Pools Best Practices”. In this session Frank Denneman and I discuss in detail what to consider and calculate when utilizing resource pools. This is a topic of extreme importance that has been covered and discussed many times by many of us in the community. This time around Frank put together a good amount of useful information and pointed out some of the major pitfalls of resource pool usage. The dynamic resource management capabilities provided by the vSphere platform add immense value to any virtualized infrastructure, and everyone should know and understand the details.
Now, because of the importance and powerful role resource pools play in the platform as it relates to performance, service level agreements, and integration with other products and solutions, it’s important to understand what they do, what they can be used for, when to use them, and when not to use them.
One of the biggest and most discussed gotchas related to the use of resource pools has been using them as folders. This type of use is loaded with risks, and it is one of the discussions Frank and I lead in this session, among others.
Resource pools are very useful for many reasons, but in order to use them correctly you have to know and understand the details. Our goal for this session was to deliver deep technical information about resource pools in a simple, relatable format that everyone could understand. Basically, Frank talked about algorithms, logic, and numbers and made your brain hurt, and then I made everyone hungry by relating the information in the form of pizza and pizza pie consumption.
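To give one concrete taste of the folders pitfall, here is a simplified back-of-the-napkin model (not the actual DRS scheduler math from the session): sibling pools with equal shares each get an equal slice of the cluster under contention, regardless of how many VMs they hold, so the pool carrying more VMs leaves each of its VMs a smaller piece of the pie.

```python
def per_vm_share(pool_shares, num_vms):
    """Simplified model: a pool's CPU shares divided evenly among
    identical VMs inside it. Real DRS entitlement math is more involved,
    but the dilution effect when pools are used as folders is the same."""
    return pool_shares / num_vms

# Two sibling pools created as "folders", each left at the same
# default share value (4000 used here for illustration):
light_pool_vm = per_vm_share(4000, 4)    # 4 VMs  -> 1000 shares per VM
heavy_pool_vm = per_vm_share(4000, 20)   # 20 VMs ->  200 shares per VM

# Under contention, a VM in the busier "folder" is entitled to far less.
print(light_pool_vm / heavy_pool_vm)  # 5.0
```

That five-to-one skew happens silently, purely because of how many VMs ended up in each “folder”, which is exactly why organizing VMs with resource pools is risky.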
The session was well received and people seemed to enjoy it at both VMworld events. For those of you who didn’t get an opportunity to attend VMworld this year, here you go.
Recently a previous customer of mine asked me for tips on how to systematically control remote session timeouts to ESXi hosts. The context was standardizing console session timeouts across multiple ESXi hosts across an enterprise, a common requirement for environments with regulated security postures. I figured this may be useful, so I decided to share this information with a wider audience than just my customer and good friend Todd (@tdamore).
The security requirement can be satisfied by leveraging a new advanced security setting included in the vSphere 5.1 platform called “ESXiShellInteractiveTimeOut”. Any vCenter user with elevated (admin-level) privileges can use this advanced setting to address ESXi host remote session timeouts systematically. It allows you to implement a standardized timeout value for interactive sessions to ESXi hosts. The timeout value could be dictated by a standardized corporate security policy or whatever fits your organization. Overall, this advanced setting can facilitate automating the termination of idle sessions after a defined period of time (the value is defined in seconds).
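As a sketch of what scripting this could look like, the helper below is hypothetical; the option key is the real one, but in a real script the resulting key/value pair would still need to be pushed to each host through the vSphere API (for example via pyVmomi’s OptionManager) or another automation tool.

```python
# The real advanced-option key; the helper around it is a hypothetical sketch.
TIMEOUT_KEY = "UserVars.ESXiShellInteractiveTimeOut"

def build_timeout_option(seconds):
    """Build the key/value pair for the interactive shell timeout.

    The value is in seconds; 0 disables the timeout entirely. In a real
    script this dict would be turned into an OptionValue and applied to
    each host's advanced settings through the vSphere API.
    """
    if seconds < 0:
        raise ValueError("timeout must be zero or a positive number of seconds")
    return {"key": TIMEOUT_KEY, "value": seconds}

# Terminate idle interactive sessions after 15 minutes:
option = build_timeout_option(15 * 60)
print(option)  # {'key': 'UserVars.ESXiShellInteractiveTimeOut', 'value': 900}
```

On the host itself, the equivalent change can typically be made with esxcli (verify the option path on your build): `esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 900`.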
Now, getting to the advanced setting location is very simple, even if you’re new to the new vSphere Web Client. The screenshots below illustrate the location and configuration option.
Advanced Setting Location
ESXi Advanced Setting Configuration
As the screenshots above illustrate, the advanced setting is configured on a per-host basis. Using this setting in large environments can be difficult to manage if applied host by host and not managed properly. I would recommend deploying this configuration as part of a Host Profiles implementation, which is a simplified, validated, and consistent approach.
The process for adding the “ESXiShellInteractiveTimeOut“ is listed below:
- Go to the advanced settings on an ESXi host and enter the adequate value for the “ESXiShellInteractiveTimeOut” setting
- Create a Host Profile referencing the hosts with the modified “ESXiShellInteractiveTimeOut” settings
- Verify the “ESXiShellInteractiveTimeOut” setting value is listed under the Advanced Configuration Option
- The UserVars.ESXiShellInteractiveTimeOut setting should be visible in the Host Profile as illustrated below
Host Profile with UserVars.ESXiShellInteractiveTimeOut
Hope everyone finds this useful and handy.
To get more information on my blog postings follow me on Twitter: @PunchingClouds