Hot off the press!!! And ready just in time for your turkey day break so you can feed your brain a bit more. The new Virtual SAN Stretched Cluster Bandwidth Guidelines white paper is officially available.

Earlier this summer, when the feature was announced with the release of Virtual SAN 6.1, my buddy Duncan Epping and I provided example formulas for calculating network bandwidth requirements for Virtual SAN Stretched Clusters during our “STO5333 – Building a Stretched Cluster with Virtual SAN” session in both San Francisco and Barcelona.
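To make the sizing idea concrete, here is a minimal sketch of the kind of calculation those formulas cover, assuming the commonly cited form of inter-site bandwidth = write bandwidth × a data multiplier × a resynchronization multiplier. The multiplier values and the example workload below are illustrative placeholders; the validated figures are the ones in the white paper.

```python
# Illustrative inter-site bandwidth estimate for a Virtual SAN Stretched Cluster.
# B = write_bandwidth * data_multiplier * resync_multiplier is the general sizing
# pattern discussed in the session; treat the default multipliers below as
# placeholders and use the validated values from the white paper for real designs.

def required_intersite_bandwidth_mbps(write_iops: float,
                                      io_size_kb: float,
                                      data_multiplier: float = 1.4,
                                      resync_multiplier: float = 1.25) -> float:
    """Estimate the bandwidth (Mbps) needed between the two data sites."""
    write_bandwidth_mbps = write_iops * io_size_kb * 8 / 1000  # KB/s -> Mbps
    return write_bandwidth_mbps * data_multiplier * resync_multiplier

# Example workload: 10,000 write IOPS averaging 4 KB per I/O.
print(f"{required_intersite_bandwidth_mbps(10_000, 4):.0f} Mbps")
```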

We also promised that documentation with all of the detailed information on the “how, why, when, and where,” along with the validated metrics from our engineering team, would be published soon. Well, here it is… for those looking to evaluate Virtual SAN Stretched Clusters and to gain a better understanding of the bandwidth sizing semantics for the solution.

The paper can now be downloaded directly from the link below:

VMware Virtual SAN Stretched Cluster Bandwidth Sizing Guidance White Paper

BOOM!!! Get Some!! 😀

– Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVol) and other Storage and Availability technologies, as well as vSphere Integrated OpenStack (VIO), and Cloud-Native Applications (CNA) be sure to follow me on Twitter: @PunchingClouds

During the past two VMworld conferences, Christos Karamanolis (@XtosK), lead architect of Virtual SAN and CTO of Storage and Availability at VMware, other members of our team, and I have spent a good amount of time covering all aspects of the Virtual SAN architecture. One of the key topics covered has been Virtual SAN’s caching algorithms. For the most part, the specifics about the Virtual SAN algorithms, their behavior, and functionality haven’t been publicly available for general consumption.

Due to continuous demand from customers and storage enthusiasts, we have created and published a white paper called “An Overview of VMware Virtual SAN Caching Algorithms.” The paper was developed to provide additional insight into the operations of Virtual SAN and its caching algorithms. It explains the algorithms’ different behaviors as well as the protocols utilized for the different architectures. It also explains how Virtual SAN intelligently leverages flash, memory, and traditional magnetic disks, and details how Virtual SAN combines the capacity, performance, and endurance of each class of storage.
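As a quick illustration of how the cache device gets carved up in a hybrid configuration, here is a small sketch. The 70/30 read-cache/write-buffer split is the commonly documented default for hybrid Virtual SAN, but treat the percentages and the example device size as illustrative and defer to the white paper for the authoritative details.

```python
# Illustrative hybrid cache-device split: roughly 70% of the flash device is
# commonly described as read cache and 30% as write buffer in hybrid Virtual SAN.
# The percentages and device size below are examples, not a sizing recommendation.

def hybrid_cache_split_gb(cache_device_gb: float,
                          read_cache_pct: float = 0.70,
                          write_buffer_pct: float = 0.30) -> dict:
    return {
        "read_cache_gb": cache_device_gb * read_cache_pct,
        "write_buffer_gb": cache_device_gb * write_buffer_pct,
    }

# Example: a 400 GB flash device in a disk group.
print(hybrid_cache_split_gb(400))
```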

The white paper can be downloaded directly from the following link – “An Overview of VMware Virtual SAN Caching Algorithms”.
For those of you who attended VMworld 2015 this year and had access to the online recordings, take a look at session STO5336 for the Virtual SAN Deep Dive session where we covered the algorithms as well.

– Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVol) and other Storage and Availability technologies, as well as vSphere Integrated OpenStack (VIO), and Cloud-Native Applications (CNA) be sure to follow me on Twitter: @PunchingClouds


A new Microsoft Exchange 2013 on Virtual SAN 6.1 reference architecture is now available on the VMware Virtual SAN product resource page. This new reference architecture walks through the validation of Virtual SAN’s ability to support Microsoft Exchange 2013, designed to satisfy a high-IOPS mailbox configuration with Exchange Database Availability Groups (DAGs). The reference architecture is based on a resilient design that covers VMware vSphere clustering technology and Exchange DAG, as well as the data protection and recoverability design of Exchange Server 2013 with vSphere Data Protection and vSphere Site Recovery Manager.

Below is a list of the topics and focus areas of the reference architecture:

  •   Illustrates Virtual SAN performance using Exchange Jetstress.
  •   Shows the minimal impact of Exchange Server backup and restore on the production environment in a consolidated Virtual SAN environment.
  •   Includes a disaster recovery (DR) solution using VMware vSphere Replication and Site Recovery Manager.
  •   Demonstrates storage performance scalability and resiliency of Exchange 2013 DAG in a virtualized VMware environment backed by Virtual SAN.
  •   Describes Virtual SAN best practice guidelines for preparing the vSphere platform for running Exchange Server 2013. Guidance is included for CPU, memory, storage, and networking configuration leveraging the existing VMware best practices for Exchange 2013.

Adding VMware Virtual SAN to the Exchange 2013 architecture aims to further this evolution by providing highly scalable, reliable, and high-performance storage using cost-effective hardware, specifically directly attached disks in ESXi hosts. Virtual SAN embodies a new storage management paradigm that automates or eliminates many of the complex management workflows that exist in traditional storage systems today. Virtual SAN enables IT administrators to easily deploy and administer Microsoft Exchange 2013 on VMware vSphere while still maintaining high availability and reducing costs using a shared infrastructure hosted on ESXi.

You can access and download the new reference architecture white paper directly from here -> Microsoft Exchange 2013 on Virtual SAN 6.1 Reference Architecture.

– Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVol) and other Storage and Availability technologies, as well as vSphere Integrated OpenStack (VIO), and Cloud-Native Applications (CNA) be sure to follow me on Twitter: @PunchingClouds

I’ve been waiting and looking forward to this moment for some time now: the Dell FX2 server platform is now officially listed in the VMware Compatibility Guide. Customers can now easily choose, from a list of available Ready Nodes, one of the most powerful converged server platforms available today.

I have used the FX2 server platform for a number of Virtual SAN projects and use cases, ranging from VDI, management clusters, DMZ, and business-critical applications (Microsoft Exchange, Microsoft SQL, Oracle) to multi-site Stretched Cluster deployments, and the platform has delivered the goods every single time.

Dell’s FX2 server platform is an ideal solution for VMware’s hyper-converged infrastructure (HCI) with VMware Virtual SAN. The FX2 value proposition complements the principal values of Virtual SAN: simplicity, ease of management, scalability, and cost effectiveness.

The Dell FX2 Virtual SAN Ready Nodes are now available in the following models and configurations:




One of the primary goals of the Storage and Availability team at VMware is to continuously validate business-critical applications running on Virtual SAN. It is of the utmost importance for us to deliver the necessary information to customers so they feel confident about the storage platform they are considering or planning to use to run their business-critical applications. At the same time, we want to provide the necessary data points for them to understand how the platform will deliver the capacity, performance, and availability services demanded by their applications.

Below is a sample of the information that can be found in a performance study of SAP IQ, a mission-critical application, running on VMware Virtual SAN.

SAP IQ is an intuitive, cost-effective, and highly optimized RDBMS that is fast and efficient for extreme-scale data warehousing and big data analytics. SAP IQ is a distributed application with multiplex nodes, which may have different roles with different capabilities. This is unlike other database cluster architectures, which usually follow either a shared-everything or shared-nothing architecture. The multiplex server configuration can be described as an “asymmetrical cluster.” One node is designated as the coordinator; the remaining nodes are query nodes and may be either Readers or Writers. In addition to its role of handling transaction management, the coordinator can also serve as a Reader or Writer in the multiplex.
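To picture the roles described above, here is a small illustrative sketch (not SAP IQ code) that models a multiplex with one coordinator plus Reader and Writer query nodes; the node names and fields are made up for the example.

```python
# Illustrative model of SAP IQ multiplex roles: one coordinator (which may also
# read or write) and remaining query nodes configured as Readers or Writers.

from dataclasses import dataclass
from enum import Enum

class QueryRole(Enum):
    READER = "reader"
    WRITER = "writer"

@dataclass
class MultiplexNode:
    name: str
    query_role: QueryRole
    is_coordinator: bool = False  # exactly one coordinator per multiplex

multiplex = [
    MultiplexNode("iq-node-1", QueryRole.WRITER, is_coordinator=True),
    MultiplexNode("iq-node-2", QueryRole.WRITER),
    MultiplexNode("iq-node-3", QueryRole.READER),
    MultiplexNode("iq-node-4", QueryRole.READER),
]

# Sanity check: the multiplex has a single designated coordinator.
assert sum(n.is_coordinator for n in multiplex) == 1
```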

Distributed Query Processing uses the available memory and CPU resources of all available nodes to process queries. Performance is therefore determined by the overall workload in the cluster as a whole at any given time. In a single run of a long-running query, the work distribution may change over the course of a query execution as the load balance changes across worker nodes.
The node at which a query is submitted becomes the leader node for that query, and the remaining nodes assume the worker role. Any node in the cluster can be the leader node; similarly, a worker node is any node that is capable of accepting distributed query processing work. Work is performed by threads running on both the leader and worker nodes, and intermediate results are transmitted between nodes by one of two mechanisms: through a shared disk space, or over an inter-node network.
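The leader/worker flow can be sketched in a few lines of illustrative Python; the round-robin work split and in-memory result passing below simply stand in for SAP IQ’s shared-disk or inter-node transport and are not the actual implementation.

```python
# Illustrative distributed query processing flow: the node receiving the query
# becomes its leader, the other nodes act as workers, and the leader combines
# the intermediate results. Work distribution and transport are simplified.

from typing import List

def run_distributed_query(nodes: List[str], submitted_at: str,
                          work_units: List[int]) -> int:
    leader = submitted_at
    workers = [n for n in nodes if n != leader]
    participants = [leader] + workers

    # Distribute work units round-robin across the leader and workers.
    assignments = {node: [] for node in participants}
    for i, unit in enumerate(work_units):
        assignments[participants[i % len(participants)]].append(unit)

    # Each node "processes" its units; intermediate results return to the leader.
    partial_results = [sum(units) for units in assignments.values()]
    return sum(partial_results)  # leader combines the intermediate results

nodes = ["iq-node-1", "iq-node-2", "iq-node-3", "iq-node-4"]
print(run_distributed_query(nodes, submitted_at="iq-node-2",
                            work_units=list(range(100))))
```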