In today’s special webcast event, VMware officially announced the release of VMware Horizon 6.0. This version is designed to meet the demands of today’s mobile workforce and is optimized for Software-Defined Datacenter architectures and their operating models.

The announcement was packed with great new features and capabilities across the entire Horizon Suite of products, but one of my personal favorite announcements was around the support of Virtual SAN storage policies.

This new release delivers an unmatched level of integration with Virtual SAN by leveraging all of the key benefits Virtual SAN has to offer:

  • Radically simple management and configuration
  • Storage Policy Based Management framework
  • A foundation of performance, capacity, and resilience
  • Linear scalability (scale up or scale out)

By leveraging vSphere’s new policy-driven control plane and the Storage Policy Based Management framework, Horizon 6.0 can guarantee performance and service levels for virtual desktops through VM Storage Policies that are defined according to each desktop’s storage capacity, performance, and availability requirements.

Horizon 6.0 automatically deploys a set of VM storage policies for virtual desktops onto vCenter Server. The policies are automatically and individually applied per disk (each disk is a Virtual SAN object) and maintained throughout the lifecycle of the virtual desktop. The policies and their respective performance, capacity, and availability characteristics are listed below, and summarized in a short sketch after the list:

  • VM_HOME - Number of disk stripes per object: 1; Number of failures to tolerate: 1. This corresponds to the default Virtual SAN policy.
  • OS_Disk - Number of disk stripes per object: 1; Number of failures to tolerate: 1. Again, this is the default policy.
  • REPLICA_DISK - Number of disk stripes per object: 1; Number of failures to tolerate: 1; Flash read cache reservation: 10%. This policy dedicates a portion of the SSD or flash capacity to the replica disk, in order to provide greater caching for the expected level of reads that this disk will experience.
  • Persistent Disk - Number of disk stripes per object: 1; Number of failures to tolerate: 1; Object space reservation: 100%. This policy ensures that this type of disk is guaranteed all of the space it requires.
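
For readers who want the mappings at a glance, here is a minimal Python sketch that models the four policies as plain data. The dictionary keys are illustrative stand-ins for the Virtual SAN capabilities described above, not the actual SPBM or Horizon identifiers.

# Illustrative model of the VM storage policies Horizon 6.0 creates for
# Virtual SAN desktops. Key names are descriptive placeholders, not the
# real SPBM capability identifiers.
HORIZON_VSAN_POLICIES = {
    "VM_HOME":         {"stripes_per_object": 1, "failures_to_tolerate": 1},
    "OS_DISK":         {"stripes_per_object": 1, "failures_to_tolerate": 1},
    "REPLICA_DISK":    {"stripes_per_object": 1, "failures_to_tolerate": 1,
                        "flash_read_cache_reservation_pct": 10},
    "PERSISTENT_DISK": {"stripes_per_object": 1, "failures_to_tolerate": 1,
                        "object_space_reservation_pct": 100},
}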

The following video illustrates the new Horizon 6.0 integration with Virtual SAN policies:

The combination of Horizon 6.0 and Virtual SAN provides customers with the ability to deploy persistent and non-persistent virtual desktops without the need for a traditional SAN.

By combining the lower cost of server-based storage with the availability benefits of a shared datastore, plus the added punch of SSD-based performance acceleration, Virtual SAN yields major cost savings for the overall implementation of a VDI solution.

- Enjoy

For future updates, be sure to follow me on Twitter: @PunchingClouds

A question that I’ve been asked very often is about the behavior and logic of the witness component in Virtual SAN. Apparently this is somewhat of a cloudy topic, so I wanted to take the opportunity to answer it here for those looking for more details ahead of the official white paper, where the content of this article is covered in greater depth. So be on the lookout for that.

The behavior and logic I’m about to explain is 100% transparent to the end user, and there is nothing to be concerned about with regard to the layout of the witness components; this behavior is managed and controlled by the system. The intent here is simply to help you understand how many witness components you may see and why.

Virtual SAN objects are composed of components that are distributed across the hosts of a vSphere cluster configured with Virtual SAN. These components are stored in distinct combinations of disk groups within the Virtual SAN distributed datastore. Components are transparently assigned caching and buffering capacity from flash-based devices, with their data “at rest” on the magnetic disks.

Witness components are part of every storage object. The Virtual SAN witness components contain object metadata, and their purpose is to serve as tiebreakers whenever availability decisions have to be made in the Virtual SAN cluster, in order to avoid split-brain behavior and satisfy quorum requirements.

Virtual SAN Witness components are defined and deployed in three different ways:

  • Primary Witness
  • Secondary Witness
  • Tiebreaker Witness

Primary Witnesses: A configuration needs at least (2 * FTT) + 1 nodes in the cluster to be able to tolerate FTT node or disk failures. If, after placing all the data components, the configuration does not span the required number of nodes, primary witnesses are placed on exclusive nodes until there are (2 * FTT) + 1 nodes in the configuration.

Secondary Witnesses: Secondary witnesses are created to make sure that every node has equal voting power towards quorum. This is important because every node failure should affect the quorum equally. Secondary witnesses are added so that every node gets an equal number of components; this includes the nodes that hold only primary witnesses. The total count of data components plus witnesses on each node is equalized in this step.

Tiebreaker witness: If, after adding primary and secondary witnesses, we end up with an even number of total components (data + witnesses) in the configuration, then we add one tiebreaker witness to make the total component count odd.
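
To make these three rules concrete, here is a minimal Python sketch of the placement logic as I just described it. It is an illustrative model only, not the actual Virtual SAN code; the function name and node labels are made up for the example. It takes the number of data components placed on each node plus the FTT setting, and returns how many witnesses of each type would be created.

def plan_witnesses(data_components_per_node, ftt):
    """Illustrative model of the witness rules above (not actual Virtual SAN code).

    data_components_per_node: dict mapping node name -> data component count
    ftt: the Number of Failures to Tolerate policy value
    """
    votes = dict(data_components_per_node)
    witnesses = {"primary": 0, "secondary": 0, "tiebreaker": 0}

    # Primary witnesses: the configuration needs at least 2*FTT+1 nodes,
    # so any shortfall is made up with witnesses on exclusive nodes.
    missing_nodes = max(0, (2 * ftt + 1) - len(votes))
    for i in range(missing_nodes):
        votes["witness-only-node-%d" % i] = 1
    witnesses["primary"] = missing_nodes

    # Secondary witnesses: equalize the vote count across all nodes,
    # including nodes that hold only primary witnesses.
    top = max(votes.values())
    for node, count in votes.items():
        witnesses["secondary"] += top - count
        votes[node] = top

    # Tiebreaker witness: keep the total component count odd.
    if sum(votes.values()) % 2 == 0:
        witnesses["tiebreaker"] = 1

    return witnesses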

Let me apply the definitions and logic described above to two real-world scenarios and explain why the witness components were placed the way they were:

  • Scenario 1:  VM with a 511 GB VMDK with Failures to Tolerate 1

Note: Virtual SAN limits the size of an individual component to 255 GB, so objects greater than 255 GB are split into multiple components that can be spread across hosts. This explains the behavior illustrated in both examples below, where a RAID 1 configuration mirrors multiple concatenated RAID 0 sets.

Example 1

There is only one witness deployed in this particular scenario. Why?

In this particular scenario, all of the RAID 0 stripe components were placed on different nodes. Take a closer look at the host names.

Now, why did it happen that way, and how does that relate to the witness types described above?

When the witness calculation is performed in this scenario, the witness component logic comes into play as listed below:

  • Primary witnesses: Data components are spread across 4 nodes (which is greater than 2*FTT+1). So we do not need primary witnesses.
  • Secondary witnesses: Since each node participating in the configuration has exactly one component, we do not need any secondary witnesses to equalize votes.
  • Tiebreaker witness: Since the total component count in the configuration is 4, an even number, we need one tiebreaker witness (the quick check below confirms it).
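
Running the plan_witnesses sketch from above against this layout reproduces the result (the host names are taken from the scenario 2 listing and assumed to be the same cluster):

layout = {"vsan-host-1.pml.local": 1, "vsan-host-2.pml.local": 1,
          "vsan-host-3.pml.local": 1, "vsan-host-4.pml.local": 1}
print(plan_witnesses(layout, ftt=1))
# {'primary': 0, 'secondary': 0, 'tiebreaker': 1}  -> one witness in total
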
  • Scenario 2: VM with a 515 GB VMDK with Failures to Tolerate 1

Scenario 2

In this scenario, three witnesses were deployed. Why?

In this particular scenario, some of the RAID 0 stripe components were placed on the same nodes. Take a closer look at the host names. The components are laid out in the following configuration:

  • 2 components on node vsan-host-1.pml.local
  • 2 components on node vsan-host-4.pml.local
  • 1 component on node vsan-host-3.pml.local
  • 1 component on node vsan-host-2.pml.local

When the witness calculation is performed in this scenario, the witness component logic comes into play as listed below:

  • Primary witnesses: Data components are spread across 4 nodes (which is greater than 2*FTT+1), so we do not need primary witnesses.
  • Secondary witnesses: Since two nodes have two votes each and two nodes have only one vote each, we need to add one vote (witness) on each of the following nodes:
    • vsan-host-3.pml.local
    • vsan-host-2.pml.local
  • Tiebreaker witness: After adding the two witnesses above, the total component count in the configuration is 8 (6 data + 2 witnesses), an even number, so we need one tiebreaker witness, and that is the third witness (the quick check below confirms it).
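
Feeding this layout into the same plan_witnesses sketch reproduces the three witnesses:

layout = {"vsan-host-1.pml.local": 2, "vsan-host-4.pml.local": 2,
          "vsan-host-3.pml.local": 1, "vsan-host-2.pml.local": 1}
print(plan_witnesses(layout, ftt=1))
# {'primary': 0, 'secondary': 2, 'tiebreaker': 1}  -> three witnesses in total
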
For the most part, people expect the witness count to depend on the Number of Failures to Tolerate policy (0 to 3). In reality, the witness count depends entirely on how the components and data are placed; it is not determined by any given policy value.

Again, as I said at the very beginning of the article, this behavior is 100% transparent to the end user and there is nothing to be concerned about, since it is managed and controlled by the system.

- Enjoy

For future updates, be sure to follow me on Twitter: @PunchingClouds

For the third article in the Virtual SAN interoperability series, I want to showcase the interoperability between Virtual SAN, vSphere Replication, and vCenter Site Recovery Manager. This demonstration presents one of the many possible ways in which customers can make use of vSphere Replication and vCenter Site Recovery Manager with Virtual SAN.

In the demonstration below, I perform a fully automated, seamless planned migration of virtual machines hosted on a traditional SAN infrastructure onto a Virtual SAN environment. This example shows how simply this type of operation can be achieved using existing vSphere tools and technologies that integrate with Virtual SAN.

This operation is extremely useful and efficient when considering migrating virtual machines onto a Virtual SAN cluster. There have been discussions about Virtual SAN being a solution for greenfield environments only, and that is absolutely inaccurate. As demonstrated in the video below, this is one approach that can be used to migrate existing virtual machines onto Virtual SAN automatically, fully orchestrated and without the risk of data loss.

vSphere Replication provides the virtual machine replication capabilities, and vCenter Site Recovery Manager orchestrates the procedure. The operation can be performed solely with vSphere Replication, but then some portions of the procedure would have to be conducted manually.

While this demonstration is focused on a planned migration operation within a single site, the same example and capabilities are applicable to the following scenarios:

  • Planned migration across sites
  • Disaster Recovery

From the tools and solution perspective, the difference between a “planned migration” operation and a “disaster recovery” operation is a single click, which is made in vCenter Site Recovery Manager.

Here is a list of some of the key benefits vSphere Replication and vCenter Site Recovery Manager deliver to Virtual SAN:

  • Asynchronous replication – 15-minute RPO
  • VM-centric protection
  • Automated DR operations & orchestration
  • Automated failover – execution of user-defined plans
  • Automated failback – reverses the original recovery plan
  • Planned migration – ensures zero data loss
  • Point-in-Time Recovery – multiple recovery points
  • Non-disruptive testing – automated tests on isolated networks

 

- Enjoy

A special thanks to Graham Daly, VMware’s multimedia program manager, for adding the nice voice-over touch to my recording, and to @VMken and @jhuntervmware from the TM Storage & Availability team for validating my demo recording.

For future updates, be sure to follow me on Twitter: @PunchingClouds

For the second article in the Virtual SAN interoperability series, I showcase the interoperability between Virtual SAN and vCloud Automation Center. This demonstration presents one of the many ways in which vCloud Automation Center can be used to provision virtual machines onto a Virtual SAN infrastructure via a service catalog.

In this scenario, I have created and published three vCloud Automation Center blueprints to a service catalog. All blueprints are accessible to all users in a private cloud. Each blueprint was created from virtual machine templates configured with a VM Storage Policy assigned at the vSphere level.

A VM Storage Policy is a vSphere construct that stores storage capabilities so they can be applied to virtual machines or to individual VMDKs. In this case, the capabilities are based on the capacity, availability, and performance offerings of Virtual SAN.

In the demonstration, the focus is on deploying a virtual machine with the highest level of availability. The availability configuration of a virtual machine or of individual VMDKs is defined by the “Number of Failures to Tolerate” storage capability.

The service catalog contains three virtual machine offerings, each with a different “Number of Failures to Tolerate” policy, as defined below (the short sketch after the list shows what each setting implies for object placement):

  • Default Availability FTT=1
  • Medium Availability FTT=2
  • High Availability FTT=3
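
As a rough guide to what each setting implies for object placement, here is a small Python sketch. It is an illustration only, not the vCloud Automation Center or SPBM API, and it assumes the standard Virtual SAN relationships: FTT+1 mirror copies of the data and at least 2*FTT+1 hosts, as discussed in the witness article above.

def vsan_ftt_requirements(ftt):
    """Illustrative summary of what Number of Failures to Tolerate implies."""
    return {
        "mirror_copies": ftt + 1,      # full replicas of the object's data
        "minimum_hosts": 2 * ftt + 1,  # hosts needed to tolerate `ftt` failures
    }

for blueprint, ftt in [("Default Availability", 1),
                       ("Medium Availability", 2),
                       ("High Availability", 3)]:
    print(blueprint, vsan_ftt_requirements(ftt))
# Default Availability {'mirror_copies': 2, 'minimum_hosts': 3}
# Medium Availability {'mirror_copies': 3, 'minimum_hosts': 5}
# High Availability {'mirror_copies': 4, 'minimum_hosts': 7}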

As a result of the deployment, you can see that the virtual machine’s objects are distributed across four hosts in the cluster in order to satisfy the availability requirements. It is important to point out that all of the configuration used for the demonstration exists within vCloud Automation Center; no customization was used as part of this implementation. This is just one of the many ways vCloud Automation Center can be used with Virtual SAN.

While vCloud Automation Center provides partial integration capabilities by default, a lot more can be done with custom workflows and advanced configurations.

Key benefits vCloud Automation Center provides to Virtual SAN:

  • Centralized provisioning, governance, and infrastructure management capabilities
  • Simple, self-service consumption capabilities
  • Entitlement compliance monitoring and enforcement
  • Leverage of existing business processes and tools
  • Delegated control of resources

 

- Enjoy

For future updates, be sure to follow me on Twitter: @PunchingClouds

In an effort to continue providing information about Virtual SAN and its capabilities via recorded demos, I’ve created a new set of Virtual SAN walkthrough demos.

The walkthrough demos are available online for everyone interested in learning how Virtual SAN works, what its capabilities are, and how it interoperates with other VMware products and solutions. To access the Virtual SAN walkthrough demos, use the link below:

For more walkthrough demos continue to check the site as I will be updating the walkthrough demo catalog frequently.

- Enjoy

For future updates, be sure to follow me on Twitter: @PunchingClouds