Hyperconverged Secondary Storage VMworld 2017 Sessions

Oh yeah, ladies and gentlemen, it’s time to get down and #GetYourNerdOn in Las Vegas! VMworld 2017 is right around the corner. There are so many things to be excited about from all aspects of the industry. It will be great to see and catch up with friends, share ideas, and learn from everyone attending the event.

For the past six years at VMware, I’ve delivered some pretty groundbreaking solutions and demonstrations that have been showcased during the executive keynotes. This year I have something very special and impactful to showcase, and trust me when I say you do not want to miss it.

I will be presenting a couple of sessions that anyone interested in today’s hottest storage industry topic, “Hyperconverged Secondary Storage,” should not miss. The sessions are filling up fast, so make sure you sign up ASAP to reserve your spot:

Session ID: LHC3390BUS – Enable data mobility from on-premises to the cloud with Cohesity DataPlatform
Monday, Aug 28, 5:30PM – 6:30PM – Register Here
Enterprises are looking to leverage the public cloud to extend their private data centers for application mobility and data storage. But moving applications and managing data in the cloud is hard. Cohesity makes cloud easy! With Cohesity, enterprises can consolidate all their secondary data on one web-scale platform on-premises. Cohesity enables enterprises to seamlessly integrate with all the leading public clouds (Microsoft Azure, AWS, Google Cloud) and extend their data to the public cloud for archival, tiering and replication. Cohesity manages the data in the public cloud to support disaster recovery, provisioning of test/dev instances, and analytics. Attend this session to learn about how Cohesity’s cloud integration provides you with a hybrid cloud data protection and mobility strategy that spans on-premises and the cloud.

Rawlinson Rivera, Chief Technology Officer, Global Field, Cohesity
Ben Price, Director, IT, UCSB

Session ID: PBO2073BU – Enterprise Protection and Reliability for VMware Cloud Foundation with Cohesity
Tuesday, Aug 29, 2:30PM – 3:30PM – Register Here
VMware Cloud Foundation unifies the most powerful software-defined data center platform in the enterprise, providing a common control plane for IT across private, public, and hybrid clouds. The infrastructure stack is composed of several virtual appliances that work together to provide the required services. To keep pace with escalating business demands for service availability, security, and recoverability, enterprise organizations need modern solutions capable of supporting service-oriented IT models and providing protection and fast recovery for VMware Cloud Foundation across private, public, and hybrid clouds. While failures are a given in modern network infrastructures, lengthy outages are conditions that most modern enterprises cannot tolerate in today’s economy. I will demonstrate something here that has never been done in an enterprise data center. You don’t want to miss it!!!

Alberto Farronato, Sr. Director Product Marketing, VMware
Rawlinson Rivera, Chief Technology Officer, Global Field, Cohesity

Session ID: VIRT1630BU – Wrangling and Taming Your Database’s Storage, Availability, and Disaster Recovery Monsters
Thursday, Aug 31, 10:30AM – 11:30AM – Register Here
Storage infrastructures can often be the most complex and confusing challenges faced by data professionals and architects in today’s enterprises. Every organization strives to achieve the right balance among the multitudes of necessary requirements to ensure optimal performance, security, reliability, availability, and (yes) protection of important data. Although failures are a given in modern network infrastructures, frequent and lengthy data/database performance degradation and outages are conditions that most modern enterprises cannot tolerate in today’s economies. This session provides a comprehensive discussion of the ways data professionals and architects can design a VMware vSphere infrastructure to provide the most resilient, risk-averse, and optimal designs for storage infrastructures and mission-critical databases.

Deji Akomolafe, Staff Solutions Architect, VMware
Rawlinson Rivera, Chief Technology Officer, Global Field, Cohesity

I hope to see everyone at VMworld and definitely during the breakout sessions. You don’t want to miss my big surprise.

– Enjoy

For future updates about Cohesity, Hyperconverged Secondary Storage, Cloud Computing, Networking, VMware vSAN, vSphere Virtual Volumes (VVol), vSphere Integrated OpenStack (VIO), and Cloud-Native Applications (CNA), and anything in our wonderful world of technology be sure to follow me on Twitter: @PunchingClouds.

Cohesity DataProtect 5.0 Multi-Hypervisor Support: Microsoft Hyper-V

This is an exciting week for everyone at Cohesity: we have officially announced our Orion 5.0 release, which is filled with new features and capabilities as well as expanded support for new hardware, applications, and virtualization platforms, including Microsoft Hyper-V. It is gratifying to know that our modern architecture and our end-to-end data protection and recovery application, fully converged on top of the Cohesity DataPlatform, will be available to current and future customers using Microsoft Hyper-V as their virtualization platform.

As the adoption of Microsoft Hyper-V increases in the enterprise, organizations are faced with the challenge of identifying a modern solution that can protect and recover their critical business information and applications in a timely and efficient manner. Today’s business requirements demand shorter recovery points and faster recovery times to accommodate growing business needs.

Cohesity has been satisfying these requirements for shorter recovery points and faster recovery times for customers using VMware vSphere as their virtualization platform. Now we can deliver the same value for Hyper-V customers and their virtual infrastructures. Cohesity DataProtect will now consolidate the end-to-end data protection and recovery infrastructure for Hyper-V (including target storage, backup, replication, disaster recovery, and cloud tiering) and eliminate data protection and recovery silos by converging all backup infrastructure components on a single unified scale-out platform.

Cohesity’s implementation and support for Hyper-V tightly integrates Microsoft’s and Cohesity’s technologies to provide customers the same simplified management, efficiency, and value we have been providing for VMware vSphere customers. Cohesity provides integration and support for two versions of Microsoft’s virtualization platform: Hyper-V 2012 R2 and Hyper-V 2016. Let me highlight some of the points of integration and the specifics of each version.

For both supported versions of Hyper-V, we use Microsoft’s native PowerShell and WMI APIs to manage communications and subsystem interactions. PowerShell is used when interacting with Microsoft System Center Virtual Machine Manager (SCVMM), and WMI is used for interactions with Hyper-V hosts. For modern, intelligent space-efficiency features and capabilities, we combine Microsoft’s and Cohesity’s native technologies to deliver optimal data protection and recovery benefits for Hyper-V:

  • Volume Shadow Copy (VSS)
  • Resilient Change Tracking (RCT)
  • Cohesity Change Block Tracking (CBT)
  • Cohesity Ephemeral Dynamic Helper Agent

Because Cohesity supports two different versions of Hyper-V, the technologies used to interact with and manage the necessary subsystems differ slightly between the two versions.

Hyper-V 2012 R2

With Hyper-V 2012 R2 – WMI, VSS and Cohesity Change Tracking are utilized:

  • WMI APIs are used to discover the VM properties, and the Cohesity CBT driver tracks the changes within the virtual disk files (VHD and VHDX).
  • The Cohesity Ephemeral Dynamic Helper Agent interacts with VSS to trigger the VM snapshots so that only the changed areas are backed up.
  • The captured data is then transferred through Cohesity’s secure layer from the primary storage system onto the Cohesity DataPlatform, where the virtual disk files (VHD and VHDX) are kept fully hydrated.
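The 2012 R2 flow above can be sketched as a toy changed-block tracker (illustrative only, not Cohesity code): a CBT-style driver marks dirty extents on a virtual disk, and each backup ships only those extents to a target that keeps a fully hydrated copy of the disk image.

```python
BLOCK = 4  # toy block size in bytes

class TrackedDisk:
    def __init__(self, size):
        self.data = bytearray(size)
        self.dirty = set()            # block indexes changed since last backup

    def write(self, offset, payload):
        self.data[offset:offset + len(payload)] = payload
        for b in range(offset // BLOCK, (offset + len(payload) - 1) // BLOCK + 1):
            self.dirty.add(b)

    def incremental_backup(self, target):
        # Quiescing (the VSS snapshot) would happen here; then ship only
        # the dirty blocks and reset the tracking state.
        for b in sorted(self.dirty):
            target[b * BLOCK:(b + 1) * BLOCK] = self.data[b * BLOCK:(b + 1) * BLOCK]
        changed = len(self.dirty)
        self.dirty.clear()
        return changed

disk = TrackedDisk(32)
replica = bytearray(32)              # fully hydrated copy on the backup target
disk.write(0, b"boot")
disk.incremental_backup(replica)     # first pass ships the written block
disk.write(8, b"log1")
sent = disk.incremental_backup(replica)  # second pass ships only one block
```

The key property is that the second backup transfers a single block, yet the replica remains byte-identical to the source disk.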

Hyper-V 2016

With Hyper-V 2016 – WMI and RCT (Resilient Change Tracking) are utilized:

  • Cohesity uses the WMI APIs to trigger and manage snapshot creation and deletion, and integrates with Microsoft’s RCT to back up just the changed blocks.
  • The data is transferred through Cohesity’s secure layer from the primary storage system onto the Cohesity DataPlatform, and the virtual disk files are kept fully hydrated.
  • The integration is more elegant because Microsoft’s Resilient Change Tracking eliminates the need for additional components and simplifies the entire process of identifying changes within the disks.
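The 2016 flow differs mainly in who tracks the changes: the hypervisor keeps a resilient per-disk change-tracking ID, and the backup application simply asks for everything changed since a reference ID. A hypothetical sketch (the class and method names below are illustrative, not real WMI classes):

```python
class RctDisk:
    def __init__(self):
        self.changes = []            # (rct_id, extent) pairs in write order
        self.next_id = 0

    def write(self, extent):
        self.changes.append((self.next_id, extent))

    def checkpoint(self):
        # Taking a checkpoint advances the resilient change-tracking ID;
        # writes after this point are tagged with the new ID.
        self.next_id += 1
        return self.next_id

    def query_changes(self, since_id):
        # Unique extents written since checkpoint `since_id`.
        return sorted({ext for rct_id, ext in self.changes if rct_id >= since_id})

disk = RctDisk()
disk.write((0, 4096))                # pre-dates the full backup
base_id = disk.checkpoint()          # full backup reference point
disk.write((8192, 4096))
disk.write((8192, 4096))             # rewritten extent is counted once
disk.checkpoint()
delta = disk.query_changes(base_id)  # only extents changed since base
```

Because the hypervisor owns the tracking state, the backup application needs no driver of its own, which is the simplification the bullet above describes.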

As an extra benefit, once the VMs are stored on our DataPlatform, customers can use our native cloud integration services and capabilities with Microsoft Azure for archive, DR, test/dev, analytics, and other potential use cases.

As illustrated, Cohesity’s implementation and support of Hyper-V operates at the subsystem level without relying on agents installed in the guest operating system. Also, Cohesity’s Ephemeral Dynamic Helper Agents for SCVMM and Hyper-V hosts are fully managed by the Cohesity cluster and are automatically upgraded when new software revisions are needed.

– Enjoy


Cohesity SpanFS: The Difference Maker in The Enterprise and Secondary Storage Architectures

With the Orion 5.0 release, Cohesity announced the introduction of SpanFS, a new file system uniquely designed to consolidate and manage all secondary storage at scale. SpanFS and its architecture are the core of the Cohesity DataPlatform that enables enterprises to unify the control of their secondary data with web-scale capabilities.

Enterprise storage architectures typically focus on providing specialized capabilities and scalability that depend on vendors’ proprietary hardware, along with space-efficiency features such as compression, deduplication, and snapshots for resiliency, and standardized file interfaces such as NFS and SMB. Cloud storage architectures, developed by hyperscale companies like Google and Amazon, focus on delivering scale-out software-defined solutions that run on commodity x86 hardware with robust resiliency to tolerate hardware failures. But they tend to rely on proprietary protocols and APIs for data access.
Today’s enterprise organizations are in desperate need of the best of both storage architectures. Enterprise organizations are looking to move onto software-defined, web-scale solutions that run on commodity x86 hardware, just like cloud storage. Web-scale capabilities provide multiple advantages such as ‘pay-as-you-grow’ consumption, always-on availability, non-disruptive upgrades (instead of forklift upgrades), simpler management, and lower costs.

Enterprise storage solutions are traditionally deployed into segregated management silos because of different use cases and requirements. Typically, purpose-built file systems are introduced that depend on vendor-specific proprietary features.

For example, purpose-built backup appliances (PBBA) provide in-line variable-length deduplication to maximize space efficiency, but at the expense of random IO performance. Test/dev filers, such as NetApp, provide much better random IO performance and great snapshots, but can’t afford the performance overhead of inline deduplication.

To effectively consolidate secondary storage silos, enterprises need a file system which is simultaneously able to handle the requirements of multiple use cases. It must provide standard NFS, SMB and S3 interfaces, robust IO performance for both sequential and random IO, inline variable length deduplication, and scalable snapshots. And it must provide native integration with the public cloud to support a multicloud data fabric, enabling enterprises to send data to the cloud for archival or more advanced use cases like disaster recovery, test/dev, and analytics. All of this must be done on a web-scale architecture to manage the ever-increasing volumes of data effectively.

SpanFS was specifically designed to manage all secondary data, including backups, files, objects, test/dev, and analytics data, on a web-scale platform that spans from the edge to the cloud, overcoming the limitations of the logical and physical constructs of today’s enterprise storage and cloud storage architectures. SpanFS combines the best of both enterprise and cloud storage architectures simultaneously. And it’s the only file system in the industry that simultaneously provides NFS, SMB, and S3 interfaces, global deduplication, and unlimited snaps and clones on a web-scale platform.

SpanFS Architecture

SpanFS is an entirely new file system designed for secondary storage consolidation.

Access Layer – SpanFS exposes industry-standard, globally distributed NFS, SMB, and S3 interfaces alongside our built-in DataProtect application. All volumes or object buckets can be configured simultaneously on a single Cohesity cluster. The volumes are completely distributed with no single choke point. Each of these volumes benefits from all the unique SpanFS capabilities such as global deduplication, encryption, replication, unlimited snapshots, and file/object-level indexing and search.

IO Engine – manages IO operations for all the data written to or read from the system. It detects random vs. sequential IO profiles, splits the data into chunks, performs deduplication, and directs the data to the most appropriate storage tier (SSD, HDD, cloud storage) based on the IO profile. To track and manage the data sitting across nodes, Cohesity also had to build an entirely new metadata store.
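The chunk-and-dedupe step can be illustrated with a minimal sketch (assumptions: fixed-size chunks for brevity, whereas SpanFS performs variable-length deduplication; SHA-256 fingerprints stand in for the real chunk index):

```python
import hashlib

CHUNK = 8  # toy fixed chunk size in bytes

def ingest(data: bytes, store: dict):
    """Split `data` into chunks and return the list of fingerprints,
    writing each unique chunk into `store` exactly once."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)   # dedupe: skip chunks already stored
        recipe.append(fp)
    return recipe

store = {}
r1 = ingest(b"AAAAAAAABBBBBBBB", store)   # two distinct chunks
r2 = ingest(b"AAAAAAAACCCCCCCC", store)   # first chunk deduped against r1
# store now holds 3 unique chunks even though 4 were ingested
```

The recipe of fingerprints is all that is needed to rebuild the original stream, which is why only the chunk index has to be global for deduplication to be global.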

Metadata Store – incorporates a consistent, distributed NoSQL store for fast IO operations at scale. SnapTree provides a distributed metadata structure based on B+ tree concepts and is unique in its ability to support unlimited, frequent snapshots with no performance degradation. SpanFS has QoS controls built into all layers of the stack to support workload- and tenant-based QoS, and it can replicate, archive, and tier data to another Cohesity cluster or the cloud.
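SnapTree’s internals aren’t spelled out here, but the core copy-on-write idea behind cheap snapshots can be shown with a toy persistent tree: a snapshot is just a root pointer, and an update copies only the nodes on the path to the changed key, so old snapshots stay intact and taking one is O(1) regardless of how many already exist. (This is an illustrative analogue, not SnapTree itself.)

```python
class Node:
    __slots__ = ("key", "val", "left", "right")
    def __init__(self, key, val, left=None, right=None):
        self.key, self.val, self.left, self.right = key, val, left, right

def insert(root, key, val):
    """Return a NEW root; shares all unmodified subtrees with `root`."""
    if root is None:
        return Node(key, val)
    if key < root.key:
        return Node(root.key, root.val, insert(root.left, key, val), root.right)
    if key > root.key:
        return Node(root.key, root.val, root.left, insert(root.right, key, val))
    return Node(key, val, root.left, root.right)   # overwrite in the copy

def lookup(root, key):
    while root is not None:
        if key == root.key:
            return root.val
        root = root.left if key < root.key else root.right
    return None

v1 = insert(insert(None, "a", 1), "b", 2)
snap = v1                      # O(1) snapshot: just keep the root pointer
v2 = insert(v1, "a", 99)       # clone + modify; snap is untouched
```

Note that `v2` physically shares the untouched subtree with `v1`, which is why snapshot count has no bearing on update cost.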

Data Store – is responsible for storing data on HDD, SSD, and cloud storage. The data is spread out across the nodes in the cluster to maximize throughput and performance and is protected either with multi-node replication or with erasure coding. Sequential IOs may go straight to HDDs or to SSDs based on QoS policies. Random IOs are directed to a distributed data journal that resides on SSDs. As the data becomes colder, the data store can tier the data down from SSD to HDD. And hot data can be up-tiered to SSD.
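The erasure-coding option mentioned above can be illustrated with the simplest possible scheme, single-parity XOR (real deployments typically use Reed-Solomon codes with configurable parity): data is striped across nodes plus a parity block, so the stripe survives the loss of any one node.

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_blocks):
    """Return the data blocks plus one XOR parity block."""
    return list(data_blocks) + [reduce(xor, data_blocks)]

def recover(stripe, lost):
    """Rebuild the block at index `lost` by XOR-ing the survivors."""
    survivors = [b for i, b in enumerate(stripe) if i != lost]
    return reduce(xor, survivors)

stripe = encode([b"node", b"fail"])   # 2 data blocks + 1 parity block
rebuilt = recover(stripe, lost=0)     # rebuild the first block after a failure
```

Compared with multi-node replication, this stores one extra block per stripe instead of a full second copy, which is the capacity trade-off between the two protection modes.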

Consistent NoSQL Store – The metadata store uses a distributed NoSQL store that keeps the metadata on the SSD tier. It is optimized for fast IO operations, provides data resiliency across nodes, and is continually balanced across all the nodes.
However, the key-value store by itself provides only ‘eventual consistency.’ To achieve strict consistency, the NoSQL store is complemented with the Paxos algorithm.

With Paxos, the NoSQL store offers strictly consistent access to the value associated with each key.
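Full Paxos is beyond a short sketch, but the quorum-overlap rule that makes strict consistency possible on top of replicated storage fits in a few lines (illustrative mechanics, not Cohesity internals): with N replicas, writing to a majority and reading from a majority guarantees every read sees the latest committed version, because any two majorities intersect.

```python
N = 5
MAJORITY = N // 2 + 1
replicas = [{} for _ in range(N)]    # each replica maps key -> (version, value)

def write(key, value, version, targets):
    assert len(targets) >= MAJORITY, "write must reach a majority"
    for i in targets:
        replicas[i][key] = (version, value)

def read(key, targets):
    assert len(targets) >= MAJORITY, "read must reach a majority"
    # Any read majority overlaps every write majority in at least one
    # replica, so the highest version seen is the latest committed one.
    versions = [replicas[i].get(key, (0, None)) for i in targets]
    return max(versions)[1]

write("x", "old", 1, targets=[0, 1, 2])
write("x", "new", 2, targets=[2, 3, 4])   # a different majority of replicas
value = read("x", targets=[0, 1, 3])      # overlaps the second write at node 3
```

Paxos adds the machinery for agreeing on version numbers safely under concurrency and failure, but the overlap argument above is why the result is strict rather than eventual consistency.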

QoS – Quality of Service is designed into every component of the system. As data is processed by the IO Engine, Metadata Store, or Data Store, each operation is prioritized based on QoS. High priority requests are moved ahead in subsystem queues and are given priority placement on the SSD tier.
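The queue-jumping behavior described above can be sketched with a simple priority queue (assumed mechanics for illustration, not Cohesity internals): a high-priority request enqueued late is still serviced first, while requests of equal priority stay in FIFO order.

```python
import heapq
import itertools

class QosQueue:
    HIGH, NORMAL, LOW = 0, 1, 2      # lower number = serviced sooner

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # preserves FIFO within a priority class

    def submit(self, priority, op):
        heapq.heappush(self._heap, (priority, next(self._seq), op))

    def drain(self):
        order = []
        while self._heap:
            _, _, op = heapq.heappop(self._heap)
            order.append(op)
        return order

q = QosQueue()
q.submit(QosQueue.LOW, "background-archive")
q.submit(QosQueue.NORMAL, "backup-write")
q.submit(QosQueue.HIGH, "restore-read")   # enqueued last, serviced first
serviced = q.drain()
```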

Replication and Cloud – SpanFS can replicate data to another Cohesity cluster for disaster recovery and archive data to third-party storage such as tape libraries, NFS volumes, and S3 storage. SpanFS has also been designed to interoperate seamlessly with all the leading public clouds (AWS, Microsoft Azure, Google Cloud). SpanFS makes it simple to use the cloud in three different ways:

  • CloudArchive enables long-term archival to the cloud, providing a more manageable alternative to tape.
  • CloudTier supports data bursting to the cloud. Cold chunks of data are automatically stored in the cloud and can be tiered back to the Cohesity cluster once they become hot.
  • CloudReplicate provides replication to a Cohesity Cloud Edition cluster running in the cloud. The Cohesity cluster in the cloud manages the data to provide instant access for disaster recovery, test/dev, and analytics use cases.
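CloudTier’s hot/cold movement can be modeled in a few lines (the threshold, names, and APIs below are assumptions for illustration): chunks that go cold move to a mock cloud bucket, and a local read of a tiered chunk pulls it back.

```python
import time

COLD_AFTER = 0.05                    # seconds without access -> cold (toy value)
local, cloud, last_access = {}, {}, {}

def put(chunk_id, data):
    local[chunk_id] = data
    last_access[chunk_id] = time.monotonic()

def tier_cold_chunks():
    now = time.monotonic()
    cold = [c for c, t in last_access.items()
            if now - t > COLD_AFTER and c in local]
    for cid in cold:
        cloud[cid] = local.pop(cid)  # cold: keep only the cloud copy

def get(chunk_id):
    if chunk_id not in local:        # hot again: tier the chunk back down
        local[chunk_id] = cloud.pop(chunk_id)
    last_access[chunk_id] = time.monotonic()
    return local[chunk_id]

put("c1", b"rarely used")
put("c2", b"hot data")
time.sleep(0.1)
get("c2")                            # refresh c2 so only c1 goes cold
tier_cold_chunks()                   # c1 is moved to the cloud bucket
```

A read of "c1" afterward would transparently tier it back, which matches the burst-out/burst-back behavior the bullet describes.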

Cohesity designed SpanFS as a web-scale, distributed file system that provides unlimited scale across any number of industry-standard x86 nodes. SpanFS manages data across private data centers and public clouds, spans media tiers, and covers all secondary storage use cases, including data protection, file and object storage, cloud integration, test/dev, and analytics.

– Enjoy

For future updates about Cohesity, Primary and Secondary Storage, Cloud Computing, Networking, Cloud-Native Applications (CNA), and anything in our wonderful world of technology, be sure to follow me on Twitter: @PunchingClouds.

Cohesity Orion 5.0: The Next Level of Hyperconverged Secondary Storage

Today we are announcing the release of Cohesity Orion 5.0, the latest version of our hyperconverged secondary storage platform. This new release is packed with new features, improvements, and capabilities across all layers of the platform. Orion empowers enterprise organizations with a modern data platform that enables them to break away from the inefficient and fragmented silos in the data center and transform their secondary storage infrastructures into modern, scalable, and efficient environments. With Cohesity Orion 5.0, enterprise organizations can consolidate traditional storage silos and move away from the multitude of storage products built on outdated technologies and architectures, centralizing enterprise data protection, file services, object storage, and cloud gateways onto a single web-scale platform with best-in-class security and space-efficiency features.

Data Protection and Instant Recovery for Any Platform – Simplify management with a single UI and policy-based automation. Support for all the leading hypervisors with automated data protection for Microsoft Hyper-V 2012 R2 (with agentless CBT), Hyper-V 2016 (using the new RCT change tracking), Nutanix AHV, and Linux KVM. We also protect any NAS storage, including snapshot-based data protection for Pure Storage FlashBlade, NetApp, and Dell EMC Isilon. Orion provides high-performance NAS backups with parallel tracking of changed data and multi-stream data transfers. Accelerate your recovery points and recovery times while cutting data protection costs by 50%. Integrate with all the leading public clouds for archival, tiering, and replication.

Advanced and Unlimited Object and File Services with Global Search – Provide globally distributed access to all storage abstractions and views on the platform. Offer simultaneous multiprotocol access via NFS, SMB, and S3 to all data stored on the platform. Space-efficiency features like deduplication are globally applied. Orion provides the industry’s only globally deduplicated S3-compatible object storage, indexes all file and object metadata, and offers global search across an entire cluster.

Multicloud Accessibility – Enables organizations to deploy a Cohesity cluster in any public cloud, allowing them to replicate data to and from the cloud, manage information in the cloud, and instantly provision applications for disaster recovery, test/dev, and analytics. Orion enables organizations to recover an entire data center in the public cloud, near-instantaneously and to any point in time, enabling cloud disaster recovery at scale. DataPlatform Cloud Edition (CE) is now generally available for Microsoft Azure and available in the Azure Marketplace. DataPlatform CE is also in limited availability on Amazon Web Services.

New Hyperconverged Storage Nodes – In addition to the C2000 series, a new C3000 dense storage node has been added to our list of certified appliances. Each C3000 node provides up to 183TB of raw capacity in a 2U form factor, almost 2X the storage density of the C2000, and is optimized for large files and objects. Each Cohesity cluster can combine C2000 and C3000 nodes, and Orion provides intelligent data placement across node types based on IO profile and QoS.

Orion 5.0 is an incredible release and a milestone for us. We are just scratching the surface on the way to delivering our vision for hyperconverged secondary storage. Stay tuned; there is more to come. See you all at VMworld 2017 in Las Vegas and Barcelona.

– Enjoy

