Post Nutanix .Next Thoughts

Now that the dust has settled a bit since Nutanix’s .Next annual conference and I’ve had some time to reflect on my time in New Orleans, let’s burn a few minutes of eye time pulling those thoughts out of my brain and putting them in writing; after all, that is what a blog is for, right?  According to the age of my blog and the utter lack of blog posts I’ve done so far… I either have an abundance of space in my head or a slow leak somewhere.  I’m going with the slow leak, you can decide for yourself… now onto why you are really here!

I won’t hit on everything announced, but let’s go with a few that stuck out to me.

Beam

Early this year I spent a few days taking an Amazon AWS course in Cambridge, and I came to MANY conclusions during that class, but one that kept coming up among the crowd of both customers and pre-sales techie types is how unknown their monthly AWS cloud costs will end up being. Until they actually get that bill, it’s somewhat of a guessing game, as there are so many different individual fees that all add up.  I like to equate this to getting a hospital bill.  You know you need all these services and medicines, and every individual object touched, down to the tissues you sneeze into and the ones you don’t remember, ends up on your invoice, but you never have a clue what the bottom line is going to look like until you have it.  At least for AWS (and others soon!) there’s now Beam.  Beam shines a light (tada, name… Beam) into your cloud costs to help you manage them and provide governance around them across providers. This product is something you can subscribe to immediately, and it is also technically the first SaaS offering from Nutanix as they expand beyond HCI infrastructure.  I’m excited to see where this product will be going as more providers are added and it gets integrated into the Nutanix portfolio.  Now if only someone could do the same for my healthcare bills…  See Nutanix’s blog post about it, which is where I “borrowed” the below image.
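To make the hospital-bill analogy concrete, here is a tiny sketch of why the bottom line is so hard to guess: the total is just the sum of many small, separately metered line items. The services and dollar figures below are entirely made up for illustration.

```python
# Hypothetical month of itemized cloud charges (figures are invented).
# Each service is metered separately, so no single line looks scary --
# it's the sum that surprises you.
line_items = {
    "Compute instance-hours": 212.40,
    "Provisioned block storage GB-months": 48.10,
    "Object storage": 9.75,
    "Object storage GET/PUT requests": 3.12,
    "Data transfer out": 87.60,
    "NAT gateway hours": 32.85,
}

total = sum(line_items.values())
print(f"Monthly bill: ${total:,.2f}")  # Monthly bill: $393.82
```

A tool like Beam aggregates exactly this kind of itemized spend across accounts and providers so the total stops being a surprise.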

beam.png

 

Flow

flow.png

Security is king these days.  It’s no longer enough to protect your infrastructure just at the edge, in a north-south fashion.  There is a trend where network security teams are now looking to protect data east-west as well: from VM to VM, or application to application.  The advent of micro-segmentation partially stems from increased data threats due to malware; cordoning off VMs from one another prevents malware from spreading inside a network VM to VM.  The problem until now was that software-defined networking solutions have been very complex to stand up and maintain, and tend to be extremely professional-services heavy.  Introducing Flow… It’s all built directly into the Nutanix software stack and just needs to be enabled. No more lengthy, time-consuming engagements, and no additional infrastructure VMs need to be deployed.  Just license, enable, and start micro-segmenting.  Nutanix’s story of simplicity continues even into software-defined networking. Amazing.  Historically I’ve seen many customers never quite go down this route, mainly due to complexity of management and, yes, of course costs, but when it’s made this easy to deploy, I can see adoption skyrocketing.  Official Nutanix blog post on Flow for reference.
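The east-west idea is easiest to see as a default-deny policy between VM categories. This is only a conceptual sketch of micro-segmentation in general (my own toy model, not Flow’s actual policy engine or API); category and port names are invented:

```python
# Default-deny east-west policy: traffic between VM categories is
# dropped unless an explicit rule allows it. (Conceptual illustration
# only -- not how Nutanix Flow is actually configured.)
ALLOW_RULES = [
    # (source category, destination category, destination port)
    ("web-tier", "app-tier", 8080),
    ("app-tier", "db-tier", 5432),
]

def is_allowed(src_category: str, dst_category: str, port: int) -> bool:
    """Return True only if an explicit rule permits this east-west flow."""
    return (src_category, dst_category, port) in ALLOW_RULES

# Web VMs may reach the app tier, but never the database directly, so
# malware on a compromised web VM cannot spread laterally to the DB tier.
print(is_allowed("web-tier", "app-tier", 8080))  # True
print(is_allowed("web-tier", "db-tier", 5432))   # False
```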

X-Ray: Open Sourced

Lastly, Nutanix has made a move to open source its X-Ray tool. A couple months ago I got some hands-on time with it for the first time and was really impressed at where it is today, and I look forward to using it more in the future.  X-Ray was designed by Nutanix as a tool to test HCI platforms against real-world scenarios. Initially intended for their own platform, they have been adding additional hypervisors, HCI hardware platforms, and additional testing scenarios since its release.  Although it was designed using standard testing tools, being open source now exposes its neutrality to the masses, and another great side effect is that it opens up the tool for further development of both test plans and other HCI platforms.  I for one think this was a great idea, taking what could be just a pre-sales engagement tool and opening it up for anyone to use to validate either their existing infrastructure or even other gear that is in a POC/testing phase.  Traditional legacy testing tools just don’t cut it anymore in the world of HCI, so I’m glad to see some movement to modernize testing scenarios for real-world application and am happy to see Nutanix leading that cause. More info on this can be found on Nutanix’s blog.

That’s all I got for now. Let’s hope this reinvigoration of my blog continues. Until next time!

You can find all of the linked blog posts (and others!) on Nutanix’s main blog site here: https://www.nutanix.com/blog/

New Year, New Gig… sorta

Last month marked my 6th year of working at Winslow Technology Group.  As the second SE on the WTG team, I’ve watched our little group of two grow into an engineering team of eight, with guys doing both pre/post-sales systems engineering, post-sales only, a CTO, and most recently a couple guys doing staff augmentation full-time at one of our key customers.  The growth has been tremendous for the company and I can see it continuing for the foreseeable future, but as we grow we’re starting to find that some structure is necessary.  It’s not only needed so that sales cycles run smoothly and our customers are happy, but also so that each individual on the team is happy. I’ve noticed that we have seemingly become very siloed in what we do as we grow and start to move away from the small company “do everything” mindset.  Growing pains!  They are good, necessary, and signs of a healthy (I hope!) company.

I’ve been reading a lot online lately about what it means to be a growing and maturing engineer, having a work/life balance, and how to not become stale in a role: challenging yourself and setting aside time to continue educating yourself, not only to keep sharp but also to keep doors open for career advancement.  It came at a perfect time, because as of January 4, I will be taking on new management responsibilities as the Director of Engineering for WTG. It’s an exciting opportunity that will no doubt stretch me, but I’m looking forward to it!  Somehow I am still continuing to keep all of my existing pre-sales responsibilities… I must be a horrible negotiator!

Cheers to the New Year and the new adventures that lay ahead!

Dell’s Entry-Level Compellent: SCv2000 Series

This week Dell announced a new entry-level model in the Compellent SC line of storage.  This is a much lower cost-of-entry option (sub $20K), making SC available to a much broader customer base.  The SCv2000 starts to further blur the lines between Dell’s other storage products in the EqualLogic and MD lines, a move that could likely cause a shift in product offerings at some point. Though as I mentioned in my last blog post, Dell also released a new EqualLogic array with the PS6610, so nothing is happening just yet!  The SCv2000 is considered a “cousin” to the SC4020 and SC8000 arrays. The three products share the same foundational code base and are all managed using the Enterprise Manager software, but the SCv2000 cannot pair with the other two for replication or Live Volume. Dell did however provide an option to do Data Migration into an SC4020 or SC8000 by way of a one-time per-volume data ingestion.

Let’s get to the details. Below is a list of most of the options and features available, plus an “unavailable” section, to help lay it all out there as to why it’s considered an “entry-level” array.

Without further ado…. the nitty gritty!

SCv2000 Series

  • 3 Base Models
    • SCv2000 is 2U with up to 12 3.5″ drives (7 minimum, scale 1 at a time)
    • SCv2020 is 2U with 24 2.5″ drives (7 minimum, scale 1 at a time)
    • SCv2080 is 5U with 84 2.5″ or 3.5″ drives (28 minimum, scale 14 at a time)
  • Disk Expansion Shelves
    • Total drive max count on any config is 168 drives
    • 2U models can expand with the SC100 (2U 12 x 3.5″) or SC120 (2U 24 x 2.5″)
    • 5U model can be expanded with the SC180 (5U 84 x 3.5″ or 2.5″)
  • Front End Connectivity
    • 12Gb SAS/1Gb iSCSI/10Gb iSCSI/8Gb FC/16Gb FC
    • 1Gb iSCSI is 4 ports per controller
    • 10Gb iSCSI is 2 ports per controller and is Base-T ONLY, so it can support 10Gb or 1Gb
    • 12Gb SAS is 4 x HD Mini-SAS ports per controller
    • 8Gb FC is 4 ports per controller
    • 16Gb FC is 2 ports per controller
  • Software Features Unavailable on Entry Level SCv2000
    • No Data Progression between tiers of storage (only RAID tiering)
    • No FS8600 unified file system, only Windows NAS
    • No replication to SC4020/SC8000 (only SCv2000 to SCv2000)
    • No compression
    • No Live Volume
    • No Perpetual Licensing (assumed this is to lower the investment)
  • Base Licensing
    • RAID Tiering (move R10 blocks to R5) – No Data Progression on SCv2000 arrays
    • Thin Provisioning
    • Data Migration
      • Can do a one-time replication of a volume to SC4020 or SC8000
  • Expansion Licensing
    • Flex port Option
      • Enables the (2) 10Gb Base-T MGMT/Repl ports to also do FE traffic
    • Replay/Snapshot Option (LDP)
      • Up to 2000 snapshots
    • Replay Manager
      • VMware
      • VSS for Exchange/SQL/Hyper-V
    • Replication Option (RDP)
      • Only SCv2000 to SCv2000
      • Asynchronous
      • Includes LDP
      • 125 source replications and 500 targets/destinations
  • Management and Support
    • Common Management with Enterprise Manager (All Dell SC products can be managed from one interface)
    • Integrated Dell Support Assist instead of Phone Home
    • SCv2000 can be customer installable
  • Hardware
    • 2U model has 12 or 24 disks in front, 1 or 2 controllers and 2 PSU in the back
    • 5U model has 2 drawers in front with 42 drives each, 1 or 2 controllers and 2 PSU in the back
    • Each model has 5 different controller options to match the Front End connectivity options
    • 7K/10K/15K and Write Intensive/Read Intensive/Mixed Use flash drives available
    • No 15K 3.5″ drives, only 2TB/4TB/6TB 7K 3.5″ or 1TB/2TB 7K 2.5″
    • Can mix and match drive types in the same array
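
The base-model minimums and scaling increments above can be sketched as a quick validity check. This is my own helper for illustration (the function and structure names are not Dell tooling), encoding the rules as listed: 2U models start at 7 drives and scale 1 at a time, the 5U model starts at 28 and scales 14 at a time, and no configuration exceeds 168 drives total.

```python
# Drive-count rules for the SCv2000 series, as described in the list
# above. Helper names are my own invention, not Dell software.
MODELS = {
    "SCv2000": {"min": 7,  "step": 1,  "base_max": 12},
    "SCv2020": {"min": 7,  "step": 1,  "base_max": 24},
    "SCv2080": {"min": 28, "step": 14, "base_max": 84},
}
TOTAL_MAX = 168  # max drives in any config, including expansion shelves

def valid_base_config(model: str, drives: int) -> bool:
    """Check a base-chassis drive count against minimum, cap, and step."""
    m = MODELS[model]
    return (m["min"] <= drives <= m["base_max"]
            and (drives - m["min"]) % m["step"] == 0)

print(valid_base_config("SCv2000", 7))   # True  (minimum config)
print(valid_base_config("SCv2080", 42))  # True  (28 + one 14-drive step)
print(valid_base_config("SCv2080", 30))  # False (must scale 14 at a time)
```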

Dell Expands Its Storage

Today Dell announced three new solutions in their storage portfolio.  The SCv2000 Series comes in as a new entry-level array product in the Compellent Storage Center portfolio, filling a gap that the previously released SC4020 could not fill. There has been much speculation that “the end is nigh” for the EqualLogic line of products, but Dell has shown that this is not quite true yet with the release of the PS6610 and their new version 8 PS Series software.  We also cannot forget the SDS side of the house, as Dell builds upon the “Blue Thunder” suite of solutions with their announcement of the new Microsoft Storage Spaces solution.

 The Dell Storage SCv2000 (Compellent) array product line comes in three models:

  • SCv2000 – Base 2U array with up to 12 3.5” drives
  • SCv2020 – Base 2U array with up to 24 2.5” drives
  • SCv2080 – Base 5U array with up to 84 2.5” or 3.5” drives

All of these models have the ability to further expand with additional drive shelves from the SC1xx line of disk enclosures, for a maximum count of 168 drives. Front-end connectivity comes in many flavors (12Gb SAS/1Gb iSCSI/10Gb iSCSI/8Gb FC/16Gb FC).

I go into further detail in this blog post.

Dell Storage PS6610 Storage Array

The Dell Storage PS6610 (EqualLogic) array may look extremely familiar to you. It looks to be based on the same hardware that the SC280 expansion shelf (for the SC8000) and the new SCv2080 (and SC180 disk enclosure) are utilizing.  The array will come in three different options, providing high-density choices as well as a hybrid flash-and-spinning-disk option just like you find in other PS Series arrays. The PS6610 has up to 3.5X the capacity and up to 7X the performance of the previous generation and can scale up to 504 TB per array.

The new PS Array Software 8.0 includes support for the PS6610 dense hardware as well as compression that can save you up to 50% space on snapshots and replicas.  Also included in 8.0 is support for VVols and other features like default daily snapshots, enhanced space borrowing, and Smart Tag support.

Dell Storage with Microsoft Storage Spaces

Microsoft Storage Spaces now has tested and validated solutions on Dell hardware that further expands Dell’s SDS “Blue Thunder” portfolio.  A combination of Dell servers, Dell Storage MD JBODs, and the Microsoft Storage Spaces software rounds out the new Scale-Out File Server offering and will be available in five configurations.

Dell World 2014

This week, six others from the WTG team (and a bunch of our customers) and I will be heading down to Dell World for Dell’s annual festivities.  We’re all really excited to see the full breadth of Dell’s portfolio showcased at the event and for the opportunity to mix and mingle with Dell execs, SMEs, and especially our industry peers and customers.

If you are in the Austin area, there is a #vBBQ event going on this evening at 7:30 at Salt Lick BBQ, organized by Scott Hanson (@CiscoServerGeek) and partially sponsored by EMC and SpiceWorks, providing some BBQ fundage and beer… whatever you had planned for the evening, cancel it. This is where you need to be! Details here.

This will be my first time at Dell World, but if it’s anything like the Dell Enterprise Forums or Dell User Forums that I’ve attended in the past, we’re in for an amazing event with engaging keynotes, breakout sessions oozing technical content, and lots of social time with other techies. Some social events that I know I will be attending outside of the opening event are: the Dell & Samsung tweetup at 12 on Wednesday and the 3rd Annual #BetterTogether Tweetup, also on Wednesday at 7 PM.

If you would like to meetup during Dell World, please feel free to tweet me @BVTechie. I will be around Tuesday through Friday evening and don’t forget to follow #DellWorld on Twitter as well!  If you are not able to attend, they will be live streaming many of the sessions.

Dell PowerEdge Servers & MD Storage Upgraded

Last week Dell introduced their 13th generation of PowerEdge servers. The 13G portfolio incorporates Intel’s latest release in processors, codenamed “Haswell,” aka the Intel Xeon E5-2600 v3.  One thing I love about Dell’s releases in the PowerEdge line is that they go across the portfolio. You’ll find these latest chips in the R630, R730, R730xd, M630 blades, and the T630 tower servers.

My favorite 13G server is the R730xd 1.8” 2U server, where there are 18 x 1.8” + 8 x 3.5” in the front and 2 x 2.5” in the back of the chassis.  This is a very unique configuration that, when combined with either SanDisk DAS Cache, VMware VSAN, or even possibly PernixData (not sure if this has been tested yet, but I will volunteer here), would make a great solution providing high IOPS on top of dense storage with the integrated caching software provided by each of these vendors.

As always, new processors are going to improve application performance and scalability, and they will eventually come to every server vendor, but what I also like to see is how a vendor implements improvements around the manageability of their product line. Dell doesn’t disappoint here. With the introduction of the 13G servers, they also introduced iDRAC QuickSync, an optional NFC-enabled bezel that allows an administrator to just walk up to the rack and pull hardware logs, check firmware versions, and even set iDRAC IP settings with ease. The last one I think will be my favorite when deploying new solutions for customers. It’s nice that you can use the LCD panel to set the iDRAC IP settings, but using my smartphone would be a LOT faster. There are also some great new features around automation. From deploying servers via profiles to automated firmware updates, Dell is giving time back to the server administrator so they can go do other things.

Dell has also released updates in the direct-attached storage MD line. The MD1400 and MD1420 have been upgraded to 12Gb SAS and support the new PowerEdge RAID Controller 9 (PERC9) 12Gb SAS cards. The new enclosures can be configured to provide up to 384 TB of capacity. They can also optionally be configured with self-encrypting drives (SEDs).

Overall I’m very impressed with these latest releases and I cannot wait to see what they’ve left “hiding under the sheets” for Dell World 2014. I will be attending for the first time this year so it should be great!

Dell User Forum: SC4020 & Nutanix Announcements

Today in Miami at the Dell User Forum, Dell announced a new product in their storage line, the Dell Storage SC4020 array. This box falls in line with their standard enterprise storage solutions based on the Dell Compellent array. The software is the same, but the new dual-controller array starts off as a 2U all-in-one chassis.  It is designed to fit in the $25K – $50K range.  I personally feel like this array has finally rounded out the Dell Storage portfolio. It will work well for companies who are looking for the expansive Compellent feature set and flexibility but at a much lower initial cost of investment. It has great potential for customers who are looking for an isolated solution to fit a need for projects like VDI, and it will also fit well with existing customers who are already utilizing Compellent in their core data center and want the same functionality in a remote office that requires a much smaller solution, where the SC8000 may normally out-price itself.

The SC4020’s 2U dual-controller configuration has 24 x 2.5” disk bays on the front and the two controllers in the back, making it look very similar to some of the products in the EqualLogic line of storage, but the main difference here is that it provides both iSCSI and Fibre Channel front-end connectivity and SAS ports for expandability. The 2U unit can easily be expanded with either SC200 or SC220 disk enclosures for a total of 120 drives (minus the 24 disks on the front) for both capacity and performance.  Just like the existing SC8000 controllers, it can also still be integrated with the FS8600 scalable NAS device to provide CIFS and NFS file-level protocols on top of the existing FC and iSCSI SC4020 solution, all utilizing the same disk pool on the backend.
Coming from being both a previous Compellent customer and now an engineer for a solution provider, I was at first skeptical of hearing about the introduction of another “model.”  For years the story has always been that there are no separate models, only generations of controllers and enclosure hardware. Software features were not dependent on specific models and there were no port/protocol limitations based on the series you had bought, so if you required a new protocol (1Gb to 10Gb iSCSI, for example) you just had to swap out the card in the controller, not rip and replace the array.  I was glad to hear that if a customer buys into the SC4020 configuration, there will be an upgrade path to the SC8000 controllers. It would be disruptive only because of the internal drives of the SC4020, but at least there is an upgrade path.  The other big differentiator is the Storage Center software. From the auto-tiering between both RAID types and tiers of flash and spinning disk to Replication, Replays (snapshots), and FastTrack, the features that we’ve grown to love about Compellent are still present in the SC4020. See the blog post from Dell’s Travis Vigil on the SC4020.

Also announced this week is a new partnership with Nutanix to provide a new series of their converged appliance built on Dell servers and powered by the Nutanix software, extending Dell’s SDS portfolio. This is another step for Dell into hyper-converged solutions for the web-scale market. It will be a 2U appliance that can be stood up and ready to provision workloads in 30-60 minutes. The XC-Series will be supported through Dell, and they are looking at integrating it with the Active System Manager software.  If you are unfamiliar with Nutanix, it’s an appliance that aggregates all of the compute and disk resources of each node in the cluster and presents a single datastore for use by VMware ESXi, Hyper-V, or KVM. For more information, please read the full press releases, as well as the blog post from Dell’s Alan Atkinson on SDS/Nutanix.

Finally, in the database and Big Data spheres, Dell announced new integrated systems for Oracle 12c databases, database acceleration appliances utilizing Dell servers and technology from Fusion-io (for Cassandra, MySQL, Sybase, Microsoft SQL, and MongoDB), and in-memory appliances for Cloudera Enterprise to much more quickly gather insights into Big Data (built on the Dell R720 & R920 server architectures). I’m sure there will be more to come in the next few days, and I’m really excited for the momentum that has kick-started the User Forum. I’m looking forward to interacting more with other customers, partners, and the Dell techies that are normally in the background driving what we are seeing on stage and in our data centers today. Nutanix Press Release | Dell Press Release

VMworld 2012 Highlights

My 3rd VMworld has finally come to a close, minus the lingering sickness that has come to be called “The Crud.” It was a great year to attend: not only did the conference attendance hit an all-time high of around 21,000, but my company had sent 2 employees for the first time (@J2Harrell and I), AND I hit the sought-after “Alumni” status this year. I was highly anticipating the exclusive status of VMworld Alumni and all its glorious gifts, including the… umm… tech gadget pocket?.. I will find a way to use it sometime soon!

If you missed the conference and would like to catch yourself up, here are links to video recordings of Monday’s and Tuesday’s main keynote sessions as well as the Top 10 Most Popular Sessions that you will not want to miss.  All recorded sessions will be available online within 2 weeks, but I’m not sure yet whether you’ll need a VMworld login to see them all.

vSphere 5.1 Announced (Full New Features Link):

BVTechie’s Highlights:

  • Distributed Switches Enhancements:
    • Configuration backup & restore
    • Configuration rollback and recovery (you can’t mess up!)
  • Enhanced vMotion:
    • Allows vMotion with non-shared storage (vMotion & Storage vMotion mixed!)
  • vDR Replaced with vDP (see its own section)
  • Zero-Downtime to upgrade VMware Tools
    • After upgrading to the latest version, new updates/versions will not require reboots.
  • Web Client
    • The web client is now the future to admin vSphere, fat client is going away!

vSphere Data Protection

VMware totally rewrote the included backup product. It now has deduplication instead of just compression for backups. It is based on the Avamar deduplication technology but is its own rewritten software. It comes as a deployable appliance that supports up to 2 TB of disk target and/or roughly (where they get this number I don’t know) 100 VMs.  I heard in a session that you can deploy up to 10 of these in your environment but haven’t found any hard facts confirming this yet. There is no dedupe across/between nodes. It has granular recovery, but for what besides whole VMs and individual files, I haven’t figured out yet.  It is yet to be seen what sort of impact this will have on other virtualization backup providers like Veeam, among others, but unless there is some sort of huge push from VMware, it may see fairly little adoption. We will have to wait and see!

vSphere Licensing Revisited (Highlights again!):

  • Official Licensing White Paper
  • Say goodbye to vRAM Entitlement Licensing
  • Say goodbye to cores-per-processor licensing (used to be 6 cores for Standard & Ent., 12 cores for Adv. & Ent. Plus)
  • Licensing still has vCPU Entitlement which is the limited number of vCPUs per VM based on licensed tier
  • Free ESXi no longer has RAM limits as well (and there was rejoicing among the test labs and do it yourself-ers)
  • VMware View bundles are now “vSphere Desktop”
  • vCloud Suites were announced with an Official Upgrade Path Promotion, good until 12/15/2012
  • More advanced features have been pushed down to lower tiers of licensing. Review Here

VDI/Cloud Computing

During Tuesday’s keynote session, they showed off a cool demo of Wanova Mirage, a centralized endpoint management and recovery solution “from the cloud” for end-user devices.  They had a user who kept switching between a Windows PC, VDI on a tablet, and an Apple PC. The Mirage product will let you switch between devices because it’s both managing the Microsoft Windows image and doing backups of it, so it can be either deployed to hardware or hosted in a VDI infrastructure.  I was impressed with its flexibility.

Oracle Discusses Licensing:

Oracle does some of its licensing based on the total CPUs that might run your Oracle servers. If you are running Oracle on VMware on a single host or a couple hosts in a cluster, you would have to license ALL of the CPUs in the cluster, which was ridiculous. This past week, the Director of Cloud Business Development EMEA for Oracle basically came out on video and said you can limit your licensing costs by utilizing features within VMware to limit the hosts your VMs can vMotion to within a cluster. Here’s a link to the Oracle Storage Guys’ blog with the video. Oh, and with all those extra licensing dollars they’ve been raking in, Oracle had 3-4 Oracle VM “taxis” stalking the grounds of the Moscone Center, but that read to me more like a “hey guys… we do VMs too… see, over here!”

VMware Community:

I must also put in a note on how appreciative I am of the VMware/virtualization community. I was able to meet up with some old friends throughout the conference as well as add some new faces to names/Twitter handles that I’ve been interacting with over the past couple years that I’ve been involved in the world of social media. The Hang Out space was invaluable and accommodating to all and not just for social activity. There were also times for education & interviews in the #vBrownBag sessions as well as The SiliconANGLE Network’s The Cube – VMworld 2012 broadcasts, both of which anchored the room.

Dell Tech Center User Group:

We went to an after-hours user group hosted by the Dell Tech Center guys. One of the sessions that hit home for us as a Dell Partner was when Jason Boche gave a demo of Dell Compellent’s recently released VASA support: CITV 1.0, “Compellent Integration Tools for VMware.”  It’s an appliance that is deployed by OVF and integrates with Storage Center’s Enterprise Manager. It runs on CentOS with 1 vCPU & 2 GB of RAM.  Basically, this exposes what tiers of storage are assigned to your Datastore volumes so that you can utilize VMware’s Storage Profiles. When you create a VM, you can say that it needs to be in a “Gold” Datastore, which may have properties like using all tiers of storage and being replicated. Or you can use “Bronze,” which you can associate with only 7K disks and no replication.  You choose a custom name and define which Storage Center features it requires.  So now your VM guy doesn’t necessarily need to be a storage guy, or your storage guy a VM guy. There will be more integration with VASA, but this is a great initial starting point.  If you are a Compellent customer, you can download the CITV OVF and its User Guide from the Knowledge Center website.
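The profile-matching idea can be sketched in a few lines: the storage side advertises capabilities per Datastore, and a VM’s profile is compliant only where every requirement matches. This is my own conceptual model of Storage Profiles, not CITV’s or VASA’s actual schema; the capability names are invented:

```python
# Toy model of VMware Storage Profiles as described above: datastores
# advertise capabilities (via VASA), and a VM profile must match them.
# Capability names here are illustrative, not real CITV/VASA fields.
DATASTORE_CAPS = {
    "Gold":   {"replicated": True,  "tiers": "all"},
    "Bronze": {"replicated": False, "tiers": "7K"},
}

def compliant(profile: dict, datastore: str) -> bool:
    """A VM is compliant only if every required capability matches."""
    caps = DATASTORE_CAPS[datastore]
    return all(caps.get(key) == value for key, value in profile.items())

vm_profile = {"replicated": True, "tiers": "all"}
print(compliant(vm_profile, "Gold"))    # True
print(compliant(vm_profile, "Bronze"))  # False
```

This is exactly why the VM guy no longer needs to be a storage guy: the placement decision reduces to picking a profile name.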

All work and no play?.. nahh!

During the trip we were able to make it out to many events, including VMunderground, CommVault’s party, the Dell Wyse party, and of course the VMworld Party featuring Jon Bon Jovi. Although I’m not a big fan, I found this blurry cam picture I took extremely funny.  How does one know you’re at a tech conference party? Check out all those smartphones!

Finally, I cannot finish my VMworld post without mentioning food. Even though those VMworld 2012 blankets and daily boxed lunches were to die for, I had to venture my way out to visit the @BaconBaconSF food truck. It was only a 10 minute walk away! I got myself a Mexican Coca Cola, a bacon bouquet,  and a bacon fried chicken sandwich. Ohhh yeahhh… my arteries are still recovering. Maybe that’s partly why I have “The Crud”.  Look at that, we’ve come full circle. I should probably end the post.

Hopefully see you next year!

Dell Compellent Live Volume

 

Recently I was tasked to take 15 minutes and discuss a not-so-well-known feature of Dell’s Compellent storage array at my company’s 8th annual Dell Storage Users’ Group.  If you have grown to love the Compellent product, you are already fully aware of its ability to very granularly and automatically tier data at the block level (using 2 MB pages by default, but configurable for larger and smaller pages) between disk speeds using their Data Progression technology within a Storage Center.  Live Volume is the Storage Center’s software-based feature that can non-disruptively migrate data between Storage Centers, allowing your servers to stay online during the whole process.  This is not a disaster recovery solution like VMware’s SRM, where one site is a smoking hole in the ground. It’s a solution that provides proactive movement of volumes between systems to avoid a disaster. There are also a few other use cases which I will go into later.

The technology in some scenarios uses OS-level MPIO, and in others it will allow one Storage Center to act as a proxy for the data and transmit reads/writes between the two Storage Centers via the configured replication link.  Replication can be set up to utilize either iSCSI or Fibre Channel.  Live Volume basically sits in between both Storage Centers and maintains the relationship between them. One array is configured to be the primary and the other the secondary. If an IO request hits the secondary, it passes that along to the primary, where the read/write occurs, and if it’s write data, that data is then replicated back to the secondary array.

There are two ways to configure the setup: single-site MPIO and multi-site MPIO. In a single-site setup, the server is configured to attach to both Storage Centers so it can see the front end IO ports of both arrays and MPIO is typically configured as Fixed in VMware or Round Robin with Subset for Windows. This setup is typically used in a campus environment where FC switches are in a fabric or you have a low latency iSCSI connection.

In a multi-site setup, the server in Site A sees only the Storage Center in Site A and the server in Site B sees only the Storage Center in Site B and MPIO is typically configured as Round Robin. This setup is where the IO gets passed from the local Storage Center to the primary Storage Center over the replication link since the server has no knowledge about the SAN on the other side.
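The primary/secondary proxy behavior described above can be modeled in a few lines. This is my own simplified toy model for illustration, not Dell’s implementation; the class and method names are invented:

```python
# Toy model of Live Volume's proxy behavior: IO arriving at the
# secondary is forwarded to the primary, writes replicate back to the
# secondary, and roles can swap non-disruptively.
class StorageCenter:
    def __init__(self, name: str):
        self.name = name
        self.blocks = {}  # lba -> data

class LiveVolume:
    def __init__(self, primary: StorageCenter, secondary: StorageCenter):
        self.primary, self.secondary = primary, secondary

    def write(self, arriving_at: StorageCenter, lba: int, data: bytes) -> bool:
        # Writes always commit on the primary first...
        self.primary.blocks[lba] = data
        # ...then replicate to the secondary over the replication link.
        self.secondary.blocks[lba] = data
        # Report whether the IO had to be proxied across sites.
        return arriving_at is self.secondary

    def swap_roles(self):
        # The "make Site B primary" step of a non-disruptive migration.
        self.primary, self.secondary = self.secondary, self.primary

site_a, site_b = StorageCenter("A"), StorageCenter("B")
lv = LiveVolume(primary=site_a, secondary=site_b)
print(lv.write(site_b, 0, b"data"))  # True: proxied from B to primary A
lv.swap_roles()
print(lv.write(site_b, 1, b"more"))  # False: B is now the primary
```

Swapping roles after vMotioning the workloads is what turns the proxy path back into a local path, which is the whole migration trick described in the use cases below.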

Use Cases

There are many use cases where an admin can utilize Live Volume in their environment. SAN-to-SAN replication is good, but being able to non-disruptively migrate server volumes from one SAN to another on the fly is not only much cooler, it allows you to maintain your SLAs and spend more time with friends and family.

Zero Downtime Maintenance:

I’ve spent many evenings and weekends helping our customers do storage firmware and hardware upgrades off hours and during maintenance windows not because the hardware they’ve put in place is not redundant, but because the risk is lower if they play it extremely safe and bring servers down for the work. Live Volume eliminates those off-hour work times because you can just move your volumes from one SAN to the other while keeping your servers and applications online. You can then do your maintenance work and when you are done just move it back!

Migrating Volumes/Servers:

For this one I’ll use an example of how I recently assisted a customer of mine with a data center move from one building to another a few blocks away: online, no downtime, and happy campers.  The customer had a high-speed 1 Gbps link between their existing building and the building they were moving the company into over a period of a few months. They initially set this up so they could migrate their users’ offices a department at a time while still maintaining connectivity to all resources, including VoIP phones, internet, etc.  Since they had stretched their network between the two buildings, implementing Live Volume was extremely easy. They put a Storage Center in each site and connected them to each other via iSCSI for replication and Live Volume traffic.  They are very heavily VMware virtualized, so they were able to split their cluster between the two sites. They set up Live Volume for each VMware Datastore and then vMotion’d servers from a host located in Site A to a host in Site B. While the servers were running in Site B, the Storage Center in Site B would proxy the IO and pass it along to the Storage Center in Site A, which was still the primary. Then they just had to go into the Enterprise Manager user interface and make Site B’s Storage Center the primary for that Datastore, and the migration was complete… zero downtime, non-disruptive, complete data center migration!

Disaster Avoidance:

Let’s say you live on the US east coast and hurricane ArrayDrowner is heading straight for your primary data center. With Live Volume you can just move your volumes out of harm’s way to your hot DR site and breathe a sigh of relief.

Load Balancing Between Storage Centers:

As you look to scale out with Compellent, there is no limit on the number of active Live Volumes, so you can essentially have an unlimited number of array pairs sharing volumes between them.  Server IO activity can be very dynamic, so having the ability to even out IO by balancing extremely chatty volumes across Storage Centers can be very useful.

I took some time to play with this feature in my lab environment and wanted to share a screenshot of what the IO looks like within Storage Center. Our lab consists of VMware ESXi 5.x hosts, and for this particular Datastore volume I turned on IOMeter to generate a decent amount of write IO to help create the screenshots.  On the left is the volume in Site A and on the right Site B. You can see that at around 4:05 I switched the primary site to Site B and then back to Site A.  While running on Site A, there was no IO happening at Site B (except the replication of the data, but that isn’t shown at the volume level here). When I switched the primary to Site B, the IO load initially hit Site A but was then redirected to Site B. You’ll notice that the IO/sec and KB/sec drop a bit when Site B is primary. This is because the two sites’ disk configurations are not the same: Site A in my case has almost 2X the IOPS available compared to Site B, which is why you see lower IO performance while Site B is the primary.

Live Volume Screenshot

Live Volume is definitely an underused feature of the Dell Compellent storage array. It is built into the Storage Center OS and is not an add-on appliance, just licensing… and since you will ask, it’s licensed just like all of the other licensable features of Compellent.  I hope this post has been educational! Here are a couple of valuable links I’ve dug up on Live Volume as well. Enjoy!

Dell’s Compellent Live Volume Site

Dell TechCenter’s Live Volume Demo

 

New Adventures!

There are many things I come across scrounging the internet, and other things I’ve stumbled into while breaking things playing in my lab. This is my attempt at collecting such things for your use or amusement. Let’s see how it goes! (Here’s to hoping for at least 1 more post, haha!)

Brian Vienneau – @bvtechie