By Jon Toigo
A couple of weeks ago, Hewlett Packard announced a product that took me back to my mainframe days: a $50,000 heat exchanger for use with servers and storage equipment. Like the water chiller units we once used with big iron CPUs, the exchanger attaches to an equipment rack where it sucks out some of the BTUs generated by all that gear you decided to install at the end of 2005 when, if vendor reports are accurate, there was a last minute uptick in spending by companies pursuing strategies generically labeled as "consolidation."
From where I'm sitting, this raises yet another question about the value of consolidation strategies -- especially when they are predicated less on a meaningful and thorough analysis of return on investment than on a "go-with-the-flow" mentality. Consolidation is by no means an approach that should be embraced by every company. In fact, consolidation might be the road to ruination for some firms.
I know what you are thinking. This flies in the face of everything you have been hearing from your vendors. They tell you that consolidation is a good thing in the "universal truth" sense of the expression. Consolidation is something everyone should be doing. It's happening in servers (the blade server phenomenon), so why not storage, too?
This mantra started (and continues today) with the efforts of three-letter acronym vendors to sell bigger, more capacious storage arrays. Big arrays, we were told, would enable companies to consolidate (read: centralize) more of their data on fewer boxes, delivering economies of scale and other improvements in storage management. The choke points created by such a strategy -- manifesting themselves as slower application performance and angry users sick of the World Wide Wait experience when requesting their latest PowerPoint opus -- have led many companies to use less than 40% of the capacity of their Really Big Iron arrays in an effort to buy back performance.
The consolidation mantra extended into the "SAN" (FC fabrics, actually) craze of the 1990s. Taking all of your arrays out of departments and placing them into a centralized "SAN" was supposed to simplify capacity management through storage pooling, while exposing equipment to better management discipline and ensuring that backups got done -- all with fewer people. If recent surveys are to be believed, SANs are now the number three cause of downtime in shops that have them, just behind natural disasters and WAN outages. The dearth of SAN management tools and the ever-present interoperability difficulties between SAN vendor wares have basically tabled discussions of the SAN value proposition.
Next was the "tiered storage" push, which started in late 2004. The industry told you that you should take a metaphor from the mainframe world and apply it to distributed storage infrastructure, establishing "classes" of storage -- based mainly on price point differentiators -- so you could impose something like mainframe-style hierarchical storage management. In many cases, all of the tiers were provided in the same cabinet; a shelf of dual ported/small capacity FC drives as tier one storage (for data capture), and additional ranks of single ported/big capacity SATA drives as tier two (retention) and perhaps tier three (archive/backup). The FC drives cost (SAN connections factored in) about $189 per gigabyte (GB), while the SATA drives ran about (again with SAN connections factored in) $130 per GB. From the vendor's perspective, the one-stop-shop product lock-in was, to quote the MasterCard commercial, priceless.
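Using the per-gigabyte figures cited above, the tiering economics can be sketched with some quick arithmetic. The deployment size (10,000 GB) and the 20/80 split between tiers below are hypothetical assumptions chosen purely for illustration, not figures from any vendor:

```python
# Rough tiered-storage cost sketch using the per-GB prices cited above
# ($189/GB for FC tier one, $130/GB for SATA tiers, SAN connections included).
# The 10,000 GB total and the 20/80 tier split are hypothetical assumptions.

FC_COST_PER_GB = 189    # tier one: dual-ported, small-capacity FC drives
SATA_COST_PER_GB = 130  # tiers two/three: single-ported, big-capacity SATA drives

total_gb = 10_000       # hypothetical deployment size
fc_share = 0.20         # hypothetical: 20% of data on tier one

fc_gb = total_gb * fc_share
sata_gb = total_gb - fc_gb

tiered_cost = fc_gb * FC_COST_PER_GB + sata_gb * SATA_COST_PER_GB
all_fc_cost = total_gb * FC_COST_PER_GB

print(f"Tiered:  ${tiered_cost:,.0f}")   # $1,418,000
print(f"All-FC:  ${all_fc_cost:,.0f}")   # $1,890,000
print(f"Savings: ${all_fc_cost - tiered_cost:,.0f}")  # $472,000
```

Even under these made-up assumptions, the spread between tiers is roughly a quarter of the all-FC price tag -- which is exactly the savings pitch that made the one-stop-shop lock-in so attractive to sell.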
What many companies are learning about consolidation is that it exacts a hefty price in ways they did not originally consider. For one, there are the environmental enhancements to their equipment rooms -- like heat exchangers at $50K per rack and new power and networking requirements -- that suddenly present themselves. For another, there are the $18K ($36K per pair) caching appliances that must be pushed out to the branch office workgroups whose storage was "re-centralized" -- combined with some sort of non-standard wide area file system (WAFS) product -- in order to provide minimally acceptable application and file access performance to end users. Still another cost is the additional management wares that need to be implemented to oversee the centralized (usually SAN-configured) storage "tiers." These technologies are still very flaky -- and the "data management" wares necessary to manage data across such tiers effectively are even flakier.
In many cases, consumers have been left to wonder where all of the economic gains expected from storage consolidation have gone, and whether they were ever there in the first place. Many are concluding that consolidation might not have been the product that their vendors claimed it would be.
Perhaps it would have been better if they had considered the 80/20 rule of LANs: 80% of data accesses are made by the users who create the data. The 80/20 rule IS a universal truth and provides a compelling reason to keep the data local to the users -- to consolidate not the disk drives that host it, but the management of those drives.
Summer is right around the corner. Many in the weather trade are suggesting that it will be one of the hottest on record. The power companies, particularly in the Northeast corridor between DC, NYC and Boston, are already wringing their hands in anticipation of dreaded blackouts in the region -- driven in part by increased electrical demands of power-hungry and increasingly centralized computing equipment and their air conditioning requirements.
If the 2006 Blackout happens, what benefit will your consolidated infrastructure offer? Start your disaster recovery planning today.
About the author: Jon William Toigo is a managing partner for Toigo
Productions. Jon has over 20 years of experience in IT and storage.
Copyright 2000 - 2006, TechTarget