Kashya extends its data-recovery parachute
New version of data protection platform allows consistent cross-application recovery
By Mario Apicella
March 23, 2006
You may remember data-recovery solutions vendor Kashya from some of my previous columns. Its KBX5000 Data Protection Platform combines CDP (continuous data protection) and snapshots with a smart set of host agents that intercept critical transaction data on the fly.
Kashya also offers an interesting alternative to that scenario: You can deploy the KBX5000 without installing agents on your servers, which minimizes possible downtime and application incompatibilities.
In that configuration, Kashya deploys additional software on its appliance instead of agents on your servers. Those applications, communicating with intelligent switches from Brocade and Cisco, capture critical data transfers between servers and storage targets; Kashya publishes a more detailed explanation of how this works with a Cisco fabric.
Last week Kashya announced a new version of its KBX5000. "What the new version, R2.3, brings forth is true integration between our disaster recovery and CDP application," explains Rick Walsworth, vice president of marketing at Kashya. "That means that at a remote site, I have now CDP recovery capability rather than just incremental snapshots."
In addition to offering more granular recovery at a remote site (a feature that previously was available only for local data), the new version integrates with Microsoft VSS (Volume Shadow Copy Service) and can better track application-driven changes to SQL Server and Exchange databases.
A similar application awareness applies to Oracle databases and generates what Kashya calls "application bookmarks," essentially tags that identify consistent recovery points. Users don't have to change their applications because those bookmarks are created automatically.
Obviously, having ready-to-use, easy-to-identify recovery points simplifies rebuilding a corrupted database.
"In case of corruption, a DBA can go back not to just any point in time before the corruption occurred, but can roll back to the exact point in time where there is a consistent image of the database," Walsworth says.
"We support also other databases," Walsworth points out. "But we have this tight integration only with Oracle, SQL Server, and Exchange."
What if disaster strikes just after a major business deadline -- say, the end of the month or the end of the quarter? How can you make sure that multiple databases are not only individually consistent but also synched at the same cutoff date?
Version R2.3 has that covered: With a single command, Walsworth says, users can tag multiple databases, essentially creating a virtual fence that identifies a specific date or business event. That fence helps during recovery but is useful in other situations, too, such as creating a separate environment to respond to auditors or legal queries.
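The "virtual fence" boils down to dropping the same bookmark into every database's journal at the same instant. The sketch below shows the concept with the simplest possible data structures; the function name and journal layout are my own illustration, not Kashya's API.

```python
import time
from typing import Dict, List

def fence(journals: List[Dict], label: str) -> float:
    """Tag multiple journals with one consistency label at a single instant.

    During recovery, each database is rolled back to the entry bearing
    this shared label, so all of them line up at the same business cutoff.
    """
    ts = time.time()  # one timestamp shared across all journals
    for journal in journals:
        journal.setdefault("bookmarks", []).append((ts, label))
    return ts
```

A quarter-end close, for example, would be one call: `fence([sql_journal, exchange_journal, oracle_journal], "Q1-close")`.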
The KBX5000 R2.3 has many other interesting new features, but the one I want to point out is the ability to do nondestructive tests of disaster-recovery fail-over procedures while maintaining redundancy, which obviously saves time and encourages more frequent testing. "One of our customers has cut testing time from six hours to 30 minutes, and they are still protected during fail-over," Walsworth adds.
The new version of the KBX5000 should be available at the end of April, at a starting price of about $120,000 for a basic setup with CDP and replication. Adding more features or choosing a clustered configuration will set you back more, but it's still an investment worth considering if your home-baked disaster-recovery solution makes your auditors roll their eyes in despair.
Join me on The Storage Network blog with questions or comments.