
Disaster Recovery versus The Cloud

The strategic roadmap for enterprise architecture is to take advantage of the elasticity and flexibility of cloud computing. Removing the need for an active-passive architecture within a company’s infrastructure saves millions of pounds that, when analysed, achieve little return on investment. But when the worst happens, is it wrong not to have a disaster recovery strategy?

We need a disaster recovery strategy

Five years ago, a company now in the FT250 began planning for its 2007 IPO. To be listed on the Stock Exchange, a commitment was made to the City to provide an infrastructure that could handle four times the average load across all tiers, along with two new data centres to provide resilience in the event of failure. The project was delivered for five million pounds and its legacy still exists. In the three years since implementation, a mirrored architecture has sat dormant in the fail-over data centre. To date this hardware has been used only a handful of times, while still incurring licensing costs for the IT function.

We really can’t justify these figures for disaster recovery.

Once the numbers were assessed and the cost of supporting an inactive disaster recovery site was established, companies began looking into an active-active configuration. Conceptually this appeared to tick all the boxes for making the most of the available hardware: the load was shared across both data centres. But, just like the failings of socialism, a leader/coordinator/master must evolve. The need for a master removes resilience and exposes your architecture to risk if the master node fails, as the sketch below shows.
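To make that single point of failure concrete, here is a minimal sketch (with hypothetical node names) of an active-active cluster whose writes are coordinated through one master: reads survive the master’s failure, but writes stop cluster-wide.

```python
class Node:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

class ActiveActiveCluster:
    """Both data centres serve reads, but a single master coordinates writes."""

    def __init__(self, master, replicas):
        self.master = master      # single coordinator: the weak point
        self.replicas = replicas  # read load shared across data centres

    def read(self):
        # Reads survive master failure: any healthy replica will do.
        for node in self.replicas:
            if node.healthy:
                return f"read served by {node.name}"
        raise RuntimeError("no healthy replica")

    def write(self, data):
        # Writes must go through the master to stay consistent.
        if not self.master.healthy:
            raise RuntimeError("master down: writes blocked cluster-wide")
        return f"write '{data}' coordinated by {self.master.name}"

cluster = ActiveActiveCluster(
    master=Node("dc1-master"),
    replicas=[Node("dc1-replica"), Node("dc2-replica")],
)
print(cluster.write("order-123"))  # fine while the master is up
cluster.master.healthy = False
print(cluster.read())              # reads still work from either data centre
cluster.write("order-124")         # raises: the master is the single point of failure
```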

A way forward in justifying costs for disaster recovery

During my time in academia, there were a number of research projects examining the concept of Grid computing. The goal was to reduce hardware costs by using the power of many small servers to replace the need for one large one, providing a scalable and cost-effective approach to IT infrastructure. Since 2005, Grid computing has evolved into a commercial offering from providers such as Microsoft and Amazon. As marketed, Cloud computing gives customers the ability to create and destroy environments whenever the need arises, removing the need for expensive hardware to sit redundant, continuously depreciating in value and placing an additional burden on the budget. At present, an environment can be created and serving live traffic within twenty-five minutes. Why should an organisation pay for an architecture that supports four times its average load but is utilised for only a few hours each week, and is replicated in a separate data centre that is used only when failure occurs?
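As an illustration of the create-and-destroy model, here is a minimal sketch using Amazon’s boto3 SDK; the region, AMI ID and instance type are placeholder assumptions rather than recommendations.

```python
# A minimal sketch of on-demand capacity: create servers for the peak,
# destroy them afterwards. The region, AMI ID and instance type below
# are placeholder assumptions; substitute values from your own account.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create capacity only when the peak load is expected...
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",  # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=4,              # e.g. four times average capacity
)
instance_ids = [i["InstanceId"] for i in response["Instances"]]

# ...and destroy it once the peak has passed, so nothing sits
# dormant and depreciating in a fail-over data centre.
ec2.terminate_instances(InstanceIds=instance_ids)
```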

Letting others deal with your disasters

Risk analysis, assessment and control are crucial to any business. By controlling their own assets and enforcing their own policies on them, companies remain in control of the risks entailed in IT infrastructure. With the offerings provided by Cloud computing providers, companies lose control of these risks and must hope that the service level agreements in place are met. Companies feel insecure with this loss of control, and after the much-publicised Cloud outages of 2011 and 2012 they feel at risk from a loss of service that is out of their hands. Microsoft promises 99.95% uptime for its hosted Cloud environments; however, companies aim to achieve at least 99.99%. This loss of control exposes a company to additional risks that it cannot manage.
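The gap between those two figures is wider than it looks; a quick calculation of the downtime each allows per year makes the point.

```python
# Annual downtime permitted by each uptime figure.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for uptime in (0.9995, 0.9999):
    downtime_hours = HOURS_PER_YEAR * (1 - uptime)
    print(f"{uptime:.2%} uptime allows {downtime_hours:.2f} hours "
          f"({downtime_hours * 60:.0f} minutes) of downtime per year")

# 99.95% uptime allows 4.38 hours (263 minutes) of downtime per year
# 99.99% uptime allows 0.88 hours (53 minutes) of downtime per year
```

In other words, the 99.95% figure permits roughly five times the downtime that a 99.99% target allows.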
