One of the biggest hurdles to adopting cloud computing is businesses' fear of losing data in the cloud. This is shown by current surveys and studies, including the Cloud Monitor 2012 from BITKOM and KPMG.
That this fear is not entirely unfounded is demonstrated by the numerous cases of data loss and data breaches in recent months. Data loss in the cloud can be prevented if the backup and data recovery strategy is right, but appropriate precautions for data recovery are often lacking.
A survey of 367 companies by Kroll Ontrack (PDF) makes it clear that one in three companies fails to review its own data recovery guidelines regularly. Yet 62 percent of the companies surveyed already use cloud computing.
The study CA Insights – Data Protection and the Cloud came to a similar conclusion: only 35 percent of German companies have regulated data backup and data recovery. There is a clear need for action in cloud backup, recovery and data rescue strategies.
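One way to act on that finding is to make restore drills routine rather than occasional. The following sketch, not taken from the studies cited above, shows one possible building block: verifying that a restored copy is byte-identical to the original via checksums (file paths and names here are illustrative, Python standard library only).

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large backup files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original: Path, restored: Path) -> bool:
    """A recovery drill only counts if the restored data matches the source."""
    if not (original.exists() and restored.exists()):
        return False
    return sha256(original) == sha256(restored)
```

Running such a check after every scheduled test restore turns the guideline "review your recovery procedures regularly" into something measurable.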
Clouds are not always the salvation
If local data is lost, recovering it from the cloud can be useful when online backups are in place. Various cloud services also explicitly offer data recovery capabilities in the cloud.
But when data is lost in the cloud itself, many users assume that cloud providers keep extensive backups and can restore the data. According to the study CA Insights – Data Protection and the Cloud, 83 percent of private cloud users trust that their data and applications are sufficiently protected in the event of a cloud failure, and 94 percent of public cloud users rely on the service-level agreements (SLAs) for data protection offered by the vendor.
Unfortunately, this is in part a dangerous fallacy. On the one hand, past incidents show that cloud data has indeed been irretrievably lost in outages. On the other hand, the user company remains responsible for the availability of the data, since cloud computing generally constitutes commissioned data processing under data protection law. There are, however, various approaches to improving the rescue of cloud data.
What are considered best practices?
The following is a list of best practices for providing high availability (HA) in the cloud. It is not exhaustive, and it also applies, to a lesser degree, to traditional data centre architectures.
- Distribute load balancers across availability domains (ADs) and beware of single points of failure (SPOF) in the architecture: two is one and one is none
- If the cloud provider does not automatically provide redundancy across ADs and at least three copies of the same data, it may be wise to re-evaluate the provider decision, or to consider a service that does
- Easy to get in, easy to get out: make sure that if it becomes essential to move or redirect services, this can be done with minimum effort
- Implement extra monitoring and metrics systems where possible, with good integration: ideally off-the-shelf, through third parties that can provide timely alerts and rich diagnostic information. Platforms such as New Relic, or incident tools such as PagerDuty, can be extremely valuable
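The first point, "two is one and one is none", can be sketched as a client-side failover probe that tries load balancer endpoints in different availability domains until one answers. The endpoint URLs and the probe below are illustrative assumptions, not part of any particular provider's API.

```python
import urllib.request

# Hypothetical health endpoints for load balancers placed in two
# different availability domains -- "two is one and one is none".
ENDPOINTS = [
    "https://lb-ad1.example.com/health",
    "https://lb-ad2.example.com/health",
]

def http_probe(url, timeout=2):
    """Consider an endpoint healthy if it answers HTTP 200 in time."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status == 200

def first_healthy(endpoints, probe=http_probe):
    """Return the first endpoint whose probe succeeds, else None."""
    for url in endpoints:
        try:
            if probe(url):
                return url
        except OSError:
            continue  # unreachable AD: fail over to the next candidate
    return None
```

In production this logic usually lives in DNS failover or in the load balancing layer itself, but the principle is the same: never let a single endpoint be the only path to the service.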