No one wants a disaster, and disasters are difficult to prevent. But you can make a disaster, well, less disastrous with careful planning and a good backup system.
For IT, a disaster response plan is essentially an insurance policy. Historically, IT organizations faced a conundrum over how much to spend on disaster response: too little, and you didn’t recover (and what you did spend may well have been wasted anyway); too much, and you might be seen as overspending and “wasting” resources, according to Richard Scannell, Executive Vice President at RiverMeadow, which develops SaaS. “Cloud solutions help with this issue in a variety of ways – from creating sufficient physical distance from the primary data source that there’s a low chance of the disaster affecting both sites, to changing the economics associated with the ability to spin up an alternate environment,” he said. “There are two distinct issues in recovery – one is sourcing appropriate computing resources, the second is recovering the data – both of which can be addressed with cloud-based solutions.”
Using a cloud-based backup strategy can significantly minimize downtime and give organizations uninterrupted access to their data and applications. “But simply backing up to the cloud isn’t enough; you also need to make sure you have the necessary resources and expertise to make sure the backups are properly configured and maintained so that there are no surprises when it’s time to restore your data,” Jennifer Walzer, CEO of BUMI, a disaster recovery company, was quick to point out.
Ideally, your data should be backed up daily and perhaps even more frequently for critical files that change multiple times per day, Walzer said. “Whatever schedule you choose, it’s always best to automate your backups as opposed to performing them manually and running the risk of missing a backup.”
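An automated daily backup can be as simple as a scheduled script that writes a date-stamped archive. The sketch below is a minimal illustration of that idea; the function name and paths are hypothetical, and in practice you would trigger it from a scheduler such as cron rather than by hand.

```python
import shutil
from datetime import date
from pathlib import Path

def run_backup(source_dir: str, dest_dir: str) -> Path:
    """Create a date-stamped zip archive of source_dir inside dest_dir."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    # One archive per day; an automated schedule means no missed backups
    base_name = dest / f"backup-{date.today().isoformat()}"
    # make_archive appends the .zip extension and returns the full path
    return Path(shutil.make_archive(str(base_name), "zip", source_dir))
```

A cron entry such as `0 2 * * *` would run a script like this at 2 a.m. every day; critical directories that change many times per day could be archived on a tighter schedule.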
You should also consider innovations like vault integrity checking and vault self-healing, which help in emergency backup situations. The vault is where the backup data is stored. Vault checking and self-healing verify the integrity of the backup data and make sure nothing is corrupt.
“If you’ve ever tried to perform a data restore, only to find out the backup files were corrupted then you know the importance of having good integrity checks in place for your backups,” said Walzer. “File corruptions are a common occurrence with many backup solutions and can be caused by a number of factors, including hardware failures, software application problems, and file system errors. A good backup solution will incorporate several layers of protection to help guard against such corruptions.”
“When backing up to and restoring from the cloud, Internet connectivity isn’t always reliable, and there are always network hiccups that make it very easy to experience packet loss when pushing lots of data between locations. Self-healing technology helps validate that the data actually came across correctly and, if not, will correct itself so that recovery of that data won’t be a problem later on,” Walzer said.
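The underlying mechanics of integrity checking and self-healing can be sketched with checksums: record a digest of each file when it enters the vault, recompute it later, and re-transfer anything that no longer matches. This is a simplified illustration of the general technique, not BUMI’s implementation; the function names are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_and_heal(vault_file: Path, expected_digest: str, source_file: Path) -> bool:
    """Return True if the vault copy is intact, healing it first if needed."""
    if sha256_of(vault_file) == expected_digest:
        return True  # integrity check passed, nothing to do
    # Corruption detected: "heal" by re-transferring from the source
    vault_file.write_bytes(source_file.read_bytes())
    return sha256_of(vault_file) == expected_digest
```

Real products do this continuously and at a finer granularity (per block or per chunk), so a few lost packets never mean a failed restore months later.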
Walzer has practiced what she preaches. Based in New York City, her office building was submerged in 35 feet of sea water after Hurricane Sandy, which destroyed a lot of the building’s infrastructure. However, Walzer had backup servers in other locations, like Canada, and both her company and her clients were able to remain operational. For that reason, she said, you can never have too many backups in place. At a minimum, you should always use a local backup in conjunction with an offsite backup. Better still, backups should be at least 50 miles from the primary servers, ideally outside any shared natural disaster zone. In this way, cloud backup will help protect your organization in the event of physical damage or an outage, while local backups allow for quicker, LAN-speed restores.
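The local-plus-offsite approach amounts to fanning the same archive out to multiple destinations. Here is a minimal sketch of that fan-out; the paths are hypothetical, and in a real deployment the offsite leg would be a cloud storage API call or a transfer over the WAN rather than a filesystem copy.

```python
import shutil
from pathlib import Path

def replicate(archive: str, destinations: list[str]) -> list[Path]:
    """Copy one backup archive to every destination (local and offsite)."""
    copies = []
    for dest in destinations:
        d = Path(dest)
        d.mkdir(parents=True, exist_ok=True)
        # copy2 preserves timestamps, useful when comparing backup sets
        copies.append(Path(shutil.copy2(archive, d)))
    return copies
```

With destinations like a local NAS and a cloud-synced mount, the local copy gives you LAN-speed restores while the remote copy survives anything that takes out the building.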
Sue Poremba is a freelance writer focusing primarily on security and technology issues and occasionally blogs for cloud service provider Rackspace Hosting.