The future of backup: from periodic to continuous

Maintain an always-on experience with continuous backup

Backup has been an essential part of IT infrastructure since its inception, and that is unlikely to ever change. But with the IT landscape changing and threats increasing, can we still rely on the backup technology we use today?

Backup requirements are changing, and it’s important to understand whether today’s backup technology can meet businesses’ evolving demands for modernization and digital transformation. Continuous, journal-based protection is the future of backup, and it’s time to move from recovery to availability and from restore to resume.

Requirements are changing

24/7

As organizations increasingly focus on digital transformation, IT has become a critical strategic partner to the business. The importance of keeping systems up 24/7 has never been higher, but availability means much more than just having systems “up.” Users accessing IT systems expect the same experience every time, requiring IT to deliver high performance and stability no matter the time of day.

Ransomware

The threat of ransomware keeps growing, and the impact of an attack is enormous. It is not a question of “if” but “when” you will face this challenge. Choosing between paying the ransom and suffering data loss is difficult, as both carry cost and risk. Recovery from backup can mean up to 24 hours of data loss, and it can take days before all applications and systems are up and running again.

Organizations can’t afford to sustain any data loss. IDC has determined that the average cost of downtime is $250,000 per hour across all industries and organizational sizes. To avoid the impact of data, productivity, and revenue loss, organizations need more granularity in recovery while maintaining the same level of performance.

Besides data and productivity loss, an organization’s reputation is at stake as well. Customers can easily share their frustration on social media, where it is just as easily seen by other customers and prospects.

Shortcomings of Traditional Backup Technology

When we look at the backup technology currently protecting our data—one of a company’s most valuable assets—not much has changed over the last 35 years. The basic process remains the same: during off-peak hours, take a copy of the data that has changed in the production environment and store it in another, secondary location.

Performance Impact

Most backups run during off-peak hours because copying all that data takes time and impacts the performance of production environments. Whether the solution uses agents in the operating system or snapshots of the virtual machines, the data is read directly from the production systems and sent across the network; at best the VMs are sluggish, at worst they’re temporarily unusable. Every IT support engineer knows exactly what to check when users complain about “slow” systems on Monday morning.

Complexity

Scheduling backups is often a resource-intensive task that requires ensuring, among other things, that backup jobs don’t interfere with each other and that database maintenance jobs don’t affect the run time of your backups.

To avoid this performance impact and keep backup windows as short as possible, traditional backup vendors have introduced distributed components to handle the data being transferred (e.g., backup proxies, media agents). The larger the environment, the more of these systems it needs, and vendors often recommend running them on dedicated physical servers. Managing and sizing the backup infrastructure becomes a complex process that requires dedicated specialists within the IT team.

Granularity

Due to the periodic nature of backups, IT teams are unable to meet the requirement for more granularity. And because of the performance impact on production systems, backups can’t be taken multiple times a day, hence the prevalence of the daily backup. This means that when data needs to be recovered, the last available copy can be 24 or more hours old, and any changes made since then are lost entirely.

Inconsistent Recovery

In today’s IT environment, applications do not consist of a single VM; instead they are spread across multiple VMs with different roles. Most of the time those applications also depend on other applications, creating complex application chains. Successful recovery of an entire application chain depends on how consistently you can recover the individual VMs.

For example, with traditional backup technology, jobs that start at 11 p.m. and finish at 4 a.m. can leave up to five hours of difference between individual VMs. Inconsistencies like this make application recovery troublesome, complex, and time-consuming; this is why backup recovery time objectives (RTOs), meaning how long it takes to get back up and running, are so lengthy.

Backup Trends

As data protection is such a vital component of every datacenter, the list of products that support a datacenter strategy can be endless. Let’s focus on one of the biggest trends today: hyperconverged backup.

Hyperconverged backup consolidates compute resources, storage, and backup software into a purpose-built hardware appliance with a scale-out architecture. By combining all these resources and features into a single solution, and adding an easy-to-use interface to manage and schedule your backups, these appliances solve many of the complexities of running more traditional build-your-own backup solutions.

But does the hyperconverged backup model address the requirement for more granular recovery? It successfully reduces complexity in the backup architecture, but it still uses the same underlying technology to protect the data; that is, it still periodically copies data from the production systems to a secondary storage target.

Continuous Backup: The Future

To ensure granularity without impacting production performance, the future of backup is the move from periodic backup to continuous backup.

Continuous Replication

With continuous data replication, you can deliver recovery point objectives (RPOs) of seconds by replicating every change as it is generated, in real time. Backup should also rely on a scale-out replication architecture that lets you protect environments with thousands of VMs. All operations should be performed with zero performance impact on the production environment, so the user experience is never interrupted.
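
To make the idea concrete, here is a minimal Python sketch of journal-based continuous replication. It is illustrative only; the `Write` record and `Journal` class are hypothetical names for this sketch, not any vendor’s API:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Write:
    """One block write captured from a protected VM as it happens."""
    vm: str          # which VM issued the write
    offset: int      # block offset on the virtual disk
    data: bytes      # the written bytes
    ts: float = field(default_factory=time.time)  # capture timestamp

class Journal:
    """Append-only journal of every production write. Because changes
    are mirrored as they occur, the worst-case data loss (RPO) is
    measured in seconds rather than the hours of a nightly backup."""
    def __init__(self) -> None:
        self.entries: list[Write] = []

    def record(self, write: Write) -> None:
        # The append happens on the replica side, so the production
        # VM carries no extra backup I/O.
        self.entries.append(write)
```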

Granularity in seconds

All those replicated changes need to be stored in a journal, which not only lets you recover to the latest point in time but also offers granularity of seconds, so you can safely rewind to any point in the past, even up to 30 days ago. Recover files, applications, VMs, or even entire datacenters by simply pressing a virtual “rewind” button. Most recovery use cases that require granular recovery, such as file deletions, database corruption, or ransomware, only require short-term retention.
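
Continuing the sketch above, second-level granularity falls out of the journal naturally: to rewind, replay every journaled write up to the requested timestamp. Real implementations work from a base image plus redo/undo logs rather than a full replay; this only shows the principle:

```python
def rewind(journal: Journal, vm: str, point_in_time: float) -> dict[int, bytes]:
    """Rebuild a VM's disk as of `point_in_time` by replaying every
    journaled write at or before that timestamp."""
    disk: dict[int, bytes] = {}
    for w in journal.entries:
        if w.vm == vm and w.ts <= point_in_time:
            disk[w.offset] = w.data  # later writes overwrite earlier ones
    return disk
```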

Application Consistency

To avoid inconsistent recovery of multi-VM applications, they need to be protected as a cohesive, logical entity. When recovery points are created, all the VMs should share the exact same recovery point, so that when the application is recovered, every VM that comprises it spins up from that same cross-application recovery point. This should be guaranteed no matter where the VMs are located within the infrastructure.
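
In terms of the sketch, this amounts to writing one shared checkpoint marker for every VM in an application; the `ConsistencyGroup` and `Checkpoint` names are, again, hypothetical:

```python
@dataclass
class Checkpoint:
    """A cross-VM marker: one timestamp shared by every VM in the app."""
    name: str
    ts: float

class ConsistencyGroup:
    """Groups the VMs of one application so they are protected, and
    recovered, as a single logical entity."""
    def __init__(self, journal: Journal, vms: list[str]) -> None:
        self.journal = journal
        self.vms = vms
        self.checkpoints: list[Checkpoint] = []

    def mark(self, name: str) -> Checkpoint:
        # Every VM shares the same timestamp, so recovering the whole
        # application from this checkpoint is internally consistent.
        cp = Checkpoint(name, time.time())
        self.checkpoints.append(cp)
        return cp
```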

Continuous Data Protection

Combining always-on replication with granular, journal-based recovery truly enables continuous data protection (CDP) and lets you move away from the periodic point-in-time copies used in traditional backup technology. For example, if an outage occurs at 17:26, CDP lets you restore the data as it was at 17:25. With a periodic backup whose last snapshot ran at 12:30, every write from the intervening five hours would be permanently lost.
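
Using the sketch from above, that 17:26 outage plays out like this (the VM names and date are made up for illustration):

```python
from datetime import datetime

journal = Journal()
app = ConsistencyGroup(journal, vms=["web01", "app01", "db01"])
# ... writes stream into the journal all day ...

# Outage detected at 17:26: rewind the whole application to 17:25.
point = datetime(2024, 3, 1, 17, 25).timestamp()
recovered = {vm: rewind(journal, vm, point) for vm in app.vms}
# With a 12:30 periodic snapshot instead, every write from the
# intervening five hours would be gone.
```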

Long Term Retention

Besides offering flexible options for short-term (up to 30 days) recovery scenarios, businesses may also have compliance requirements to store data for longer than 30 days. Long-term retention data has different storage and recovery-time requirements, but it needs to be an integral part of your data protection platform. As with short-term backups, copies shouldn’t come directly from production systems, as this impacts performance and disrupts the user experience. Instead, a technology that builds on the data already protected by CDP and stored in the journal can offload point-in-time copies to secondary storage targets as often as you want.
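
In terms of the sketch, a long-term retention copy is materialized from the journal itself rather than from production; a real platform would write to object storage or tape instead of the in-memory structures used here:

```python
def offload(journal: Journal, group: ConsistencyGroup,
            checkpoint: Checkpoint) -> dict[str, dict[int, bytes]]:
    """Build a retention copy of every VM in the group as of the
    checkpoint, reading only from the journal -- production systems
    are never touched."""
    return {vm: rewind(journal, vm, checkpoint.ts) for vm in group.vms}

# e.g. take a weekly copy for compliance:
weekly = offload(journal, app, app.mark("weekly-2024-03-01"))
```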

Continuous data replication, with no impact on performance: that is the future of backup.


About Plow Networks

Headquartered in Brentwood, Tennessee, Plow Networks is a Total Service Provider (TSP) with several distinct business practices that, when consumed together, offer our clients a unique, best-in-class experience. We give organizations peace of mind, valuable time back and the economies of scale that come with having one technology partner that is focused on exceeding their expectations with every engagement.

Contact

Plow Networks
(615) 224-8735
marketing@plow.net

*This information is brought to you by our data backup and disaster recovery partner, Zerto.
