“It’s so logical and so simple. Fat is the backup fuel system. The role it plays in the body is that when there’s no carbohydrate around, fat will become the primary energy fuel. That’s pretty well known.” – Robert Atkins
Recently I have been working with my DB administrator to get some DR backups to our offsite location. A simple task made difficult by a few challenges: the size of the database, the bandwidth available to the remote site, and the method of transfer. We got past part of the size issue by compressing the SQL database backups, but we still had trouble getting the files to the remote site in time. So we switched from Microsoft DFSR to Cygwin and rsync. However, there was another issue at play: we could never seem to get the full backup there in time.
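For reference, native backup compression can be turned on per-backup in SQL Server 2008 and later. A minimal sketch, assuming a database named LARGEDB and an illustrative path:

```sql
-- Compress the backup as it is written (SQL Server 2008+).
-- The database name and path are placeholders for illustration.
BACKUP DATABASE LARGEDB
TO DISK = 'D:\Backups\NightlyCopy\LARGEDB.bak'
WITH COMPRESSION, STATS = 10;  -- report progress every 10%
```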
Our bottleneck at this point was the bandwidth available to transfer the files. The transfer would break, and rsync needed to checksum the file in order to resume. At 200GB into it, that process took a considerable amount of time, only to fail again.
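For anyone reproducing this setup, the rsync options that matter here are the resume-related ones. A sketch only; the host and paths are placeholders:

```shell
# Resume-friendly transfer of the nightly backup files (host/paths illustrative).
# --partial keeps a half-transferred file in place so a resume does not start
# from zero; -z compresses on the wire; -av is archive mode with verbose output.
rsync -avz --partial /cygdrive/d/Backups/NightlyCopy/ user@dr-site:/backups/nightly/
```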
That is when I got creative and suggested using SQL to span/split the backup into multiple parts, so that if the transfer broke and resumed, rsync only had to checksum a single part of the total data. You know what? It works. It has been two weeks now and our backups reach the remote site.
And here is what you can do to split up your backups:
SET STATISTICS IO ON
SET STATISTICS TIME ON

BACKUP DATABASE LARGEDB
TO DISK = 'D:\Backups\NightlyCopy\LARGEDB_01.bak'
 , DISK = 'D:\Backups\NightlyCopy\LARGEDB_02.bak'
 , DISK = 'D:\Backups\NightlyCopy\LARGEDB_03.bak'

SET STATISTICS IO OFF
SET STATISTICS TIME OFF
And now the backup is split into roughly equal parts of the original backup file size. In our production environment we split our compressed database into 64 parts, giving us about 4GB per file.
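One caveat worth knowing: a striped backup like this is a single media set, so every part must be present at the remote site before you can restore. A restore sketch under the same illustrative names and paths as above:

```sql
-- All stripes must be listed in one RESTORE; a missing part makes the
-- media set unusable. Names and paths are placeholders for illustration.
RESTORE DATABASE LARGEDB
FROM DISK = 'D:\Backups\NightlyCopy\LARGEDB_01.bak'
   , DISK = 'D:\Backups\NightlyCopy\LARGEDB_02.bak'
   , DISK = 'D:\Backups\NightlyCopy\LARGEDB_03.bak'
WITH REPLACE, STATS = 10;
```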
I hope this helps you,