s3bdbk – Initial Release

I am announcing the first release of s3bdbk, a simple script to do incremental backups of block devices into the Amazon S3 storage cloud. For now, the code is available only on GitHub under an MIT license:

And now for a little motivation:

In our virtualized environments at work, it is much more convenient to back up at the disk-image level. In the event of a catastrophic failure where an offsite restore is required, I don’t really want to have to spend time installing a new OS image and getting it configured well enough to pull in the restored data. Our offsite backup needs are also relatively simple: nearly all of our critical data is already consolidated onto one or two virtual images.

Existing solutions all seem to ignore the fact that disk images are so easy to work with in a virtualized environment. However, no existing tool makes it easy or practical to get an entire image offsite:

  • It is difficult to work with large images in S3:

    This has improved lately with multipart uploads, but many off-the-shelf tools you might want to use in an emergency won’t necessarily support the feature.

  • Incremental backups of block devices are harder:

    With a filesystem, tools like rsync can use file metadata to narrow in on which subsets have changed without having to read the entire filesystem. Granted, with some extra patches you can point rsync at a block device, but then it only supports an identical block device as the remote target.

  • Naive backup rotation is very expensive:

    Simple incremental backups work at the file level, which clearly is not sufficient for a block device. Something like rdiff-backup stores reverse binary diffs, but it is somewhat brittle, makes managing older backups hard, and doesn’t work on block devices.

s3bdbk solves these problems by using a simple block format written to S3 (or a local directory). A backup proceeds as follows (a rough code sketch appears after the list):

  1. The block device is chopped into chunks (currently 32 MB each).
  2. Each chunk is hashed to a checksum, and a canonical name is created from the backup name, the block number, and this hash.
  3. If the canonical block name already exists in S3, the chunk is already done; otherwise we upload it.
  4. Once all chunks are uploaded or already present, we write a “manifest” to S3, listing in order all the blocks in this version.
  5. As a nicety, we store a “current” file which points to the most recent manifest.
  6. Finally, if a limit on the number of incremental backups is specified, a weighted random purge is run to remove older backups while maintaining probabilistic spacing.
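
In Python (using boto3), the core of steps 1–5 looks roughly like this. Treat it as an illustrative sketch only: the key layout, the choice of SHA-1, and the helper name backup_device are stand-ins here, not necessarily the script’s actual format:

    import hashlib
    import time

    import boto3

    CHUNK_SIZE = 32 * 1024 * 1024  # 32 MB chunks, per step 1

    def backup_device(device_path, bucket_name, backup_name):
        bucket = boto3.resource("s3").Bucket(bucket_name)
        manifest = []
        with open(device_path, "rb") as dev:
            block_num = 0
            while True:
                chunk = dev.read(CHUNK_SIZE)
                if not chunk:
                    break
                # Canonical name: backup name + block number + content hash.
                digest = hashlib.sha1(chunk).hexdigest()
                key = "%s/blocks/%08d-%s" % (backup_name, block_num, digest)
                # Skip the upload if this exact block is already stored.
                if not list(bucket.objects.filter(Prefix=key)):
                    bucket.put_object(Key=key, Body=chunk)
                manifest.append(key)
                block_num += 1
        # Write the ordered block list, then point "current" at it.
        manifest_key = "%s/manifests/%d" % (backup_name, int(time.time()))
        bucket.put_object(Key=manifest_key, Body="\n".join(manifest).encode())
        bucket.put_object(Key="%s/current" % backup_name,
                          Body=manifest_key.encode())

Because unchanged blocks hash to the same canonical name, an incremental backup uploads only the chunks that differ from some earlier version, and a restore just reads the manifest and concatenates its blocks in order.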
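
The purge in step 6 deserves a word. One way to get probabilistic spacing (the actual weighting in s3bdbk may differ) is to delete a randomly chosen manifest with weight inversely proportional to the time gap its removal would merge, so that closely spaced, typically recent, backups get thinned first:

    import random

    def purge_manifests(times, limit):
        # times: sorted, distinct manifest timestamps;
        # returns the timestamps chosen for deletion.
        doomed = []
        times = list(times)
        while len(times) > max(limit, 2):
            candidates = times[1:-1]  # always keep the oldest and newest
            # Weight each candidate by the inverse of the span that
            # deleting it would merge, favoring densely packed backups.
            weights = [1.0 / (times[i + 2] - times[i])
                       for i in range(len(candidates))]
            victim = random.choices(candidates, weights=weights)[0]
            times.remove(victim)
            doomed.append(victim)
        return doomed

Run after each backup, a scheme like this tends to leave the surviving versions roughly evenly spread over the backup history rather than clustered at the recent end.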