
bup-damage(1)						      General Commands Manual						     bup-damage(1)

NAME
bup-damage - randomly destroy blocks of a file

SYNOPSIS
bup damage [-n count] [-s maxsize] [--percent pct] [-S seed] [--equal] <filenames...>

DESCRIPTION
Use bup damage to deliberately destroy blocks in a .pack or .idx file (from .bup/objects/pack) to test the recovery features of
bup-fsck(1) or other programs.

THIS PROGRAM IS EXTREMELY DANGEROUS AND WILL DESTROY YOUR DATA

bup damage is primarily useful for automated or manual tests of data recovery tools, to reassure yourself that the tools actually work.

OPTIONS
-n, --num=numblocks
    the number of separate blocks to damage in each file (default 10).
    Note that it's possible for more than one damaged segment to fall in
    the same bup-fsck(1) recovery block, so you might not damage as many
    recovery blocks as you expect. If this is a problem, use --equal.

-s, --size=maxblocksize
    the maximum size, in bytes, of each damaged block (default 1 unless
    --percent is specified). Note that because of the way bup-fsck(1)
    works, a multi-byte block could fall on the boundary between two
    recovery blocks, and thus damage two separate recovery blocks. In
    small files, it's also possible for a damaged block to be larger
    than a recovery block. If these issues might be a problem, you
    should use the default damage size of one byte.

--percent=maxblockpercent
    the maximum size, in percent of the original file, of each damaged
    block. If both --size and --percent are given, the maximum block
    size is the minimum of the two restrictions. You can use this to
    ensure that a given block will never damage more than one or two
    bup-fsck(1) recovery blocks.

-S, --seed=randomseed
    seed the random number generator with the given value. If you use
    this option, your tests will be repeatable, since the damaged block
    offsets, sizes, and contents will be the same every time. By
    default, the random numbers are different every time (so you can
    run tests in a loop and repeatedly test with different damage each
    time).

--equal
    instead of choosing random offsets for each damaged block, space
    the blocks equally throughout the file, starting at offset 0. If
    you also choose a correct maximum block size, this can guarantee
    that any given damaged block never damages more than one
    bup-fsck(1) recovery block. (This is also guaranteed if you use
    -s 1.)
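For instance, the options above can be combined into an evenly spaced, single-byte, repeatable damage run (an illustrative command, not
from the original page; the pack path matches the example below):

    # evenly space 10 one-byte damage blocks; a fixed seed makes the run repeatable
    bup damage --equal -n 10 -s 1 -S 42 ~/.bup/objects/pack/*.pack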
EXAMPLE
    # make a backup in case things go horribly wrong
    cp -a ~/.bup/objects/pack ~/bup-packs.bak

    # generate recovery blocks for all packs
    bup fsck -g

    # deliberately damage the packs
    bup damage -n 10 -s 1 -S 0 ~/.bup/objects/pack/*.{pack,idx}

    # recover from the damage
    bup fsck -r
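A looping variant in the same spirit (a sketch, not from the original page; it assumes the backup directory created above, and the pass
count of 5 is arbitrary):

    # restore pristine packs, damage them with a fresh random seed each
    # pass (no -S), then check that recovery still works
    for i in 1 2 3 4 5; do
        cp -a ~/bup-packs.bak/. ~/.bup/objects/pack/
        bup damage -n 10 -s 1 ~/.bup/objects/pack/*.{pack,idx}
        bup fsck -r || echo "recovery failed on pass $i"
    done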
SEE ALSO
bup-fsck(1), par2(1)

BUP
Part of the bup(1) suite.

AUTHORS
Avery Pennarun <apenwarr@gmail.com>.

Bup unknown-								     bup-damage(1)


bup-margin(1)						      General Commands Manual						     bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin

SYNOPSIS
bup margin [options...]

DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.

For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified
by its first 46 bits.

The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160
bits, that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many
more bits with far fewer objects.

If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see
if you're getting dangerously close to 160 bits.

OPTIONS
--predict
    Guess the offset into each index file where a particular object
    will appear, and report the maximum deviation of the correct answer
    from the guess. This is potentially useful for tuning an
    interpolation search algorithm.

--ignore-midx
    don't use .midx files, use only .idx files. This is only really
    useful when used with --predict.
EXAMPLE
    $ bup margin
    Reading indexes: 100.00% (1612581/1612581), done.
    40
    40 matching prefix bits
    1.94 bits per doubling
    120 bits (61.86 doublings) remaining
    4.19338e+18 times larger is possible

    Everyone on earth could have 625878182 data sets like yours, all in
    one repository, and we would expect 1 object collision.

    $ bup margin --predict
    PackIdxList: using 1 index.
    Reading indexes: 100.00% (1612581/1612581), done.
    915 of 1612581 (0.057%)
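The derived figures follow from the object count and the matching prefix bits; a rough reconstruction of the arithmetic (an illustrative
shell sketch using bc(1), not part of bup):

    # arithmetic behind the report above (illustrative only)
    objects=1612581; bits=40
    d=$(echo "l($objects)/l(2)" | bc -l)   # doublings so far: log2(objects), ~20.6
    echo "bits per doubling:   $(echo "$bits/$d" | bc -l)"                      # ~1.94
    echo "doublings remaining: $(echo "(160-$bits)*$d/$bits" | bc -l)"          # ~61.86
    echo "times larger:        $(echo "e((160-$bits)*$d/$bits*l(2))" | bc -l)"  # ~4.19e18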
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.

AUTHORS
Avery Pennarun <apenwarr@gmail.com>.

Bup unknown-								     bup-margin(1)