I’ve pushed a proof-of-concept patch for dump archives to Commons Compress. There’s still a lot of work before it’s ready for primetime – I need to create a set of test dump tapes to exercise all of the functionality… but it’s good enough to extract from a 1 GB compressed dump file without any sentinels being tripped.
From June 2011:
The most well-known archive formats on Unix/Linux systems are tar, zip, and (increasingly) 7-zip. There is another format that has been around for decades but few people other than sysadmins seem to know about it – dump. It was historically written to tape (hence “dump tapes”) but it can also be written to CD-R and DVD-R media.
What the heck are ‘inodes’?
Unix filesystems, or more properly those descended from the Fast Filesystem (FFS), do not use filenames per se. Instead they maintain all information about each file in an inode. This structure contains the file’s ownership, permissions, timestamps, and list of blocks allocated to the file. Each inode is uniquely identified by the inode number, often abbreviated to ino.
A directory is a file that contains dirent information. In fact in ancient days you could open directories like files and read the dirent information directly. (Yes, I have done this.) These are simple records containing the file’s name, inode number, and type (file, directory, symbolic link, etc.).
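If it helps to see the split in code, here is a rough model of the two structures. The field names are mine and are only for illustration; they are not the on-disk layout used by FFS or by dump.

```java
// Illustrative only: a simplified model of the inode/dirent split.
// Field names are mine, not the on-disk layout.
class Inode {
    long ino;                  // unique inode number
    int mode;                  // type and permission bits
    int uid, gid;              // ownership
    long size;
    long atime, mtime, ctime;  // timestamps
    long[] blocks;             // blocks allocated to the file
}

class Dirent {
    String name;               // the filename lives here, not in the inode
    long ino;                  // points at the inode
    byte type;                 // file, directory, symlink, ...
}
```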
One of the consequences of this is that you can create multiple directory entries that point to the same physical file. These are known as hard links, in contrast to the symbolic links that map one filename to a second filename instead of mapping a filename to an inode number.
N.B., you cannot create a hard link to a directory. You can only create hard links to files and special devices.
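You can see the difference from Java with java.nio.file: a hard link shares the original’s inode, while a symbolic link is just a name that points at another name. The file names below are placeholders.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

public class LinkDemo {
    public static void main(String[] args) throws Exception {
        Path original = Paths.get("original.txt");   // placeholder file names
        Files.write(original, "hello".getBytes());

        // A hard link: a second dirent pointing at the same inode.
        Path hard = Paths.get("hard-link.txt");
        Files.createLink(hard, original);

        // A symbolic link: a filename that maps to another filename.
        Path sym = Paths.get("sym-link.txt");
        Files.createSymbolicLink(sym, original);

        // On Unix filesystems fileKey() exposes the (device, inode) pair,
        // so the hard link reports the same key as the original file.
        Object k1 = Files.readAttributes(original, BasicFileAttributes.class).fileKey();
        Object k2 = Files.readAttributes(hard, BasicFileAttributes.class).fileKey();
        System.out.println("same inode? " + k1.equals(k2));
    }
}
```

Delete the original and the hard link still works; the symbolic link is left dangling.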
The benefits of ‘dump’
The defining characteristic of dump (and something that greatly affects its portability) is that it accesses the filesystem directly – it does not go through the operating system abstraction. This means that dump files can provide an unmatched level of fidelity. The specific concerns are:
- sparse files. Unix filesystems allow you to create a file with unallocated holes in it. This allows applications to make much more efficient use of space, e.g., an application can keep a hash table in a file without paying the penalty of allocating blocks it never writes. This might sound like an abstract concern, but the odds are that several of the caches in your home directory are sparse files. The tar format supports sparse files but many tar implementations do not (see the sketch after this list).
- hard links. See above.
- extended attributes. Unix has long had a standard set of attributes for files and directories – owner and group, read/write/execute permissions for user (owner), group and other, timestamps, etc. A fair number of new attributes have been added since then, e.g., immutable (which prevents anyone, including root, from modifying a file) or secure delete (which tells the OS to overwrite the contents of the file before releasing the blocks back to free space).
- metadata. These are Access Control Lists (ACLs), SELinux labels, etc. Most individuals don’t need this but when you need it, you really need it.
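If you want to see a sparse file for yourself, seeking past the end of a file and writing a single byte will create one on most Unix filesystems. Whether the hole is actually left unallocated depends on the filesystem, and the file name here is a placeholder.

```java
import java.io.RandomAccessFile;

public class SparseDemo {
    public static void main(String[] args) throws Exception {
        // Write one byte 100 MB into an otherwise empty file. On most Unix
        // filesystems the skipped range is never allocated, so the apparent
        // size (ls -l) is ~100 MB while the disk usage (du) is tiny.
        try (RandomAccessFile raf = new RandomAccessFile("sparse.dat", "rw")) {
            raf.seek(100L * 1024 * 1024);
            raf.write(0x42);
        }
    }
}
```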
None of these features is readily usable unless you’re using a filesystem-specific extraction program, but this fidelity is why dump tapes haven’t been replaced entirely by tarballs.
A final benefit is that dump supports native compression. You do not need to perform a second compression step as with tar. It may support native encryption in the future.
The top-level format
Dump, like tar, was originally intended to be used with tape media, so it uses a fixed-size block format and supports multiple volumes in a single run. It consists of a series of segments, each starting with a single-block segment header followed by zero or more data blocks. The header contains a 512-byte map indicating whether each of those blocks is real (and present on the tape) or a sparse hole, so the maximum size of a segment is 513 blocks: the header plus 512 data blocks.
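A minimal sketch of that arithmetic, assuming the 512-byte map has already been pulled out of the header (I’m deliberately not reproducing the header layout here). A nonzero byte means the corresponding data block was actually written to the tape.

```java
// Sketch only: 'map' is assumed to be the 512-byte block map
// taken from a segment header.
class SegmentMath {
    static int segmentBlockCount(byte[] map) {
        int dataBlocks = 0;
        for (byte b : map) {
            if (b != 0) {
                dataBlocks++;      // real block, present on the tape
            }
        }
        // one header block plus at most 512 data blocks = at most 513 blocks
        return 1 + dataBlocks;
    }
}
```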
Each tape segment header contains information that identifies the dump – a label, a timestamp, the name of the host, the raw block device, and more.
The segments, in order, are:
- TS_HEAD – start of volume marker. This is a single block indicating the start of a tape volume.
- TS_CLRI – deleted inodes map. Incremental dumps can capture that a file has been deleted.
- TS_BITS – allocation map. (Still not sure what this does.)
- TS_INODE – the data.
- TS_END – end of volume marker.
In addition the middle three segments can contain TS_ADDR blocks if more than one segment is required.
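In a Java client the segment types map naturally onto an enum. The numeric codes below are the ones I remember from dumprestore.h; treat them as an assumption and verify them against the header file before relying on them.

```java
// Sketch of the segment types as an enum. The numeric codes are an
// assumption (from memory of dumprestore.h); verify before use.
enum SegmentType {
    TS_HEAD(1),    // start of volume (TS_TAPE in dumprestore.h)
    TS_INODE(2),   // a file or directory plus its data
    TS_BITS(3),    // allocation map
    TS_ADDR(4),    // continuation of the previous segment
    TS_END(5),     // end of volume
    TS_CLRI(6);    // map of inodes deleted since the last dump

    final int code;
    SegmentType(int code) { this.code = code; }

    static SegmentType fromCode(int code) {
        for (SegmentType t : values()) {
            if (t.code == code) return t;
        }
        throw new IllegalArgumentException("unknown segment type " + code);
    }
}
```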
The data itself contains all directories first, then everything else (regular files, symbolic links, block and character devices and fifos/pipes). As mentioned above the files may contain unallocated blocks. The reader should treat these as zeroed blocks if it doesn’t understand sparse files.
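Here is roughly what that looks like in a naive reader, assuming 1 KiB records (dump’s TP_BSIZE) and a block map taken from the segment header. Both are assumptions about the layout rather than tested code.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch only: copy one segment's data blocks, writing zeros for the holes.
class SegmentCopy {
    private static final int BLOCK_SIZE = 1024;   // dump's TP_BSIZE, as I understand it

    static void copySegmentData(byte[] map, InputStream tape, OutputStream out)
            throws IOException {
        byte[] block = new byte[BLOCK_SIZE];
        byte[] zeros = new byte[BLOCK_SIZE];      // stays all zero
        for (byte flag : map) {
            if (flag != 0) {
                // real block: read it from the archive and pass it through
                int n = 0;
                while (n < BLOCK_SIZE) {
                    int r = tape.read(block, n, BLOCK_SIZE - n);
                    if (r < 0) {
                        throw new IOException("unexpected end of archive");
                    }
                    n += r;
                }
                out.write(block);
            } else {
                // hole: nothing is stored on tape, so a naive reader writes zeros
                out.write(zeros);
            }
        }
    }
}
```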
The directory information consists of the inode header (providing ownership and permissions) followed by dirent records. Regular files consist of the inode header followed by the data.
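As far as I can tell the dirent records are the 4.4BSD-style variable-length records. Here is a sketch of walking them; the layout (u32 ino, u16 reclen, u8 type, u8 namelen, then the name) and the little-endian byte order are explicit assumptions, since the byte order really depends on the host that wrote the dump.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Sketch only: walk a buffer of 4.4BSD-style dirent records.
class DirentReader {
    static List<String> readNames(byte[] data) {
        List<String> names = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN);
        while (buf.remaining() >= 8) {
            int start = buf.position();
            long ino = buf.getInt() & 0xFFFFFFFFL;
            int reclen = buf.getShort() & 0xFFFF;
            int type = buf.get() & 0xFF;          // file, directory, symlink, ...
            int namelen = buf.get() & 0xFF;
            if (reclen < 8 + namelen || start + reclen > buf.limit()) {
                break;                            // corrupt or end of data
            }
            byte[] name = new byte[namelen];
            buf.get(name);
            if (ino != 0) {                       // ino 0 marks a deleted entry
                names.add(new String(name, StandardCharsets.US_ASCII));
            }
            buf.position(start + reclen);         // records are padded to reclen
        }
        return names;
    }
}
```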
As you might fear there is a risk that a directory inode may be seen before its name is known. In theory this shouldn’t happen but I’ve seen it in real files.
How do I read it?
In most cases you will want to use the companion restore program.
If you need a Java client I recommend checking out the Apache Commons Compress library. I will be donating a Java client to that project soon, although I have no idea if it will be accepted for the 1.2 release or when that will happen. This library includes classes to read zip files (including functionality that is not provided by java.util.zip), tar files, cpio files, and ar (Unix library) files.
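For example, listing a tar archive with the factory API looks like this. I would expect a dump reader to plug into the same ArchiveInputStream pattern if it is accepted, but that is my assumption, not a released API.

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.commons.compress.archivers.ArchiveEntry;
import org.apache.commons.compress.archivers.ArchiveInputStream;
import org.apache.commons.compress.archivers.ArchiveStreamFactory;

public class ListArchive {
    public static void main(String[] args) throws Exception {
        // List the entries of a tar file using the generic factory API.
        try (InputStream in = new BufferedInputStream(new FileInputStream(args[0]))) {
            ArchiveInputStream archive =
                new ArchiveStreamFactory().createArchiveInputStream("tar", in);
            ArchiveEntry entry;
            while ((entry = archive.getNextEntry()) != null) {
                System.out.println(entry.getName() + "  " + entry.getSize());
            }
        }
    }
}
```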