There are neither warranties nor guarantees that freedup works correctly. In principle freedup only knows about linking, so the maximum risk is that it links files that differ. Many precautions were taken during development, but I have to emphasize that this risk exists. Only in interactive mode may you also delete files, in a two-step process. If you detect any possible source of misbehaviour in freedup, please report it for the sake of all users.
In principle freedup always searches for files of identical size and compares them byte by byte. The only exception is the "extra styles", where tags (see the next chapter for details) are intentionally skipped. Before files are compared byte by byte you may apply restrictions, such as being owned by the same user or group, having the same permissions, or whatever else the options allow. Files that match in content and fulfil all required prerequisites are linked in the demanded way.
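The core check described above can be sketched as follows. This is an illustrative stand-in, not freedup's actual code: two files that already passed the size test are read in 4k blocks (freedup's predefined block size) and compared with `memcmp` until the first difference.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define BLOCK 4096  /* freedup reads and compares in 4k blocks */

/* Compare two files of equal size block by block.
 * Returns 1 if identical, 0 if they differ, -1 on I/O error. */
int files_identical(const char *a, const char *b)
{
    FILE *fa = fopen(a, "rb"), *fb = fopen(b, "rb");
    unsigned char ba[BLOCK], bb[BLOCK];
    int result = 1;

    if (!fa || !fb) {
        if (fa) fclose(fa);
        if (fb) fclose(fb);
        return -1;
    }
    for (;;) {
        size_t na = fread(ba, 1, BLOCK, fa);
        size_t nb = fread(bb, 1, BLOCK, fb);
        if (na != nb || memcmp(ba, bb, na) != 0) {
            result = 0;   /* first differing block: stop early */
            break;
        }
        if (na < BLOCK)
            break;        /* short read: end of both files */
    }
    fclose(fa);
    fclose(fb);
    return result;
}
```

The early exit on the first differing block is what makes the comparison cheap for files that differ near the start.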
For more details please have a look into the source code or ask the author.
This concept was introduced in version 1.1 because I wanted files to be linked although they differed: mp3 files whose tags showed minor variations. At first I considered retagging all files, but I would have had to either remove all tags or complete all of them (n.b. MP3v1 tags sit at the end of an mp3 file, MP3v2 tags at the beginning; both are optional).
The extra style should therefore compare only the essential file content, i.e. the MPEG-encoded sound data in the case of mp3 files. Currently the following rules are established:
Please note that these styles change the behaviour according to the file contents. They change the size of the compared contents, but this does not affect the options that apply to the files themselves, such as ownership or file names.
If you would like to contribute, this is quite simple. There is a source file pair for each style. Start with a copy of my.c and my.h. Rename the functions, fill in your way of determining the irrelevant bytes at the start and the trailing ones, as well as a way to find the size and the magic. Add a matching line to the extra[] table in auto.c, compile, test, and submit it to me.
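As an illustration of what such a style has to compute, here is a hedged sketch for the mp3 case. The function names `my_head_skip` and `my_tail_skip` are hypothetical; the real interface in my.c/my.h may differ. The ID3 layout facts used are standard: an ID3v2 tag starts with "ID3" followed by a 10-byte header whose last 4 bytes encode the tag size as a synchsafe integer (7 bits per byte), and an ID3v1 tag is 128 trailing bytes beginning with "TAG".

```c
#include <assert.h>
#include <stddef.h>

/* How many bytes to skip at the start of the buffer:
 * the ID3v2 header (10 bytes) plus the tag body, if present. */
size_t my_head_skip(const unsigned char *buf, size_t len)
{
    if (len >= 10 && buf[0] == 'I' && buf[1] == 'D' && buf[2] == '3') {
        /* synchsafe integer: only the low 7 bits of each byte count */
        size_t tagsize = ((size_t)(buf[6] & 0x7f) << 21) |
                         ((size_t)(buf[7] & 0x7f) << 14) |
                         ((size_t)(buf[8] & 0x7f) << 7)  |
                          (size_t)(buf[9] & 0x7f);
        return 10 + tagsize;
    }
    return 0;  /* no ID3v2 tag: compare from byte 0 */
}

/* How many bytes to skip at the end of the buffer:
 * the 128-byte ID3v1 tag, if present. */
size_t my_tail_skip(const unsigned char *buf, size_t len)
{
    if (len >= 128 && buf[len - 128] == 'T' &&
        buf[len - 127] == 'A' && buf[len - 126] == 'G')
        return 128;
    return 0;
}
```

With these two offsets the comparison only covers the sound data in between, which is exactly what the mp3 extra style is meant to achieve.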
Hash functions should speed up freedup, since they avoid comparing files that have been scanned before (and might differ only in the last bytes). But freedup is slowed down if files of the same size differ early; then you should switch the hash function off, which is now the default. If most files of the same size are likely to be identical (more than just two), it probably pays to switch hash functions on. There is an internal hash function that allows some interesting speed enhancements (see below). External hash functions are kept, since they are useful for checking the internal one for correctness.
The new algorithm (introduced in version 1.3-1) records hash sums on the fly and is, in the worst case and depending on the CPU, half as fast as running without hash functions. While a file is read, the hash is calculated until the comparison fails. The hash context is then stored until the next comparison takes place; if that one fails at a later block, the hash calculation continues where it stopped earlier. Since reading and comparing files works on data blocks (4k by default), the hash values can sometimes be completed even though the comparison fails.
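The stored-context idea can be sketched like this. The struct and FNV-1a hash below are illustrative stand-ins, not freedup's actual hash function: the point is only that the state survives between comparisons, so hashing resumes at the block where it stopped instead of rereading the file.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Per-file hash context, kept between comparisons. */
typedef struct {
    uint64_t state;   /* running hash value */
    size_t   hashed;  /* number of bytes already folded in */
} hash_ctx;

void hash_init(hash_ctx *c)
{
    c->state = 14695981039346656037ULL;  /* FNV-1a offset basis */
    c->hashed = 0;
}

/* Fold one data block into the context, e.g. each 4k block as it
 * is read for the byte-by-byte comparison. Resuming later with the
 * next block yields the same hash as one uninterrupted pass. */
void hash_update(hash_ctx *c, const unsigned char *block, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++) {
        c->state ^= block[i];
        c->state *= 1099511628211ULL;    /* FNV-1a prime */
    }
    c->hashed += n;
}
```

Because the context records how far hashing got, a later comparison of the same file against a different candidate can continue from `hashed` bytes onward for free.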
`time ./freedup -x mp3 --hash ? -ni /testdir`

7856 files; 1 match; average file size 46 MB; 50% smaller than 4k; 2900 BogoMIPS; 2852 classic hash sums to avoid 3411 byte-by-byte comparisons.
| hash support         | Parameter | Real Time | User Time | Sys Time  |
|----------------------|-----------|-----------|-----------|-----------|
| without hash support | --hash 0  | 2m04.646s | 0m00.599s | 0m03.455s |
| with classic hashsum | --hash 1  | 5m31.221s | 2m21.914s | 0m16.303s |
| with advanced hash   | --hash 2  | 1m59.720s | 0m06.006s | 0m03.515s |
`time ./freedup --hash ? -n /mp3dir`

7919 files; 0 matches; average file size around 4.5 MB; 1400 BogoMIPS; 4502 classic hash sums to avoid 3819 byte-by-byte comparisons.
| hash support         | Parameter | Real Time  | User Time  | Sys Time  |
|----------------------|-----------|------------|------------|-----------|
| without hash support | --hash 0  | 5m21.690s  | 0m15.130s  | 0m25.560s |
| with classic hashsum | --hash 1  | 45m14.048s | 36m33.470s | 2m29.380s |
| with advanced hash   | --hash 2  | 10m01.311s | 6m28.610s  | 0m28.150s |
`time ./freedup --hash ? -x mp3 -n /mp3dir`

7919 files; 456 duplicates; average file size around 4.5 MB; 1400 BogoMIPS; 4524 classic hash sums to avoid 3425 byte-by-byte comparisons.
| hash support         | Parameter | Real Time  | User Time  | Sys Time  |
|----------------------|-----------|------------|------------|-----------|
| without hash support | --hash 0  | 6m48.276s  | 0m18.590s  | 0m28.600s |
| with classic hashsum | --hash 1  | 49m35.108s | 37m06.450s | 2m47.400s |
| with advanced hash   | --hash 2  | 12m33.688s | 6m51.530s  | 0m31.090s |
As a consequence of these results, the advantage of hash functions is not obvious in most environments. I assume there are situations where many files have the same size and quite similar contents; there one should switch hash function usage to the advanced mode. But since I do not intend to rely on hash results without a byte-by-byte comparison, I changed the default to off as of freedup 1.3-2.