Hey Ryanza, in principle I agree. It seems like RFS is not killing data, so something isn't understood here, including the "fix". However, shouldn't the journal always get cleared on bootup? Isn't that the whole point of a journal.. that at bootup it's replayed to repair failed writes and can then be zeroed out? The journal's corrections should not become a permanent part of the files. For example, you can delete the journal on a clean ext3 drive and you're left with a plain ext2 filesystem. Am I wrong?
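To illustrate that ext3/ext2 point: ext3's journal is tracked by the has_journal feature flag, and tune2fs can strip it from a clean, unmounted filesystem. A quick sketch using a scratch loopback image, assuming e2fsprogs (mkfs.ext3, tune2fs, dumpe2fs) is installed; the /tmp path is just for illustration:

```shell
# Create a small ext3 filesystem inside a regular file (no root needed with -F).
dd if=/dev/zero of=/tmp/ext3demo.img bs=1M count=8 2>/dev/null
mkfs.ext3 -F -q /tmp/ext3demo.img

# The journal shows up as the "has_journal" feature flag.
dumpe2fs -h /tmp/ext3demo.img 2>/dev/null | grep -i features

# Strip the journal; what's left is a plain ext2 filesystem.
tune2fs -O ^has_journal /tmp/ext3demo.img

# "has_journal" is now gone from the feature list.
dumpe2fs -h /tmp/ext3demo.img 2>/dev/null | grep -i features
```

On a real disk you'd point this at the device node instead of an image file, but the mechanics are the same: the journal is an add-on, not something baked into the file data.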
edit: PS I'm not saying you're wrong, or that it's that simple; just that if RFS is working in a reasonable way, the journal shouldn't be slowly building up and making it slower and slower.
Nope.. because of how FAT works, you can't easily add things like symlinks and permissions. So for every file you have, RFS creates a hidden metadata file and puts the journal, links, permissions and everything else in there. It never appears to get cleared (though I don't know enough to guarantee that). The whole method RFS uses for metadata is going to be slow because of the parsing required.
Doing a FAT32 disk check sometimes seems to wipe out these metadata files, which is what causes the problems (files lose their permissions and so on).
So basically, RFS DOES definitely require some kind of disk check to be run. Unfortunately, a standard FAT32 disk check doesn't appear to be compatible with it.
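If you want to inspect a FAT32 volume without letting the checker modify anything (and so without risking those hidden metadata files), dosfstools' fsck.vfat has a read-only mode. A sketch against a throwaway image file — assuming dosfstools and coreutils are installed; the path and size are just for illustration, not anything RFS-specific:

```shell
# Build a throwaway FAT32 image (FAT32 needs roughly 33MB+ worth of clusters).
truncate -s 40M /tmp/fat32demo.img
mkfs.vfat -F 32 /tmp/fat32demo.img

# -n = no-op mode: report any problems but never write to the filesystem.
fsck.vfat -n /tmp/fat32demo.img
```

On a real device you'd point this at the actual partition, but the point stands: -n only reports, so it can't delete or rewrite anything, unlike a normal repair pass.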
To sum up, RFS is bad.