Originally Posted by diddsen
can you please explain the following tweak a little bit? I know the thread for this, but there's not much info about it:
Reduce flash writes by disabling keeping track of last access time
What is better, enable or disable?
Short: It is best to enable this tweak ( = make sure the checkbox is checked). This is one of the most important disk speed tweaks for "real life I/O performance" in all of Linux and its derivatives, and it is well documented.
"atime" stands for access time. Linux based systems keep track of when a file was last read. This means that every time you access the file, the system writes to disk the current timestamp, the last-access-time of the file. You probably know that systems use buffering, i.e., if you read a file a lot, the entire file may actually remain in RAM, so when you read it again, you read it from the much faster RAM then the slow disk - a good thing! With "atime" enabled (the original Linux default) this means that even when reading the file from RAM where it's cached/buffered by the operating system, the system will still have to write the current timestamp to disk. As you will probably be aware, writing is (usually) slower than reading, and writing will slow down other processes reading from the same disk. The "atime" (atime enabled) setting causes a LOT of (mostly) useless disk access (and write, not even read!), and as you are probably well aware, the number one bottleneck in performance computing is disk access.
Enter "relatime", relative atime, relative access time. This is the default setting on many modern systems, including the Galaxy S. It wasn't long before people realised that doing a write on every read is pretty stupid from a performance perspective. Relatime was invented. The logic is that most software that actually cares about the last access time, only wants to know if the file was accessed since it was last created/modified. So relatime only saves the current timestamp if the stored access time timestamp isn't already newer than the time the file was last created/modified.
Last but not least, there is "noatime": no atime, no access time. This is the option CF-Root sets when you enable this tweak (if you do not enable it, "relatime" is used). No last-access (read) time information is stored at all. Very few programs actually care about last access time at the filesystem level, and the chances you are running one are very, very slim. Most programs that do really care (and have a good reason to) are "high security" tools and such. You might think: "but what about media players and such, don't they keep track of when I last played a file?" Yes, indeed they do, but they do so "manually" in their own databases, as depending on the last access time stored by the filesystem is rarely a good idea from a developer's perspective (again, unless you are writing "high security" tools). So what the "noatime" setting does is say to the system: "Hey you, you know what? I really don't care when this file was last read. I don't want my performance to suffer constantly by storing information I will never actually need! A read is a read, a write is a write, and I don't want you to write every time I want to read!" It's the ultimate last-access-time optimization - we don't do last access time at all!
Note that I call this a real-world optimization tweak, as you will very rarely see it make a difference in synthetic benchmarks, due to their nature and how most of them actually test disk performance. Also, while it is no doubt a good optimization to have, it makes nowhere near the performance difference on these flash disks that it does on "spinning" disks, due to the latency gap between the two.
Originally Posted by paratox
by enabling stagefright and disabling the I/O no-atime setting. but this will break some video codecs and you will see no real speed improvement, only a higher Quadrant score.
Just by enabling Stagefright, you mean. Disabling the noatime tweak makes it slower; you want to turn the noatime tweak ON (the checkbox checked), not off.
Also, Stagefright (a new Google "media framework") can in fact make a real-world difference, but it depends on your usage. Whether you see a real-world difference depends on which codecs you use, and how. Samsung did a terrific job optimizing media playback by offloading it to dedicated silicon instead of using software decoding, and Samsung is the big exception here! Unfortunately it seems that (right now) not all codecs Samsung runs in hardware are compatible with Stagefright. Where you would see the difference (CPU-usage-wise) is in media (audio, video) encoding and decoding. Since Samsung is already great at this, the observed difference is likely to be small, if it exists at all, if you use Samsung's stock apps. It could certainly make a difference in both CPU and battery usage during media playback using stock AOSP software and the like.
The reason Quadrant jumps in score is that the media decoding test scores much higher with this new optimized media framework enabled. I also think it is pretty safe to say that Quadrant doesn't actually make use of Samsung's dedicated decoding silicon in these tests. As with the FPU test at the centre of the Snapdragon vs Hummingbird controversy, this specific Quadrant test returns results that account for a completely out-of-proportion part of the total score, and it is thus completely useless and irrelevant.
That being said, if no codecs were incompatible with Stagefright, I would keep it enabled at all times, just in case I (or a program I use) need to decode something covered by Stagefright but not by Samsung's hardware, for optimal performance.