Here he changed the output buffer period size from 1024 to 448, because the early implementation didn't work reliably at 1024.
Later he changed it back from 448 to 1024, saying the smaller size was no longer necessary and that a larger buffer saves more battery.
What do we know from this? Reducing the period size to 40-50% of the original still works fine, at the cost of slightly higher battery drain. And a smaller period means a smaller total buffer, which in turn means lower latency.
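To make that relationship concrete, here is a minimal sketch of my own (not code from the commit) showing how period size translates into worst-case output latency. The period count of 4 and the 44.1 kHz sample rate are assumptions for illustration only:

#include <stdio.h>

/* Total buffering of an ALSA-style ring buffer:
   buffer = period_size * period_count frames,
   latency_ms = frames * 1000 / sample_rate. */
static double buffer_latency_ms(unsigned period_size, unsigned period_count,
                                unsigned sample_rate)
{
    return (double)(period_size * period_count) * 1000.0 / sample_rate;
}

int main(void)
{
    /* 4 periods and 44.1 kHz are assumed purely for illustration. */
    printf("448-frame periods:  %.1f ms\n", buffer_latency_ms(448, 4, 44100));
    printf("1024-frame periods: %.1f ms\n", buffer_latency_ms(1024, 4, 44100));
    return 0;
}

With those assumed numbers, 448-frame periods give roughly 40.6 ms of buffering versus about 92.9 ms at 1024, so cutting the period size by more than half also cuts the worst-case output latency by more than half.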
The same goes for the Galaxy Nexus. Let me explain.
The Galaxy Nexus uses audio library code from the tuna base, so let's look at the tuna code changes that comment on the latency value.
This one is pretty obvious. The goal is lower latency, and optimizing audio performance so it runs well at lower latency is the most important part of it. Note too that the Nexus S's early period size of 448 is less than half of 24 x 44 = 1056, the period size in the Galaxy Nexus's current low-latency mode. The Galaxy Nexus's specs should be 2-3 times better, yet the Nexus S handled half that buffering fine until the buffer-increase patch? Perceived sound quality at a given latency is subjective, and as I already said, I'm not promising any sound quality improvements, though some may notice a difference in what they hear. I want lower-latency playback, and the device feels snappier with it.
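For a rough sense of that comparison, here is my own arithmetic (assuming both devices output at 44.1 kHz, which the commits don't confirm) for how long one period takes to drain:

#include <stdio.h>

/* Time for the output to drain one period, in milliseconds. */
static double period_ms(unsigned frames, unsigned sample_rate)
{
    return (double)frames * 1000.0 / sample_rate;
}

int main(void)
{
    /* 448 frames: Nexus S early period size.
       1056 frames: 24 x 44, Galaxy Nexus low-latency mode. */
    printf("Nexus S, 448 frames:       %.1f ms\n", period_ms(448, 44100));
    printf("Galaxy Nexus, 1056 frames: %.1f ms\n", period_ms(1056, 44100));
    return 0;
}

So each 1056-frame period takes about 23.9 ms to play out, versus roughly 10.2 ms for a 448-frame period. That is the sense in which the Nexus S was running at less than half the Galaxy Nexus's current low-latency buffering.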
Then no matter what size this "buffer" is, it always contains the same data for the player to fetch and play... so I don't see how "low latency" can improve "sound quality" or "sonic fidelity", unless the data in the buffer degrades over time, which sounds strange to me from a computing standpoint.
If anyone could come up with an explanation, I would really appreciate it.