Does this mean you actually have a qualification, or have you just happened to have done some technical work? For what it's worth, I have a Bachelor of Engineering (a mixture of electronics and software).
I've been doing digital design most of my life.
You attempt to confuse the real issue here. The FLAC codec, as has been shown across many, many tests, produces bit-perfect output. As many have tested, the bitstream output from the sound card is identical to the original source. If there were a bug, it would need to affect the actual output. Of course, what you are referring to on the bug tracker are things which are metadata-related, or which cause the software to fail completely; e.g. a buffer overflow is listed in that changelog. What you have failed to mention is that none of those changes affect the core encoding and decoding, which has been shown to have excellent integrity.
No different to using a zip file on a computer.
The real test is: what you put in is what you get out, and that holds true with FLAC. In fact, the encoder even tests this. It's actually how I found out I had a faulty memory stick in my desktop: the encoder was producing erroneous outputs and was telling me so!
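The round trip is easy to demonstrate with any lossless compressor. Here's a minimal sketch in Python, using the standard library's zlib as a stand-in for FLAC's actual encoder (the checksum comparison is in the same spirit as the MD5 of the raw audio that FLAC stores in its stream header):

```python
import hashlib
import os
import zlib

# 1 MB of random bytes stands in for raw PCM samples.
original = os.urandom(1_000_000)

# Compress losslessly, then decompress -- the analogue of encode/decode.
compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

# Verify integrity by checksum, the way the FLAC tools can: any corruption
# anywhere in the chain (including bad RAM) would show up as a mismatch.
assert hashlib.md5(original).digest() == hashlib.md5(restored).digest()
print("bit-perfect round trip:", original == restored)
```

If the memory stick flips even one bit of the buffer along the way, the checksum comparison fails, which is exactly how a faulty DIMM gets caught.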
So you have used a strawman argument to misrepresent the bug argument, and it doesn't hold.
Look at the "sync" bugs; those are more interesting. Yes, all the lossless codecs have excellent integrity, that's why they can be called lossless, but that doesn't mean they are immune to issues. I'm not stating one codec is better than another, nor am I stating that compressed vs. uncompressed is better, but do note the increased load on the BDP. The CPU is mostly idle with AIFF and WAV, but it is utilized with ALAC and FLAC. For most systems that's probably not a big deal, but on something optimized like the BDP, perhaps it plays a larger role. You also need to look at the entire software and hardware stack. The application that calls the codec to convert the file to PCM for playback could invoke it differently than the application that encodes the CD and performs its checks: the encoder could use the provided executable under the hood, while the media player could just link the library for a more memory-efficient approach. Perhaps we could even discuss all those switches and options... As for zip, tar, bzip2, etc., there are bugs in those as well; rare, but they are there.
Just because one transport is shown to be bit-perfect with WAV files, that doesn't make all other transports that use WAV bit-perfect. About all it shows is that other transports could also be bit-perfect with the WAV codec. Likewise, if FLAC sounds best on one transport, that doesn't mean it will sound better on another transport; it just means the possibility is there.
We shouldn't get hung up on the codec though; they all have damn good odds of being bit-perfect. The question is whether they are bit-perfect on a given transport. Again, one should look at the entire solution for the transport.
You can Google the quality of various AAC codecs and the issues there, but AAC is a lossy format and will never be bit-perfect, so it might be a little uninteresting. So you've referred to a lossy codec: completely unrelated to the topic here, with no bearing whatsoever. As for the clicks between tracks, or the hanging, you are referring to potential glitches in the BDP-1 implementation, nothing which changes the output; those are an outright failure. In particular, the pops between tracks have to do with an implementation of gapless playback, which has nothing to do with the codec. The codec is likely working 100% perfectly, but an unrelated glitch is causing an issue. Not to mention, given the way the codecs work, these aren't issues which question the integrity of the codec's output. You are confusing general usability and device bugs with the integrity of a codec. Different issues.
I will just restate this:
"You can Google the quality of various AAC codecs and the issues there, but AAC is a lossy format and will never be bit-perfect, so it might be a little uninteresting."
As I stated with calinet6, let's not try to talk past each other. I just want to make it clear that I'm not questioning the integrity of any of the lossless codecs.
The issues with AAC (yes, the lossy codec, not to be confused with ALAC) make for an interesting Google. There are three main codecs: Apple's, Nero's, and the open-source version; guess which one you don't want to use... The parallels here could be the same: perhaps ALAC, which has recently been open-sourced, might have the same issues. Many codecs were designed by reverse engineering, not from a standard. Are there multiple versions of the FLAC codec? I don't believe so, but I'm guessing they would behave differently at different compression levels. AIFF and WAV have various implementations, but it's rather hard to screw up the data if all you are doing is building a container.
The glitches and hangups could be entirely down to how MPD interacts with the codec and the sound card driver when there is a failure. Glitching has been common in several file and disc transports where the PCM stream is muted between tracks or at the end of a playlist. I've only noticed this with ALAC files on the BDP and not on any other device. Where do you think the issue lies? If MPD manages all codecs the same way, then you would assume the ALAC codec is buggy, but it might be how MPD controls the codec. As for the hang, I initially assumed it was the metadata, but since the hangups were in the middle of the song, at an identical spot every time, it's most likely an issue with the interaction between the codec and the application. I converted that ALAC file to AIFF and had no issues, so it's doubtful it was the file. Several files caused these issues, but once I found a resolution, I moved on.
What on earth are you talking about? What are these regressions? Does the zip format, used for data file compression, exhibit this behavior?
Sorry, I probably should have said "regression tests". When you design an ASIC (a.k.a. those tiny black chips), regressions become your life. Whether it's encryption, adders, multipliers, or other algorithms that have been tested with formal verification, or the serial buses that we've all come to love, they all go through months if not years of testing. You don't get many chances to re-fabricate an ASIC; a mistake can cost around a million. The software folks I've worked with perform the same testing for their applications and device drivers. If you have access to Linux/OSX/Cygwin, feel free to download gzip's source code and type "make" and then "make check"; it too runs regression tests. Most codecs offer the same. In my field you understand that bug rates are always higher with newer designs, and older designs get the boot when they don't make performance or power numbers; it's a fun balancing act.
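To make the idea concrete, here's a toy regression suite in the same spirit (my own illustration using Python's zlib as a stand-in for a codec; these are not gzip's actual test cases). The point is that you hammer the round trip with the nasty edge cases, not just the happy path:

```python
import os
import zlib

def round_trip(data: bytes) -> bytes:
    """Compress then decompress -- must always return the input unchanged."""
    return zlib.decompress(zlib.compress(data))

# Edge cases a regression suite hammers on: empty input, incompressible
# random data, and highly redundant (silence-like) data.
cases = [
    b"",                     # degenerate: nothing to compress
    os.urandom(65536),       # random data barely compresses at all
    b"\x00" * 1_000_000,     # digital silence, compresses enormously
]
for data in cases:
    assert round_trip(data) == data
print("all round-trip regressions passed")
```

A real suite like gzip's `make check` runs far more of these, across versions, so a regression in the core algorithm gets caught before release.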
Actually, no... Plenty of people have verified that the FLAC codec (amongst others) actually does what it says for the basic encoding and decoding functionality. Unless you have a hardware fault, it will work. Thanks to the error checking in the format, it will actually let you know if there is any loss of integrity! Don't muddy the waters with an academic argument. This has been practically tested and verified.
Again, the point I've been trying to make is that it's not just about the codec; it's the entire software and hardware stack, both upstream and downstream of the codec.
A zip file doesn't just randomly not work when you decompress it. These things are not as complex as you are making out.
Try using different OSes... I've had a few gzip files fail to decompress and had to go find the original OS and gzip version to decompress them. It's not as simple as some make it out to be... Also consider the resource limitations you have to watch out for when compressing and decompressing a file; those are outside the known-good algorithms. BTW, the issue was between Solaris and Linux machines: annoying, but easy enough to work around.
Mature? You use the word as if software grows like a child. It does not; it is a discrete process... By the nature of the way software is designed, especially something extremely mathematical such as encoding and decoding, it can be mathematically proven that the core algorithm is bug-free. Sure, there might be some bugs to do with tagging etc., but those do not affect the core decoding and encoding functionality. If the lossless formats aren't bit-perfect, they are found out very quickly.
I think most IT, software, and hardware folks will agree that software and hardware mature. Consider the alpha and beta releases of software. The version number for MPD would even make me believe it's still in beta, or that they are being rebellious by not releasing a 1.0 product. Rolling out a new product or OS build to your customers can be a handful when dealing with those initial bugs. Consider the age of zip and gzip compared to FLAC and ALAC: zip and gzip are both more mature than FLAC and ALAC, but they still get updates and still have some pretty basic bugs, e.g. 1020 characters in a filename can crash gzip. I'm not sure you can even have a filename that long on Windows, so zip is probably in the clear for several more years.
What features do most codecs and media players now have that can alter the sound? Just because the codec has been mathematically proven to correctly encode/decode doesn't mean there aren't other forces at play. (I feel very repetitive at this point....)
FLAC is no different to a zip file. By your theory, every so often a zip file should just randomly fail because there is a bug. I have never had a zip file compression or decompression fail (unless the archive was corrupted through other means).
NSEU is a great thing to Google. This is why workstations and servers use ECC memory and checksum their data: you can get a neutron-induced single-event upset (a random bit flip) every few hours on a plane. It's really not that black and white... Of course, we are talking about music codecs where the data is stored in memory for milliseconds, so it's clearly not a concern.
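This is also why checksums are so effective at catching such upsets: flip a single bit in a megabyte of data and the hash changes completely. A quick sketch (synthetic data, not an audio file):

```python
import hashlib

# ~1 MB of deterministic data stands in for a buffer of audio samples.
payload = bytes(range(256)) * 4096
good = hashlib.sha256(payload).hexdigest()

# Flip a single bit, as a neutron-induced single-event upset might.
corrupted = bytearray(payload)
corrupted[1234] ^= 0x01
bad = hashlib.sha256(bytes(corrupted)).hexdigest()

# One flipped bit out of ~8 million produces a completely different digest.
print("checksums match:", good == bad)
```

That avalanche behaviour is what lets ECC-plus-checksum systems detect corruption reliably, even when the corruption is a single bit.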
Because they want to. There are some Squeezebox users who do some ludicrous stuff and swear it gives better quality. Most of it is people 'tweaking' for the sake of it and wanting to hear something better because they did something. A purely psychoacoustic effect.
The truth is often far more boring than people like - and in cases such as the above, psychoacoustics is often the boring answer. The other, less boring answer is that the particular user has a hardware fault.
I agree that PEBKAC and snake oil are rampant in this industry. I'm always baffled why folks purchase expensive power cables (RLC), force their Mac Minis to boot in pure 64-bit mode (impacts cache?), or spend thousands of dollars on a 0.025-ohm resistor that they mount in series with their speaker cables (again, RLC). At first glance I've ignored the claims. I don't see how a pricey power cable could change the sound if plugged into the wall, but perhaps there's something there with a power conditioner that has ample storage caps? Why pure 64-bit vs. mixed mode on OSX really baffles me; benchmarks have shown that you can get bit-perfect output from the optical outputs on a Mac... I do understand how the 0.025-ohm resistor can impact the sound of speakers, just like using a longer cable.
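Worth noting just how small the series-resistor effect is. Treating it as a simple voltage divider into a nominal 8-ohm load (ignoring the speaker's frequency-dependent impedance, which is where any audible effect would actually come from), the level change works out to hundredths of a dB:

```python
import math

# Voltage-divider attenuation of a 0.025-ohm series resistor
# driving a nominal 8-ohm speaker load.
r_series = 0.025   # ohms, the boutique resistor
r_speaker = 8.0    # ohms, nominal speaker impedance

ratio = r_speaker / (r_speaker + r_series)
db = 20 * math.log10(ratio)
print(f"level change: {db:.4f} dB")   # about -0.027 dB
```

Since a real speaker's impedance swings with frequency (say 4 to 40 ohms), the attenuation varies slightly across the band, which is the same mechanism by which a long, thin cable can measurably (if barely) shape the response.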
As per my zip file comment, if the bugs you suggest above existed, they would have been found a long time ago; anything which affects the core functionality would be. Now you are confusing the issue by comparing different ripping drives. That is a real, but different, issue. All rips are not equal, but for any given rip, the FLAC output and WAV output are.
In my previous response, I stated that you have to look at the entire software and hardware stack...
A sound card doesn't even know what a FLAC or WAV file is (usually). All it gets is a PCM stream. The application will simply use the codec for whatever container, decode it, and send the output to the sound card. So the application actually handles the file format. The output from the application to the sound card (or to the driver, in most cases) will be identical. To suggest there is a difference is to suggest that the data must inherently be different somehow. At this stage of the process it won't even affect jitter, as the sound card or output device will buffer the stream anyway.
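For the simplest container this is easy to see. Using Python's standard-library wave module (with a few bytes of made-up sample data standing in for real audio), a WAV file is literally just a header wrapped around the untouched PCM, and "decoding" it returns the identical bytes the sound card would receive:

```python
import io
import wave

# Hypothetical 16-bit mono PCM samples -- what the DAC ultimately receives.
pcm = bytes([0x00, 0x10, 0xFF, 0x7F] * 1000)

# Wrap the PCM in a WAV container (header + the same untouched samples).
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(44100)    # CD sample rate
    w.writeframes(pcm)

# "Decoding" the WAV is just stripping the header back off.
buf.seek(0)
with wave.open(buf, "rb") as r:
    decoded = r.readframes(r.getnframes())

print("PCM identical after container round trip:", decoded == pcm)
```

A compressed container like FLAC adds real decode work in the middle, but the contract is the same: the PCM handed to the driver must be byte-identical to what went in.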
Yes, I've stated this in my first post in this thread, as have others: in defense of all these lossless codecs, they should all be bit-perfect.
I haven't really covered jitter much. However, the main influence on jitter will be the hardware used in the output stage, or the sound card. The data from the software decoding is buffered before it is output, making the codec used irrelevant. I also saw the mention that CPU usage may affect noise etc. In hardware devices such as the BDP-1, there is usually a chip doing hardware decoding of the codecs before outputting. Either way, if it is done in software, it hardly requires any processing power anyway. I'd be surprised if you could even measure any theoretical power increase.
Very few computers and sound cards have codecs built in for the decoding; these are all libraries that you can install. For some formats, you can select between various codecs. Most will support PCM, DSD, and a few now AAC and the default audio formats for H.264 (which might just be WAV, DSD, and AAC). From a shell (Cygwin for the Windows folks), log on to the BDP (ssh firstname.lastname@example.org, password is bryston) and type top; you can monitor both load and CPU utilization. Play a WAV, an AIFF, and a FLAC, and you can see the utilization vary; it even varies per song and with hi-res material. WAV seems to be the most efficient, followed by AIFF.

You can actually measure power consumption on most newer systems today, though I'm not sure about the AMD ALIX motherboards. I can check to see if the hardware is there, but it's normally a simple function of utilization. You can watch most desktop cores jump from 35 watts idle to 130 watts at 100% CPU utilization. The power supply works harder and also heats up... It's not unlike the claims that amps sound better when left on 24x7. We could probably discuss the switching regulators on the ALIX board; I assume the custom linear power supply provides 12 volts to the motherboard, and then it's up to the motherboard to provide the various other voltages required.
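The CPU-cost difference between handing over raw samples and decoding a compressed stream is easy to show in miniature. A sketch, again with zlib standing in for FLAC and synthetic bytes standing in for a real file (the absolute times are machine-dependent and only illustrative):

```python
import time
import zlib

# ~10 MB of synthetic "PCM" -- a stand-in, not a real audio file.
pcm = b"\x00\x01\x02\x03" * 2_500_000
compressed = zlib.compress(pcm, level=6)

t0 = time.perf_counter()
raw_copy = bytes(pcm)                 # the "WAV" path: hand over raw samples
t_raw = time.perf_counter() - t0

t0 = time.perf_counter()
decoded = zlib.decompress(compressed) # the "FLAC" path: decode first
t_decode = time.perf_counter() - t0

print(f"raw copy: {t_raw*1000:.2f} ms, decode: {t_decode*1000:.2f} ms")
```

On a desktop the difference is noise; the question for a purpose-built box like the BDP is whether that extra work shifts load, and therefore power draw, enough to matter, which is what watching top over ssh lets you eyeball.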
Of course, it defeats the fun when we let good electrical and software engineering get in the way of the need for people to 'feel an improvement'.
Some enjoy the talks; some don't. I've steered clear of the heated debate in the other threads. How we hear sounds is a little more up for debate than how reliable a transport is.
How about asking for firm scientific evidence that there is a difference? I have certainly never seen any... Has anyone ever done an ABX test and managed to verify the difference? I've seen this same argument on many forums, and the standard responses promoting differences always take a scattergun approach (essentially a FUD argument) of loosely related computer analogies (software bugs, software regressions, power usage, software/hardware integration, or combinations thereof) to justify the idea that there might be a possibility, even though no objective evidence supports the concept.
With the correct test gear this is rather simple to verify. ABX tests aren't a good way to test gear, but that's what the OP did. My only goal was to offer the OP possibilities for why he and his wife were hearing differences. BTW, there's nothing stopping reviewers from crafting a benchmark for various codecs and performing a bit-perfect test on transports. Several reviewers do this for video codecs.
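Such a benchmark reduces to comparing digital captures byte for byte. A sketch of the comparison step (the file names are hypothetical; in practice one file is the reference decode and the other is a capture of the transport's digital output):

```python
import hashlib

def pcm_fingerprint(path: str, chunk: int = 1 << 16) -> str:
    """Hash a raw PCM capture so two sources can be compared bit for bit."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Hypothetical files: a reference decode vs. a capture from the transport.
# Equal hashes mean the transport delivered a bit-perfect stream.
# print(pcm_fingerprint("reference.pcm") == pcm_fingerprint("capture.pcm"))
```

The hard part is the capture itself (you need gear that records the S/PDIF or USB stream without touching it); the comparison afterwards is trivial.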
In fact many of the arguments presented are entirely illogical as they would make computing as we know it impossible. We could never be certain of any behaviour with software. Yet somehow when it comes to audio, software and electronics somehow take on a far less predictable state and all these differences become possible.
Many of the lossless arguments are defeated simply by looking at how a rudimentary zip file works.
Yeah, and we all know zip files don't have bugs either... If your entire counterargument is to model a zip file, then again, look at the complete environment. Various filesystems, 32-bit vs. 64-bit OSes, network drives, etc. all increase the complexity of compression utilities even though the algorithm is fine.
In these last few posts, I believe I've stated two key things:
1) all the lossless codecs should be bit perfect....
2) there are other factors that could make one file format sound better than another. The fault could lie with the listener, but there could also be other factors at play.
Nothing FUD about that..