While reading an older Stereophile review of the Bryston 4B NRB, I noticed that the maximum continuous output rating differs between one channel driven and two channels driven.
Normally this would seem logical, as it should be easier for an amplifier's power supply to drive a single channel than two. But if the transformer and power supply are sized so that sufficient power is always available for the output devices to operate at their maximum continuous levels, then shouldn't there be minimal difference between the 1ch and 2ch maximum continuous output ratings?
In other words, if a given transformer/power supply can deliver all the power the output devices can use, then driving both channels to maximum should simply yield exactly twice the single-channel output. Shouldn't an optimal design ensure that the output devices are never starved of supply power, right up to their maximum output tolerances?
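To illustrate the question numerically, here is a toy model of a shared supply whose rails sag under load. All values (rail voltage, source resistance, load) are hypothetical round numbers for illustration, not Bryston's actual specifications:

```python
# Toy model: why the 2-channel rating can fall short of 2x the 1-channel rating.
# All numbers below are hypothetical, not the Bryston 4B NRB's actual specs.

RAIL_V_UNLOADED = 60.0   # unloaded supply rail voltage (V), assumed
SUPPLY_R = 0.5           # effective supply source resistance (ohms), assumed
LOAD_R = 8.0             # speaker load per channel (ohms)

def max_continuous_power(channels_driven: int) -> float:
    """Per-channel power at clipping, with the shared rails sagging
    in proportion to the total current drawn from the supply."""
    # n channels at full output draw n times the current, so the sag
    # across SUPPLY_R is n times larger. Solving
    #   V_rail = RAIL_V_UNLOADED - SUPPLY_R * n * (V_rail / LOAD_R)
    # for the loaded rail voltage:
    v_rail = RAIL_V_UNLOADED / (1 + SUPPLY_R * channels_driven / LOAD_R)
    # Continuous sine-wave power into LOAD_R at that rail voltage:
    return v_rail ** 2 / (2 * LOAD_R)

p1 = max_continuous_power(1)   # one channel driven
p2 = max_continuous_power(2)   # both channels driven
print(f"1ch: {p1:.0f} W/ch, 2ch: {p2:.0f} W/ch")
```

In this sketch the per-channel power drops when both channels are driven, purely because the shared supply's effective source resistance causes the rails to sag more under double the current draw. With SUPPLY_R set to zero (a perfectly stiff supply), the two ratings come out identical, which is exactly the idealized case the question describes.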