Need a little tech tutorial on SET 120 control amp heat sink size...


DecibleDude

  • Jr. Member
  • Posts: 71
Or MOSFET heat sink size in general. I may have asked this question in one form or another in the past, but apparently it didn't take. Back in the seventies I bought a Hafler 100 wpc power amp with MOSFET outputs, and its heat sinks were relatively massive. It wasn't the most attractive power amp I'd ever seen, but it sounded good. Anyway, not being an electronics engineer or electronics repair person, I took from that that MOSFETs get very hot and need large heat sinks. Now I've noticed that the SET 120 control amp, which also uses MOSFETs, has what I can only describe as rather skimpy heat sinks by comparison. They appear to be quite adequate, because when I put my hand over the cover where the heat sinks reside there is minimal heat coming from the vents. In other words, despite the diminutive size of the heat sinks, it doesn't seem to be producing much heat. There was of course 40 more wpc on the Hafler than the 120, but not enough to account for this difference. Anyway, could some technically inclined person, or perhaps someone from Van Alstine, explain to me what accounts for this difference, whether it's the design or the type of MOSFETs? What's the magic?

AVASupport

  • Jr. Member
  • Posts: 28
There are a surprising number of considerations that go into heatsink design. These include the acceptable maximum operating temperature of the devices you're trying to cool, how well the package of the device transfers heat to whatever is contacting it, how the device is coupled to the heatsink (a place where you can lose a lot of performance!), how many devices you're using, and the maximum load you are intending to support.
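To make those considerations concrete, here is a rough back-of-envelope sketch of how they combine. Heat flows from the transistor junction to the air through a chain of thermal resistances in series, much like current through resistors, and the "coupling to the heatsink" item above shows up as one of those resistances. All of the numbers below are hypothetical illustrations, not SET 120 or Hafler specs.

```python
# Junction-temperature estimate for one output device.
# All values are assumed for illustration only.

P_DISS = 15.0     # watts dissipated by the device (assumed)
T_AMBIENT = 25.0  # ambient temperature, deg C

# Thermal resistances in deg C per watt (assumed typical values):
R_JC = 1.0  # junction to case (from the device datasheet)
R_CS = 0.5  # case to sink (insulator pad / compound -- easy to lose performance here)
R_SA = 2.0  # sink to ambient (the heatsink itself)

# Heat flows through the resistances in series, so the rises add up.
t_junction = T_AMBIENT + P_DISS * (R_JC + R_CS + R_SA)
print(f"Estimated junction temperature: {t_junction:.1f} C")  # 77.5 C here
```

Note that halving the heatsink's sink-to-ambient resistance only helps the portion of the total it represents; a poor case-to-sink interface can waste a big heatsink entirely.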

When it comes to the heatsink itself, seemingly small things can make a significant difference, such as the fin design and surface treatment.

Then there's the weird reality of production costs. Extrusion dies are as a rule quite expensive. Depending on your product's expected production quantity and the cost of materials (aluminum in this case), it might make more sense to use an over-specified heatsink in a low-power design if it means you can use the fruits of the same die in a larger one.

For the same output power, lateral power MOSFETs do need slightly higher power supply rails than BJT output stages, which means they will dissipate more and need more cooling. Exactly how much more depends on implementation. For amplifiers using source follower outputs, it is especially impactful whether the driver stage is powered from higher rails than the output stage (best for thermals) or, as was the case with many early MOSFET amps, from the same or lower rails (worst for thermals).
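A simplified class-B dissipation formula shows why the rail voltage matters so much for heat. For a sine wave, the supply delivers power proportional to the rail voltage, and whatever isn't delivered to the load is burned in the output devices. The rail and load values below are made up for illustration; a real amplifier also adds bias-current losses on top of this.

```python
import math

# Idealized class-B output-stage dissipation per channel for a sine wave.
# Hypothetical values; real amps dissipate more (bias, driver losses, etc.).

def class_b_dissipation(v_rail, p_out, r_load):
    """Heat in the output devices when delivering p_out watts into r_load."""
    v_pk = math.sqrt(2.0 * p_out * r_load)  # peak output voltage required
    assert v_pk <= v_rail, "rails too low for this output power"
    # Average power drawn from the supplies for a half-sine per rail:
    supply_power = 2.0 * v_rail * v_pk / (math.pi * r_load)
    return supply_power - p_out             # what the outputs burn off as heat

# Same 60 W into 8 ohms, two hypothetical rail choices:
print(class_b_dissipation(35.0, 60.0, 8.0))  # lower rails -> less heat
print(class_b_dissipation(40.0, 60.0, 8.0))  # higher rails -> more heat
```

The few extra volts of headroom a MOSFET stage needs go straight into the first term, which is why "slightly higher rails" translates into noticeably more cooling.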

So, given all the variables, it's quite difficult to speculate why Hafler (or anyone else) used the heatsinks they did. But I hope this gives you a better understanding of the complexities involved in heatsink design and why they can vary so much.

DecibleDude

  • Jr. Member
  • Posts: 71
Thank you AVA. So from what you're saying, am I right to assume that the 120's driver stage is likely powered from higher rails than the output stage, whereas the Hafler's was probably powered from lower rails? Why would they do that? I noticed back in the day that despite the very large heat sinks, that amp could get quite toasty, to put it mildly.

AVASupport

  • Jr. Member
  • Posts: 28
Without a schematic and/or careful inspection, it wouldn't be appropriate for me to speculate about what Hafler did with that design or why your amp ran so hot. My main motivation was to provide some insight into the range of variables that go into heatsink sizing.

A variable I failed to mention in my earlier post is the setup of the output stage itself, the bias in particular. If the bias is high, either by design or because it has drifted or was misconfigured, the output stage will produce a lot of additional heat, even at idle. Opinions vary as to how hot a lateral power MOSFET output stage should be biased. AVA has typically used the point where the MOSFET's temperature coefficient transitions from positive to negative (i.e., biased as close to the zero-temperature-coefficient point as possible). This results in incredible reliability and performance without generating excessive heat.
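The idle heat from bias alone is easy to estimate: the bias current flows from the positive rail through the output pair to the negative rail, so each pair drops the full rail-to-rail voltage even with no signal. The rail voltage and bias currents below are hypothetical examples, not specs for any particular amp.

```python
# Idle dissipation from output-stage bias alone (no signal).
# All values are assumed for illustration.

V_RAIL = 40.0  # each supply rail, volts (assumed)

def idle_dissipation(bias_amps, n_pairs=1):
    # Each complementary pair conducts bias_amps from +rail to -rail,
    # dropping 2 * V_RAIL across the two devices at idle.
    return 2.0 * V_RAIL * bias_amps * n_pairs

print(idle_dissipation(0.100))  # modest bias: 8 W per channel, always on
print(idle_dissipation(0.250))  # hotter bias: 20 W per channel, always on
```

Since this heat is produced continuously, a bias setting that has drifted upward can easily dominate the thermal picture of an amp that mostly loafs along at a few watts of music.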

The main reason driver stages are often run from lower rails is that it facilitates isolation from the output stage. The varying power demands of an output stage will cause a typical power supply's voltage to modulate. If this modulation is passed on to the front end and driver stages, additional distortion can result. A simple but lossy RC network on the rails is an effective and inexpensive way to provide this isolation. It's not hard to design solutions that provide higher isolated rails to the driver (best for making the most power from the output stage), but it results in greater complexity and cost. So, you need to balance that complexity and cost against the prospect of not squeezing the last bit of power out of the output stage.
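The trade-off of that RC approach can be put in numbers: the series resistor costs a fixed DC voltage drop (which is why the driver ends up on *lower* rails), while the RC product sets the corner frequency above which supply modulation is filtered out. The component values below are made up to illustrate the idea, not taken from any real design.

```python
import math

# RC decoupling of a driver-stage rail from the output-stage rail.
# Hypothetical component values for illustration only.

V_RAIL = 40.0     # main rail, volts (assumed)
I_DRIVER = 0.020  # driver-stage current draw, amps (assumed)
R = 100.0         # series resistor, ohms
C = 220e-6        # decoupling capacitor, farads

v_driver = V_RAIL - I_DRIVER * R          # the DC cost: the driver rail sags
f_corner = 1.0 / (2.0 * math.pi * R * C)  # modulation above this is attenuated

print(f"driver rail: {v_driver:.1f} V (lost {I_DRIVER * R:.1f} V)")
print(f"isolation corner: {f_corner:.1f} Hz")
```

With these example values the driver loses a couple of volts but gets useful isolation across most of the audio band, which is exactly the "simple but lossy" bargain described above.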

In the SET 120, the driver circuits run off the same active regulated supply as the output stage. From a distortion perspective, the tight regulation eliminates the need to isolate the supplies. From a thermal perspective, it's the middle ground.

DecibleDude

  • Jr. Member
  • Posts: 71
Thanks for such a thorough explanation, AVA. Much appreciated.