I wonder if this is "our" Bob Smith being referred to on the Geddes website:
Ha! You guys need to do your homework. Well...I guess you'd have to be a speaker geek like me to know this stuff. Anyway - I'm honored to think that you'd even consider me to be the author of the phase-plug technology referred to by Earl, but...my itty-bitty brain ain't that smart.

I read Bob's paper years ago and based our waveguides on some of the concepts presented in it - namely, that the very need for phase-plugs is best avoided. If one wants to design a true, full-bandwidth horn, one hasn't much choice but to use a phase-plug. Without it you'll get a huge suck-out somewhere above about 8kHz.
The phase-plug brings together all the different parts of the wavefront emanating from the diaphragm such that they arrive at the throat of the horn in-phase. At the high compression ratios experienced by the wavefront in true horns, even very small path length differences (that result from the wave being launched from a curved diaphragm) turn into large phase differences if some type of device isn't used to keep the effective paths equal.
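Just to put a rough number on that, here's a quick back-of-envelope check. The 5mm path-length spread and the frequencies are assumed, illustrative figures - not from any particular driver:

```python
# Phase error from a path-length difference: 360 degrees * (path diff / wavelength).
# Assumes c = 343 m/s; the 5 mm spread is purely illustrative.
SPEED_OF_SOUND = 343.0  # m/s

def phase_error_deg(path_diff_m, freq_hz):
    wavelength = SPEED_OF_SOUND / freq_hz
    return 360.0 * path_diff_m / wavelength

path_diff = 0.005  # 5 mm of extra path from the curvature of the diaphragm (assumed)
for f in (1_000, 4_000, 8_000, 16_000):
    print(f"{f:>6} Hz: {phase_error_deg(path_diff, f):5.1f} degrees out of phase")
# Roughly 5 degrees at 1 kHz, but ~42 degrees by 8 kHz and ~84 degrees by 16 kHz -
# which is why the cancellation shows up at the top of the band.
```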
The downside is, as Earl points out, that the phase-plug (of the other "Bob Smith's" design) introduces turbulence - hence, distortion. And the distortion gets worse as the frequency increases. Now supposedly Earl has a phase-plug design that eliminates the turbulence. If so, that would be a good thing for horns.
But Earl's not telling you that distortion also results from the high compression ratios, apart from the phase-plug turbulence issue. And it isn't just a problem with high frequency horns either.
Heck, I remember a nomogram in an old speaker design book from years gone by that was used to calculate the distortion generated by a given bass horn design. I forget exactly how it worked, but if I remember correctly you pick an upper frequency cut-off for the horn, a low frequency cut-off, a taper (exponential, etc.) and a power level; then, using a ruler, you draw a line through the points and it tells you what the distortion is at any given frequency between the cut-off points. ...Or something like that. The main thing one noticed is that for a given design, the distortion always gets worse as the operating frequency is increased.
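I can't reproduce the nomogram, but the trend it encoded is easy to sketch: for an exponential horn, second-harmonic distortion grows roughly in proportion to (f/fc) and to the square root of the intensity at the throat. The constant and the horn numbers below are assumed, round figures for illustration only - not from any text or any real design:

```python
# Rough second-harmonic trend for an exponential horn:
#   D2%  ~  K * (f / fc) * sqrt(I_throat)
# where I_throat is acoustic intensity at the throat (W/cm^2).
# K = 1 is an assumed constant for illustration; the exact value depends on
# which derivation you follow.
import math

K = 1.0  # illustrative constant, not from a specific reference

def second_harmonic_pct(freq_hz, fc_hz, acoustic_watts, throat_area_cm2):
    intensity = acoustic_watts / throat_area_cm2  # W/cm^2 at the throat
    return K * (freq_hz / fc_hz) * math.sqrt(intensity)

# Hypothetical midrange horn: 500 Hz cutoff, 1 acoustic watt, 5 cm^2 throat
for f in (500, 1_000, 2_000, 4_000, 8_000):
    print(f"{f:>5} Hz: ~{second_harmonic_pct(f, 500, 1.0, 5.0):.1f}% 2nd harmonic")
# Doubling the power (or halving the throat area, i.e. raising the compression
# ratio) multiplies the estimate by sqrt(2); doubling the frequency doubles it -
# exactly the trend the nomogram showed.
```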
Anyway, the point is that distortion increases as compression ratios increase and/or power levels increase and/or frequency is increased - assuming the acoustic "gain" is constant (i.e., "broad-band" as in a true horn). That's why I don't design broad-band horns. The whole phase-plug issue is very touchy and small variations in geometry make a huge difference. Designing one's own phase-plug is no small issue and you'd better have precision manufacturing equipment to make it. Too much headache for this guy.
And what's the point? Every engineer knows that the actual power content of music drops off fairly quickly as frequency is increased. Above about 5kHz there's very little energy (from a dissipated power standpoint), so why engineer in all that gain? You sure don't need it from a tweeter power handling standpoint. So unless you NEED a tweeter with a 100dB+ @ 1W/1m sensitivity to begin with, why go through all the heroics and increase HF distortion on top of it? Our waveguides have high gain at the low frequency end of the tweeter's range - where it's needed for increased power handling. The gain then gradually decreases as you go up the spectrum until you get to about 5kHz - where there's no added gain and the tweeter's SPL output level is the same as if it was mounted on a flat baffle. So right where distortion would start to really become noticeable, we have zero increased compression and therefore the same distortion performance as the tweeter mounted in a conventional box. Sort of the "best of both worlds" - if you ask me.
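If it helps to picture it, here's a toy model of that kind of gain profile. The corner frequencies and the 9dB figure are made-up illustration values, not our waveguide's measured response:

```python
# Toy gain profile: full waveguide gain at the bottom of the tweeter's band,
# tapering smoothly to 0 dB (flat-baffle equivalent) by ~5 kHz.
# All numbers here are assumed for illustration only.
import math

def waveguide_gain_db(freq_hz, f_low=1_000.0, f_flat=5_000.0, max_gain_db=9.0):
    """Assumed taper: full gain at f_low, zero added gain at f_flat and above."""
    if freq_hz <= f_low:
        return max_gain_db
    if freq_hz >= f_flat:
        return 0.0
    # log-frequency interpolation between the two corner frequencies
    frac = math.log(freq_hz / f_low) / math.log(f_flat / f_low)
    return max_gain_db * (1.0 - frac)

for f in (1_000, 2_000, 3_000, 5_000, 10_000):
    print(f"{f:>6} Hz: +{waveguide_gain_db(f):.1f} dB over flat baffle")
# Gain (and compression) is highest where the tweeter needs the help with
# power handling, and gone by the region where distortion would be audible.
```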

Uh...I'm not the Bob Smith Geddes was referring to - sorry about that.

-Bob "the other" Smith