
From the Forums: Subwoofer Measurements and Physics

A black and white article. No photos. No pomp. And even less circumstance.


We are able to see the effects of marketing through the forums and their contributors. The resulting satisfaction, frustration, or even suspicion tells us a lot about how well we do our jobs and which products have the highest odds of missing design targets.





Original Forum Post

"Hi Matt,


I have mentioned this specific issue in several posts, although I try not to belabor it. I like SVS and I don't want to throw them under the bus. I have great confidence in the third-party testing that was done in earlier years by Josh Ricci, and in the in-house measurements of a number of the ID subwoofer makers. That list definitely includes Jeff Permanian, Tom Vodhanel, Nathan of Funk Audio, Cody of Harbottle, Mark Seaton, and David Gage of Deep Sea Audio before he went under. (I may be inadvertently leaving someone off of that list.)


Beyond these individuals, with their serious expertise in subwoofer design, and in proper and credible measurement protocols, I'm afraid that my trust in CEA-2010 measurements is somewhat limited. And, that's even while setting aside the question of whether CEA-2010 measurements really tell us everything we might need or want to know. Frankly, in many cases, I would far rather rely on the specific attributes of the subs themselves, in order to extrapolate their overall capabilities, as opposed to parsing specific SPL at specific frequencies.


Regards,

Mike"


Reply from Cody Hiebert at Harbottle Audio:

"You're correct to monitor your trust in CEA-2010. The fact is that it tests only one frequency at a time. When is content ever just one frequency at a time? If that is the definitive measure of a speaker's capacity, why isn't the whole world that monotone as well? Burst shows the displacement or power limit at any given point; it does not show the system limit. It also doesn't show the thermal limit, which is what a sine sweep does... and a sine sweep does not show burst capabilities. Then we get into pink noise, which is actually pretty good, but the one that is better for subwoofers is M-Noise. You need to run the AES75 protocol (which is free, BTW) or you will blow things to kingdom come. The biggest caveat with noise is that it will not show huge numbers, so the dB race is off and bragging rights are over, and in my opinion they don't have the signal down pat yet, but it is safer to run. Noise signals have crest factors which need to be carefully calculated, and you also need to know the room's contribution to the fundamental output level. That is slightly more complicated than an in-room response, because knowing what is responsible for what percentage of the resulting graph will inform how much and what kind of EQ goes where.



The major issue with all testing is that it will either not show you what the ego wants to see, or it will show you information that needs to be interpreted by an engineer, or both. For example: temporal distortion. It is only partially seen in THD measurements, and not the way most people think. THD is amplitude; temporal distortion is timing: group delay, phase shift, ringing/overhang, stored energy in the enclosure, port delay (in bass reflex systems), transient smearing. Then we have a special kind of temporal distortion that happens in the cone, called phase distortion (not seen in THD or IMD), and this distortion has two types: A) its own special brand of linear phase shift, and B) non-linear asymmetries in the driver's behavior. These nuances are actual system limits that almost no one is measuring or accounting for. How are they to be interpreted by people who don't know what they are, when the industry is only now targeting compression as a bad thing? Well, in all fairness, AES2-2012 has a compression component, but it's secondary: it can be measured during or after power testing, it does not determine the official AES power rating, and transients/group delay/phase/timing aren't addressed at all. I certainly have not seen compression openly advertised. And on top of all of this, you can see that CEA-2010 includes the square root of jack of the aforementioned.


In all of this, you can clearly see that there are definite limitations to all testing protocols. The one thing they have in common is the goal of providing a baseline. But they cannot include everything, because that is actually impossible. The best thing to use as a consumer is the first 1-2 dB of compression, scaled from 100 Hz down to 10 Hz. Then, if you can't hear anything, push it more until you hear the first sign of bad sound; that will be your own personal tolerance for compression, and no one else's. Compression is a system-level deviation from linear scaling: 1) it is easy to detect if you use a proper mic; 2) in the upper bass you can hear it "not sound quite right"; 3) it happens before other audible things go crazy. And if you are getting audibly weird sounds at 1 to 2 dB of compression, then it is NOT your driver (unless it's a trash driver with insane non-linearities) and most likely NOT your amp; congratulations, you just pinpointed the enclosure. It is such an easy way to determine system linearity, but you need a 1 dB accurate mic.



You are right not to trust the industry standards. No matter how hard the big manufacturers try to defend CEA-2010, it is total rubbish and ignores system behaviors beyond the burst."
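On the crest-factor point in the reply above: a sine burst peaks only about 3 dB above its RMS level, while a Gaussian noise signal (used here as a rough stand-in for pink noise or M-Noise, which are spectrally shaped differently) peaks 12 dB or more above it. That is why noise-based tests produce smaller headline numbers for the same thermal load. A minimal sketch, not any standard's reference code:

```python
import math
import random

def crest_factor_db(samples):
    """Crest factor: peak level over RMS level, in dB."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A pure sine tone: crest factor is sqrt(2), i.e. ~3 dB.
sine = [math.sin(2 * math.pi * t / 100) for t in range(10_000)]

# Gaussian noise: roughly 12-13 dB crest factor over a long record,
# so for the same RMS heating it demands ~9-10 dB more peak headroom.
random.seed(0)
noise = [random.gauss(0, 1) for _ in range(100_000)]

print(f"sine:  {crest_factor_db(sine):.1f} dB")
print(f"noise: {crest_factor_db(noise):.1f} dB")
```

The practical consequence is exactly the one in the reply: an amplifier sized for a sine-burst number has nowhere near the peak headroom a high-crest-factor noise signal demands at the same average level.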
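The 1-2 dB compression check described in the reply can also be sketched numerically. The idea is to raise the drive level in fixed steps and log how far the measured output falls short of perfectly linear scaling; the drive levels and SPL figures below are invented purely for illustration:

```python
def compression_db(input_gains_db, output_spls_db):
    """Deviation from linear scaling, referenced to the lowest drive level.

    A perfectly linear system gains 1 dB of output per 1 dB of input;
    compression is how far each measured point falls short of that.
    """
    ref_in, ref_out = input_gains_db[0], output_spls_db[0]
    return [(g - ref_in) - (s - ref_out)
            for g, s in zip(input_gains_db, output_spls_db)]

# Hypothetical sweep: drive raised in 3 dB steps, output measured each
# time; the top of the sweep starts to compress (numbers are invented).
drive = [0, 3, 6, 9, 12, 15]
spl = [90.0, 93.0, 96.0, 98.9, 101.4, 102.8]

for d, c in zip(drive, compression_db(drive, spl)):
    flag = "  <-- past the 1-2 dB window" if c >= 2.0 else ""
    print(f"+{d:2d} dB drive: {c:.1f} dB compression{flag}")
```

As the reply notes, a mic accurate to within 1 dB is the minimum requirement for this kind of check to mean anything.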


Additional Remarks

Further to that, the fact is that measurements, to be truly neutral, can only happen in a space that is dedicated to the entire frequency bandwidth of the DUT (device under test). So if the DUT is supposed to go down to 10 Hz, you need a 10 Hz space: roughly a 40 meter working volume in every dimension, PLUS 10 meters on every side for absorption, for 60 meters (about 200 ft) in each dimension.

For a 5 Hz subwoofer, that would be 90 meters plus 20 meter wedges on each side, for a total of 130 meters, or roughly 425 ft, with 65 ft of foam on all sides.

For a 3 Hz subwoofer, you would need 150 meters plus 35 meter wedges on each side, for a grand total of 220 meters, or 721 ft, or 240 yards. That is more like an aircraft hangar filled with 115 ft foam mountains on all sides.
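The chamber sizes above are the author's rules of thumb, but the underlying scaling is just wavelength: with sound traveling at roughly 343 m/s, the wavelength is lambda = c / f, so every halving of frequency doubles the space the wave needs. A quick check:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def wavelength_m(freq_hz):
    """Acoustic wavelength: lambda = c / f."""
    return SPEED_OF_SOUND / freq_hz

for f in (20, 10, 5, 3):
    wl = wavelength_m(f)
    print(f"{f:>2} Hz: {wl:6.1f} m ({wl * 3.281:6.1f} ft)")
```

At 10 Hz the wavelength alone is over 34 m (about 113 ft), before any allowance for absorptive wedges, which is why the building dimensions grow so fast.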


A supplemental point is that any manufacturer claiming to measure their subs full band down to single-digit frequencies in anechoic conditions is highly suspect of being full of dookie, because to date I have not seen or heard of an infrasonic-capable anechoic chamber that is open to industry professionals. I was even in talks with the NRC in Ontario, and when we did the math, it was so far out of reach it was not worth pursuing, for the reasons above. In fact, the only one I can find is Paradigm Research, which claims 6 Hz; however, it is a private facility that uses their own proprietary measurement system, and if it can do 6 Hz, that is truly something special.


The very cold and very hard and very real truth is that it is next to impossible to measure a subwoofer anechoically below 20 Hz. So the vast majority of manufacturers have to choose one or a combination of the following:

  • Limit to 20 Hz and calculate the rest.

  • Ground-plane outdoor testing - not practical year round and highly subject to, well, the weather.

  • Large half-space simulation - more practical and widely used.

  • Near-field + mathematical reconstruction - more practical and widely used for all speakers.

  • Pressure chamber methods - not practical at all and I can't find any for audio equipment.

  • FEM/BEM modeling - entirely calculated and therefore lacking manufacturing variances in QC, i.e., can they build what they model? If they can't get it 100% right every time, how will they know what the issue actually is when the baseline is calculated?

  • Accelerometer-based mechanical modeling - not bad, but there are limitations.


What do all of these methods have in common? None of them focuses on driver linearity proven on the power that runs them, in the enclosure that was built for them. So how is one supposed to know the difference between good and bad subs? The answer is surprisingly simple: look for a stroke guarantee. Stroke is stroke; the diameter of the cone does not matter if the motor can move the mass it is strapped to. This brings us to two very important points: 1) Xmax and why it's stupid, and 2) physics is the final arbiter.


Xmax is Invalid...

...As a consumer metric of "good behavior," and thus as a marketing tool. There are manufacturers that claim a small subwoofer cannot produce meaningful infrasonic bass, which is nonsense, because the room size and stroke capacity will determine that all day long. Let's put it this way: the room needs to be designed to an STC (Sound Transmission Class) rating. 50 is code minimum, and for a dedicated audio room 60/65+ is preferred. What this means is that the construction method absolutely matters. So if the room is the correct size and constructed for sound reinforcement, the volume displaced by the subwoofer fits the needed SPL goals, and the subwoofer can produce that displacement dependably, then there are zero issues. In fact, most of our clients oversize their systems by about 50%, which means they could take a full step down in diameter and still get the same results with headroom to spare.
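The claim that stroke capacity, i.e. volume displacement, sets the achievable output can be illustrated with the textbook half-space radiation formula for a small piston. This is free-field physics only; a real room adds gain below its modal region, so in-room numbers run higher. The driver figures below are invented for illustration:

```python
import math

RHO = 1.18      # air density, kg/m^3
P_REF = 20e-6   # reference pressure, 20 micropascals

def spl_from_displacement(sd_m2, xpeak_m, freq_hz, dist_m=1.0):
    """Half-space SPL at dist_m from a small piston of area sd_m2
    moving with peak excursion xpeak_m at freq_hz."""
    vd = sd_m2 * xpeak_m                                  # peak volume displacement
    w = 2.0 * math.pi * freq_hz
    p_peak = RHO * w * w * vd / (2.0 * math.pi * dist_m)  # 2*pi*r: half space
    return 20.0 * math.log10((p_peak / math.sqrt(2)) / P_REF)

# Roughly 12"-class cone (Sd ~ 0.048 m^2) with 20 mm of one-way stroke:
for f in (20, 10):
    print(f"{f:>2} Hz: {spl_from_displacement(0.048, 0.020, f):.1f} dB SPL at 1 m")
```

Note that halving the frequency at fixed displacement costs about 12 dB, which is exactly why infrasonic output is bought almost entirely with stroke (and room gain), not cone diameter per se.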


So why all the hate for Xmax? 1) The magnetic gap doesn't move; it's static. 2) The cone and coil do. A 10 Hz wave is over 100 ft long, so no matter how little the driver moves to produce it, the coil still has to dwell on either side of the sine for 50 milliseconds; that's 100 milliseconds total per cycle! It's not the amount of power applied, it's the time spent holding that frequency. From an amplifier standpoint, 100 ms of dwell time starts to look a lot like DC power, not AC, and to this point it is totally appropriate to ask, "how well is the coil handling this load?"
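The 50 ms figure is simply the half-period of the signal; the dwell time per side of the sine grows in direct proportion as frequency drops:

```python
def half_cycle_ms(freq_hz):
    """Time spent on each side of the sine: half the period, in ms."""
    return 1000.0 / freq_hz / 2.0

for f in (40, 20, 10, 5):
    print(f"{f:>2} Hz: {half_cycle_ms(f):6.1f} ms per half-cycle, "
          f"{2 * half_cycle_ms(f):6.1f} ms per full cycle")
```

At 5 Hz the coil holds each side for a full 100 ms, which is why the amplifier's power supply hold-up, not just its burst rating, comes into play.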


So why does the industry measure the resulting movement from the magnetic gap and not the behavior of the voice coil? Because voice coil behavior under movement/load is hard and expensive to control, while measuring a BL curve and making it look pretty is cheaper and easier: just make a coil that makes the BL curve look good. Simple stuff. Stroke dependability in real-world playback does not come from any Xmax test; it comes from managing compression, and for that you need to be able to guarantee stroke. That is the actual physics-based truth.


All Xmax does is prove that the coil can move X distance and produce Y level of THD, which is pointless as per my reply to Mike quoted above, and worse as a marketing tool. Why? Because it leads you to believe that the force applied via the amp is somehow associated with the quality of the sound. What matters is how well the coil moves in reference to its stroke position, regardless of how complex the signal is. This is driver behavior under complex stress, not just a stress.


Let's look at it this way: you can have a 100 ton car blast around Laguna Seca, and you can have a 1/2 ton car do the same. What matters to you as the one driving? The size and weight of the car? Or the car's ability to handle and cross the finish line first? If you want tight bass with no overhang and deadly transients, get the purpose-built "car".


The point? Build a room that can handle the bass you want. Then get a subwoofer that guarantees the stroke to get there. This combination is your reassurance, your protection, your safeguard.



Physics is the Final Arbiter

There are manufacturers that also claim that physics doesn't negotiate box size, followed by the supposition that driver size, port area, and excursion set the limits. This is 100% wrong, and if you see this in their marketing, run, because they do not know what physics is. What they are invoking is an over-simplified and decontextualized marketing tool that positions itself as the limits of physics. Convenient, wouldn't you say? Some drivers are ideal for infinite baffle alignments, some for sealed, some for ported, some for bandpass (looking at you, car audio). If it were down to stroke, cone size, and port area, there would be no such thing as an anemic subwoofer, and this simpleton's approach is simply sad.


The physics of driver science, its linearity and its T/S parameters, absolutely sets the box size. The result is called Qtc (the system Q) in sealed subs, port tuning (not area) in ported subs*, and radiator tuning in passive radiator subs. Physics even sets the box depth. This is well-established physics that dates back to Hofmann (1950s), Toole (1966-current), and Thiele and Small (1970s). Yes, Dr. Toole established that the subwoofer enclosure needs to be "inert," which raises the question: inert in what ways exactly? The next paragraph will tell you.
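The sealed-box relationship referenced here is standard Thiele/Small algebra: the compliance ratio Vas/Vb sets the system Q (Qtc) and the system resonance. A sketch with an invented driver:

```python
def sealed_box_volume_l(vas_l, qts, qtc_target):
    """Net internal volume (litres) for a target sealed-system Q,
    from Qtc = Qts * sqrt(1 + Vas/Vb)."""
    return vas_l / ((qtc_target / qts) ** 2 - 1.0)

def sealed_fc_hz(fs_hz, qts, qtc):
    """System resonance rises in the same ratio as the Q: fc/fs = Qtc/Qts."""
    return fs_hz * qtc / qts

# Invented driver: fs = 20 Hz, Qts = 0.40, Vas = 100 L.
vb = sealed_box_volume_l(100.0, 0.40, 0.707)
fc = sealed_fc_hz(20.0, 0.40, 0.707)
print(f"Qtc = 0.707 needs Vb = {vb:.1f} L, giving fc = {fc:.1f} Hz")
```

This is the sense in which the driver's parameters, not marketing, set the box: pick a target Qtc and the volume falls out of the math.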


Did you know there are specific physics calculations used to determine at what frequency a subwoofer will start to distort temporally based on the depth of the enclosure? There are, and we use them all the time. Friendly tip: if you want 10 Hz or lower, a 12 inch deep box will sound horrific when you actually push and use the stroke, and this is a condition where a "bigger box" (lower Qtc) will not cure the disease. This is also why you will never see a 12 inch subwoofer in our lineup, ever. The physics of our universe does not support clean sound at that box depth, no matter how low the Qtc is. And if it does sound good, it is not being pushed. In other words, there are displacement limits (current limiters) in place that are preventing stroke from happening. This will neuter the infra-bass dynamic range. Do you want that?
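The depth-based calculation mentioned here is not published in the article, so it is not reproduced; but the general shape of temporal distortion is easy to show with the textbook model of a sealed subwoofer as a second-order high-pass filter, whose group delay balloons as the signal approaches system resonance:

```python
import cmath
import math

def highpass_phase(f, f0, q):
    """Phase (radians) of a 2nd-order high-pass: the textbook model
    of a sealed subwoofer with resonance f0 and system Q."""
    s = 1j * 2 * math.pi * f
    w0 = 2 * math.pi * f0
    return cmath.phase(s * s / (s * s + s * w0 / q + w0 * w0))

def group_delay_ms(f, f0, q, df=0.01):
    """Group delay = -d(phase)/d(omega), via a finite difference."""
    dphi = highpass_phase(f + df, f0, q) - highpass_phase(f - df, f0, q)
    return -dphi / (2 * math.pi * 2 * df) * 1000.0

# A 20 Hz, Qtc = 0.707 sealed alignment: delay piles up near resonance.
for f in (80, 40, 20):
    print(f"{f:>2} Hz: {group_delay_ms(f, 20, 0.707):5.1f} ms group delay")
```

The delay is small an octave or two above resonance and grows rapidly approaching it, which is the timing-domain behavior that THD numbers never reveal.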


So while Dr. Toole had the "inert" aspect right, we go further and say "the subwoofer system should be inert under complex loading at maximum power applied."


Physics determines everything, from how a product is measured (methods and procedures), to how many milliseconds the amp's power supply needs to hold on for, to the displacement capacity of a driver under complex loads, and even the depth of a subwoofer enclosure. In your room, physics determines how much bass you get and how smooth it is, based on construction technique/quality and seating positions. I think it is pretty clear that physics determines... well... everything.



A Conclusion - No Leaping Required

The truth is that physics will always win in application, determine outcomes, and thwart marketing and supposition.


Measurements are simply a tool to determine whether the unit performs as the engineer intended, not as the marketing department intended. By their own nature, and the limitations of measuring equipment, they are incomplete, and because of this they are widely misunderstood. It will be a glorious day when we can all agree that detailed, nuanced system behavior is for the engineers, the "need to know" stuff is for the room designers, and the marketing should be left to the reality of the end result: does it hit SPL A at frequency B, at compression C and distortion D?


This narrative is not simple, and it's not supposed to be. Physics is hard when things get "invisible," and that is OK under two conditions: 1) you are willing to submit to the limits of physics, and 2) you trust the scientist.


If you as a consumer are willing to submit to the laws of physics, and agree that measurements tell you only a paragraph out of the novel epic, you will be so much farther ahead, simply by knowing that there are limits to how far and how hard a system, a room, and an entire design will go, and that if you want more, you will have to acquire more talent, by learning or by hiring the right people for the job.


As for the elephant in the room, well, I am here if you want to start building trust in this particular scientist.




Footnotes

*The port aperture sets the port velocity (aka particle velocity), the length sets the friction of the air, and both affect the tuning frequency. The aperture alone does not perform all functions.
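The footnote's point, that aperture and length together set tuning, is the Helmholtz resonator formula. A sketch using one common end-correction approximation (real corrections vary with port flare and mounting; the box and port dimensions here are invented):

```python
import math

C = 343.0  # speed of sound, m/s

def port_tuning_hz(box_vol_m3, port_radius_m, port_len_m, end_corr=1.7):
    """Helmholtz tuning: fb = (c / 2pi) * sqrt(S / (V * L_eff)), where
    L_eff adds an end correction of roughly end_corr * radius."""
    area = math.pi * port_radius_m ** 2
    l_eff = port_len_m + end_corr * port_radius_m
    return (C / (2.0 * math.pi)) * math.sqrt(area / (box_vol_m3 * l_eff))

# 100 L box, 10 cm diameter port, 30 cm physical length (invented):
print(f"tuning ~= {port_tuning_hz(0.100, 0.05, 0.30):.1f} Hz")
```

Many aperture/length pairs give the same tuning; the aperture choice then sets the port air velocity at a given output level, which is exactly the division of labor the footnote describes.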

 
 
 