…might as well join the Peace Corps…
– John “Bluto” Blutarsky1
I would hazard a guess that the majority of people bothering to read an article on electronics cooling work in the electronics industry. Moreover, it is probably a pretty good bet that a lot of them have actually taken a class on heat transfer at some point. Since most of us eventually graduated, it seems reasonable to believe that, at some point, most of us actually understood fundamental heat transfer analysis well enough to pass a class on the topic.
But thanks to the availability of powerful analysis software, a lot of people have had the luxury of forgetting, or at least ignoring, their hard-won thermal management education. Just fire up the computer, open the finite element modeling tool, and generate some CFD (colors for directors). And the analysis tools generally work so well that these people often get away with it and produce sufficiently accurate results. Most commercial tools have been validated well enough that they don't have too many bugs. As long as the user is entering reasonable inputs, the software will usually output reasonable results. Ignoring the two caveats in that last sentence is what can occasionally lead to heartache, gnashing of teeth, wearing of sackcloth, and, most importantly, panicked redesigns after an initial qual test failure.
These two caveats are not necessarily unrelated, but I will talk about them separately. Let’s start with the second caveat: ‘will usually’. While the vast majority of commercial software works as advertised, versions do sneak out that don’t have all the bugs worked out. Take some comfort if the version number of whatever you’re using is in the double digits; that improves the odds that someone else has already discovered most of the bugs that once existed. But even well-established and respected analysis tools can have undiscovered features, particularly in areas that only a small portion of the users exercise.
In a fairly well-known case a few years ago, the maker of a widely used finite element analysis tool had implemented an incorrect creep equation for analyzing solder joint fatigue. Eventually it was recognized that the equation had not been coded correctly, which meant that a number of investigations had to be repeated.

A more likely problem than the software getting things wrong is the user getting things wrong with unreasonable inputs. Sometimes it's a matter of the user doing something silly and the software not stepping up, like a responsible grown-up, to tell them so.
A few years ago I was in a design review presented by a very young engineer who proudly showed us his free convection box, which featured a lot of horizontal fins. I asked why the fins were oriented that way and he naively replied, 'It makes a difference?' Apparently not to the software, which let the user select a 'free convection' boundary condition and didn't bother to mention that running the plate fins perpendicular to gravity would have a less than desirable effect on their natural convection performance. Similarly, when things are too easy it doesn't take much to make a dumb error with units or materials; despite the similarity in their names, alumina and aluminum have very different properties if you choose the wrong option on a pick list (likewise silicon and silicone).
Even when the inputs are correct and the software does what it is supposed to, asking it to solve the wrong problem can produce some poor results, or at a minimum hurt your credibility. One approach that has made me pull out what little hair I have left is analyzing something that operates at extremely high altitude while assuming it has no radiation heat transfer. Granted, radiation is tricky because you need to know the temperatures of nearby equipment to account for radiation exchange between units. In most cases there will likely be a net heat loss by radiation, so ignoring it can provide some measure of conservatism while also making the analysis a lot easier. But when things are operating at a 20 km pressure altitude, there aren't many ways to remove heat other than radiation.
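To get a feel for how much radiation matters in that situation, a back-of-the-envelope energy balance is enough. The sketch below is only illustrative: the power, surface area, emissivity, and surroundings temperature are numbers I made up for the example, not values from any program I reviewed.

    # Rough check: if radiation were the only heat path, where would a
    # sealed box settle?  All numbers are illustrative assumptions.
    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

    def radiation_only_temp(q_watts, area_m2, emissivity, t_surround_k):
        """Surface temperature when Q = eps * sigma * A * (Ts^4 - Tsur^4)."""
        return (q_watts / (emissivity * SIGMA * area_m2) + t_surround_k**4) ** 0.25

    # Hypothetical 50 W box, 0.3 m^2 of painted surface, -50 C surroundings
    t_surface = radiation_only_temp(50.0, 0.3, 0.85, 223.0)
    print(f"Radiation-only surface temperature: {t_surface - 273.15:.0f} C")

If the detailed model predicts temperatures wildly different from a balance like this, at least one of the two deserves a closer look.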
In another design review of the previously mentioned horizontal-fin box, which happily had been modified by that point so that its fins pointed in the right direction, the design's performance at altitude was presented. For that analysis, radiation was ignored, and the review showed extremely precise temperatures (reported to 4 or 5 decimal places) that were so high that the system would have stopped working about 100 degrees earlier, when all of the components de-soldered themselves.
As I recall, the main point of the analysis had been to convince management that a fan was really needed for the design, which I guess is what it did. However, I thought this was accomplished at the cost of some of the analyst's credibility.
Because of these potential pitfalls, I am a big believer in doing some level of first-order analysis before, or in parallel with, running analysis tools. By first-order analysis, I mean you should actually dig out your undergraduate heat transfer book, although you probably don't need to go past the first couple of chapters. You can do a lot of sanity checks with nothing more than one-dimensional heat transfer equations for conduction and convection. These sanity checks won't be accurate enough to replace modeling, but they will give you a starting point and something to compare against more detailed analyses.
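As a minimal sketch of what I mean, the entire check can be a couple of series thermal resistances. The geometry, power, and property values below are made-up illustrative assumptions, and writing it as a few lines of Python is just one convenient way to keep the arithmetic honest.

    # 1-D sanity check: series conduction + convection resistances.
    # All geometry, power, and property values are illustrative assumptions.
    def conduction_resistance(length_m, k_w_mk, area_m2):
        """1-D conduction resistance, R = L / (k * A), in K/W."""
        return length_m / (k_w_mk * area_m2)

    def convection_resistance(h_w_m2k, area_m2):
        """Convection film resistance, R = 1 / (h * A), in K/W."""
        return 1.0 / (h_w_m2k * area_m2)

    q = 10.0                                              # watts from a hypothetical component
    r_cond = conduction_resistance(0.003, 200.0, 0.0004)  # 3 mm aluminum path, 4 cm^2
    r_conv = convection_resistance(10.0, 0.02)            # free convection (~10 W/m^2K), 200 cm^2
    print(f"Estimated rise above ambient: {q * (r_cond + r_conv):.0f} C")

If the detailed model lands within a factor of two or so of a number like this, you at least know it is solving a problem in the same universe.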
The goal is to get into the ballpark of the right answer; just getting into the right zip code might be good enough. Even if the sanity check doesn't agree with the other analysis, if it is done correctly it should provide insight into the system, such that the physics of the problem can be used to explain why things don't match. For example, if an initial estimate under-predicts the temperature of a finned heat sink compared to a CFD analysis, the difference may be due to not including fin efficiency or the heating of the air as it passes through the fins. But if your first-order estimate instead over-predicts temperatures, try to figure out what you missed, or, more importantly, whether you might have missed something in the computer-based analysis.
My main point is that you probably spent at least some time learning a few things in college, so you might as well try to use them once in a while. You usually don't have to go into a lot of depth or drag out the really complex correlations, many of which are essentially buried in the software tools you are using. But having an independent way to double-check whether results have the right order of magnitude can save you some embarrassment and preserve your credibility. A few years ago I was at a status review meeting at which two excessively educated engineers (i.e., PhDs) showed results from their detailed CFD analyses. They showed temperatures that were many tens of degrees cooler than my spreadsheet analysis had predicted.
Their results were quite impressive looking and included some cool animations of the fluid flow patterns inside and outside the system. But a simple calculation using the surface area and power dissipation showed that, in order to maintain their predicted surface temperature, the convection coefficient had to be about 10-100 times what could reasonably be expected with air cooling. It turned out that the analysis was based on a newly released piece of software that, for some reason, did not play well with the symmetry boundary condition that had been used to simplify the analysis.
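That 'simple calculation' is really just rearranging Newton's law of cooling. The sketch below uses hypothetical numbers of my own, not theirs, but it shows the thirty-second version of the check.

    # What average convection coefficient does a predicted surface
    # temperature imply?  h = Q / (A * dT).  Numbers are hypothetical.
    def implied_h(q_watts, area_m2, t_surface_c, t_ambient_c):
        return q_watts / (area_m2 * (t_surface_c - t_ambient_c))

    h = implied_h(q_watts=200.0, area_m2=0.04, t_surface_c=35.0, t_ambient_c=25.0)
    print(f"Implied h: {h:.0f} W/m^2K")  # ~500 W/m^2K, well beyond plausible air cooling

If the implied coefficient comes out an order of magnitude or two above what air can plausibly deliver, something in the model deserves a hard look before the results go into a briefing.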
That review didn't exactly leave me terribly interested in getting my own copy of that software, nor with much confidence in any of their subsequent calculations. They could easily have avoided that result by investing thirty seconds in a sanity check.
Keep in mind that in this short column I have used up most of the examples of computer modeling gone wrong that I have encountered over the past two decades. In the vast majority of cases, the analysts and the software perform extremely well. But it only takes one outlier in the distribution of analysis results to create some problems. So I suggest that people make a habit of maintaining some level of skepticism about the infallibility of analysis software and try to put their education to work once in a while.
REFERENCES
1. Fans of the movie “Animal House” will undoubtedly note that I have edited this quote to make it more suitable for use in Electronics Cooling® Magazine.