Around 1980, when I joined the Heat Transfer Group at Philips CFT, my boss threw a piece of material on my desk and asked for its thermal conductivity. It took me a week of digging through the literature to discover that this was no easy question to answer, despite the simple mathematics behind a steady-state measurement: k = q·L/(A·∆T), with k the thermal conductivity, q the dissipation, L the sample thickness, ∆T the temperature difference across the test sample, and A the area. All items on the right-hand side are easy to measure, or so it seemed.
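To make the arithmetic concrete, here is a minimal sketch in Python of that steady-state relation; all numbers are illustrative stand-ins, not measured values:

```python
# Steady-state 1D relation: k = q*L / (A*dT). Illustrative numbers only.
q  = 5.0       # dissipated power driven through the sample [W]
L  = 2.0e-3    # sample thickness [m]
A  = 25.0e-4   # cross-sectional area [m^2] (50 mm x 50 mm)
dT = 4.0       # temperature difference across the sample [K]

k = q * L / (A * dT)
print(f"k = {k:.2f} W/(m.K)")   # -> k = 1.00 W/(m.K)
```

The catch, as the rest of this column explains, is the assumption hidden in this formula: all of q must flow one-dimensionally through area A.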
This is the bottom line: to get high accuracy (let's say better than 5%) you need very expensive equipment (between $200,000 and $500,000). But be aware: even after buying such a tester, there is no guarantee that the value obtained in practice is the value you are looking for, because many materials of interest are anisotropic.
Why is the measurement of thermal conductivity so difficult, orders of magnitude more so than measuring electrical conductivity, for example? Because the simple equation is only valid for 1D heat conduction, and it is extremely difficult to realize 1D conduction in practice, especially for small samples, and even more so at higher temperatures, where radiation losses increase. This is one of the reasons why transient tests became popular, despite their much more difficult mathematics: these tests measure locally, not globally. However, when talking about accuracy, we face a problem of interpretation. Are the physics the same for steady-state and transient data, you may wonder? At a conference on thermal conductivity in Manchester in 1986, I witnessed, to my surprise, what was almost a real fight between believers in steady-state and transient methods over the acceptance of laser diffusivity tests as a standard test method, the argument being the physics, especially for materials that are to a certain degree transparent to radiation in a certain wavelength band.
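For readers unfamiliar with the transient side of the argument, the sketch below shows how a laser-flash measurement is typically reduced to a conductivity: the classic adiabatic result of Parker et al. gives the diffusivity from the rear-face half-rise time, after which k = α·ρ·c_p. Note that density and specific heat enter as separately measured quantities, which is part of why the interpretation debate arises. All input numbers are illustrative assumptions:

```python
# Laser-flash reduction (Parker et al., adiabatic case):
#   alpha = 0.1388 * L^2 / t_half, then k = alpha * rho * cp.
# Illustrative inputs; rho and cp must be measured separately.
L      = 2.0e-3   # sample thickness [m]
t_half = 0.050    # time to half of the rear-face temperature rise [s]
rho    = 2500.0   # density [kg/m^3]
cp     = 800.0    # specific heat [J/(kg.K)]

alpha = 0.1388 * L**2 / t_half   # thermal diffusivity [m^2/s]
k = alpha * rho * cp             # thermal conductivity [W/(m.K)]
print(f"alpha = {alpha:.3e} m^2/s, k = {k:.1f} W/(m.K)")
```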
Another problem hampering high accuracy is the lack of reference materials accepted by NIST (National Institute of Standards and Technology) in the range that is of interest to electronics cooling. Apart from some building materials, the "official" materials are Armco iron and graphite; unofficially, there is also the ceramic Pyroceram 9606. Why so few, you may ask? Selecting a reference material is not an easy task, and a very expensive one too. The material should be stored for hundreds of years and should remain homogeneous over this period, both in time and in space. Fortunately, other standards labs have more reference materials in stock, such as NPL (National Physical Laboratory) in England and JRC (Joint Research Centre) in Belgium [1].
The consequence of the foregoing is that, while the measurement accuracy claimed by individual researchers is of the order of 2%, laboratories participating in round-robin tests produce results differing from each other by 15% or more [2]. Even pure metals are no exception to this rule. About 50 years ago, the published values for, e.g., nickel and tungsten varied over an order of magnitude, and rumor has it that a wrong value for tungsten, used in calculating the proper thickness of the thermal shield of early space vehicles, led to fatal problems during re-entry. The large discrepancies found in the early literature could possibly be attributed to small impurity levels, to which the thermal conductivity is very sensitive.
On top of this, some frequently used materials, such as silicon, exhibit a large temperature dependence. For example, using the room-temperature value for Si when calculating the thermal behavior of a package at operational temperature can easily result in an error of 20%.
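To illustrate the size of this effect, here is a sketch using one commonly quoted power-law fit for silicon around room temperature, k(T) ≈ 148·(300/T)^1.3 W/(m·K); the fit constants are an assumption for illustration, not design data:

```python
# One commonly quoted power-law fit for silicon near room temperature.
# Assumed fit constants, for illustration only; use measured data for design.
def k_si(T_kelvin: float) -> float:
    return 148.0 * (300.0 / T_kelvin) ** 1.3

k_room = k_si(300.0)   # ~148 W/(m.K) at room temperature
k_op   = k_si(373.0)   # ~112 W/(m.K) at a 100 degC operating point
print(f"k(300 K) = {k_room:.0f} W/(m.K), k(373 K) = {k_op:.0f} W/(m.K)")
print(f"the room-temperature value overstates k by {k_room / k_op - 1:.0%}")
```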
Taking the above into account, one may indeed start to wonder about the accuracy of vendor-published data for engineering materials: plastics, ceramics, composites. These materials often exhibit anisotropy (e.g., FR4 shows a factor of three difference between in-plane and out-of-plane conductivity), and most standard tests measure in only one direction. In other words, because 3D conduction heat transfer dominates in practice, even a very accurate value obtained from a standards lab does not guarantee accuracy in a practical application. A special category is formed by thermal interface materials. While 1D heat conduction does predominate here, the contact resistances play an important role, and hence so does the pressure under which the tests are performed. Especially for high-performance materials, quoting the thermal conductivity alone gives a false impression of their behavior in practice [3].
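A minimal sketch of the point about interface materials: a 1D tester reports an effective conductivity that lumps the bulk conduction of the material together with two pressure-dependent contact resistances, so the quoted bulk k can be far from what the application sees. The numbers below are illustrative assumptions:

```python
# Measured interface resistance per unit area:
#   R_total = BLT/k + Rc1 + Rc2   [K.m^2/W]
# Illustrative numbers; contact resistances depend strongly on pressure.
BLT = 50.0e-6     # bond line thickness [m]
k   = 5.0         # bulk conductivity of the TIM [W/(m.K)]
Rc  = 2 * 5.0e-6  # two contact resistances [K.m^2/W]

R_total = BLT / k + Rc
k_eff = BLT / R_total   # the "effective" conductivity a 1D test reports
print(f"bulk k = {k:.1f} W/(m.K), effective k = {k_eff:.1f} W/(m.K)")
```

With these (assumed) numbers the contact resistances halve the effective conductivity, which is exactly why a bulk value alone misleads.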
In conclusion, when accurate predictions of temperature are the objective of the analysis, it is recommended to perform measurements in situ, by imposing a variety of well-known (hard) boundary conditions on the object under study (hence, refrain from natural or forced convection!) and fitting the unknowns of the object (usually the thermal interface resistance and the thermal conductivity) to match the measured temperatures. If this is not possible, the best guess can be obtained by consulting the Tech Data columns in this magazine from 1997 to 2009; see reference [4] for a summary.
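As a hedged illustration of this in-situ fitting approach, the sketch below imposes a cold plate as the hard boundary condition and fits two unknowns, the slab conductivity and the interface resistance, to temperatures "measured" at several power levels; the model, geometry, and readings are hypothetical placeholders:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical stack: heater -> interface (R_tim) -> slab (k, L, A) -> cold plate.
L, A, T_cp = 1.5e-3, 4.0e-4, 20.0            # slab [m, m^2], cold plate [degC]
q_levels   = np.array([1.0, 2.0, 4.0, 8.0])  # imposed dissipation [W]

# Synthetic stand-ins for measured case and junction temperatures [degC]:
T_case_meas = np.array([20.8, 21.5, 23.1, 26.1])
T_junc_meas = np.array([21.3, 22.5, 25.1, 30.2])

def residuals(p):
    k, R_tim = p
    T_case = T_cp + q_levels * L / (k * A)   # conduction drop across the slab
    T_junc = T_case + q_levels * R_tim       # drop across the interface
    return np.concatenate([T_case - T_case_meas, T_junc - T_junc_meas])

fit = least_squares(residuals, x0=[1.0, 0.1], bounds=([0.01, 0.0], [500.0, 10.0]))
k_fit, R_fit = fit.x
print(f"fitted k = {k_fit:.1f} W/(m.K), R_tim = {R_fit:.2f} K/W")  # ~5, ~0.5
```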
References
[1] http://www.evitherm.org/default.asp?ID=969
[2] Hulstrom, L., Tye, R., Smith, S., "Round Robin Testing of Thermal Conductivity Reference Materials," Thermal Conductivity, Vol. 19, Plenum Press, 1988, pp. 199-211.
[3] Lasance, C., Murray, C., Saums, D., Rencz, M., “Challenges in Thermal Interface Material Testing,” Proceedings of SEMI-THERM 22 Conference, Dallas, Texas, March 15-17, 2006.
[4] Wilson, J., "Technical Data Summary," ElectronicsCooling, August 2009.