ProCooling Testing Methods
Date Posted: Jun 12 2004
Author: pHaestus
Posting Type: Article
Category: FAQ's, Editorials, Q&A's
ProCooling's Waterblock Test Methods and Objectives By: pHaestus June 12, 2004
Before beginning any project, goals and objectives should be clearly defined; otherwise, a lot of money and effort can be spent without getting the desired outcomes. This is particularly true when putting together a test setup for waterblocks: the performance of the top blocks is close enough that fairly sophisticated equipment is required to differentiate them. In my case, test results need to be of high enough quality to answer two questions:
(1) How do relatively small changes in waterblock designs affect final performance?
(2) How do a large number of waterblocks perform relative to one another?
These questions follow from the fact that ProCooling's audience is mostly serious water cooling enthusiasts and DIY/small-scale waterblock builders. Accordingly, I think the ProCooling review process differs from what one will find elsewhere because of its extensive discussion of how waterblock design choices affect performance.
Keep in mind that the key issue in the above questions is performance. The majority of waterblock reviews use a "system testing" approach where the same pump, radiator, and tubing are used to test different waterblocks. That may work reasonably well for that individual system, but as soon as you change any of the components (perhaps adding a GPU block or changing pump) then the relative performance of the waterblocks being tested may change (due to differences in loop resistance). Since it is unlikely that end users will have EXACTLY the same cooling loop as a reviewer, it is far more useful to determine how well the waterblock will perform in many different scenarios.
The guiding philosophy for my testing is:
1) I want to be sure I publish performance results that represent the best I could achieve with the waterblock while testing it. I have a strategy (described below) for discarding bad mounts, and for repeat-testing when I get that odd "good" mount until that performance is the norm.
2) I want to make sure my results are reproducible enough that I can test the block again in several months and get results that are statistically the same. This involves regular testing and calibration of the diode reader, going back and rerunning a standard waterblock (one I have fiddled with for many months off and on) when things look funky, and generally just being careful. With a PC still being used for testing, this is HARD to accomplish.
3) I want to be certain that any issues with waterblock mounting and block usability are exposed in my testing, so that consumers do not buy a waterblock that performs excellently on a die simulator but poorly (due to hassles) in real systems. This is, in fact, one of the things I worry most about when making buying recommendations to friends.
You will see that these goals and my testing philosophy have a large effect on the equipment and methods I have chosen. Other testers will have different priorities and may make somewhat different choices.
Accuracy, Resolution, and Reproducibility
This is an important issue and perhaps the one least understood by the average reader of a hardware website. All measurements have error (random or otherwise), and that means that there is always some uncertainty associated with any measurement.
The magnitude of the uncertainty will depend upon the quality of the test equipment and the quality of the experimental design. Three definitions should be considered when examining test equipment: accuracy, resolution, and repeatability. Accuracy is a measure of closeness to the true value guaranteed by the manufacturer; it is possible to calibrate instruments to account for their deviation from the actual value. Accuracy of better than 0.3C out of the box is expensive, and instruments must be calibrated regularly to maintain their accuracy. In fact, I think it's fair to say that accuracy is the product of resolution and calibration. Resolution refers to the smallest unit that the instrument is capable of distinguishing; 0.1C or 0.01C resolution is typical for test instruments. Repeatability is a term that is synonymous with precision: how closely do repeated measurements come to one another? Measurement repeatability can be affected by many factors, but probe placement and resiliency are among the most important for the type of testing that we do. Let's look at these terms in a little more detail by comparing two tests that are run with (i) a dual Compunurse and (ii) a Fluke 2190A Thermometer w/ type T thermocouples.
The Compunurse uses a thin flat thermistor to make temperature readings and is commonly used in reviews to measure water temperature (if you dunk it in water with silicone goop to waterproof it) and CPU temperature (affixed to the side of the core as closely as possible). The Compunurse has an accuracy of +/-3C or so and a resolution of 0.1C. The Fluke 2190A is a thermocouple reader; with Type T thermocouples it is a good, inexpensive test instrument. The accuracy of the Fluke 2190A is +/-0.3C, and its resolution is also 0.1C. If we use these two instruments to measure the difference between two temperatures (say, water temperature at a waterblock's inlet and the CPU temperature at the side of the core), then the uncertainty of this temperature differential for the Compunurse would be [3^2 + 3^2]^(1/2), or +/-4.2C! For the Fluke, the uncertainty is [0.3^2 + 0.3^2]^(1/2), or +/-0.42C. This means that if two people doing exactly the same experiment (probably not possible with CPUs and motherboards) with Compunurses are no more than 8.4C apart in their final dT results, then they are operating within their instruments' specification. For Fluke testers, a deviation of more than 0.84C means something is wrong. THIS is why fairly sophisticated test equipment is needed; we are dealing with differences smaller than even the Fluke's 0.84C uncertainty.
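For readers who want to play with the numbers, here is a minimal Python sketch of that root-sum-of-squares error propagation (the function name and layout are mine; the accuracies are the ones quoted above):

```python
import math

def dt_uncertainty(probe_a_accuracy_c, probe_b_accuracy_c):
    """Combined uncertainty of a temperature differential (dT)
    measured with two independent probes, propagated as the
    root sum of squares of the individual accuracies."""
    return math.sqrt(probe_a_accuracy_c**2 + probe_b_accuracy_c**2)

# Compunurse: +/-3C accuracy per probe
print(round(dt_uncertainty(3.0, 3.0), 2))   # 4.24 -> the +/-4.2C above
# Fluke 2190A w/ Type T thermocouples: +/-0.3C per probe
print(round(dt_uncertainty(0.3, 0.3), 2))   # 0.42 -> the +/-0.42C above

# Two testers can disagree by up to twice the uncertainty and both
# still be within spec: ~8.4C for Compunurses, ~0.84C for Flukes.
```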
BrianS256 on this topic: With a public review, credibility is always in question, and the correctness of the data must be beyond question. ProCooling must be able to answer the question "how good is the data?" It turns out that the answer is more complicated than many people think.
All data can be called into question, which is why we have to be so careful with numbers. Most people have heard the Samuel Clemens (aka Mark Twain) quote, "There are lies, damned lies and statistics." Oh how true! Since we have the opposite goal (not lying), we have to carefully explain how to understand the numbers without (inadvertently) believing a lie. Honesty is more than just telling no lies; it means you make an effort to ensure that your audience clearly and unambiguously understands the truth. Lying by omission or by being vague is not an option.
It all starts with resolution. An instrument's resolution is the finest level of detail that it reports. For example, the Digitec 5810 HTs have 0.01C resolution. This means they can "see" a temperature difference of one-hundredth of a degree Celsius. However, all of us have seen how a readout can fluctuate between one number and another, or even read differently depending upon the time of day, the humidity of the air, the phase of the moon, etc. This leads us to repeatability.
Repeatability tells us how stable a measurement is when other "things" change. Electronics heat up or cool down, connections get jostled, input voltage from the wall socket changes, electronic parts age and change value, and many other factors conspire to keep measurement readouts of even a constant value from staying the same. To remain stable, extra circuitry is added for thermal, voltage, and component value compensation. Higher quality parts are also used to minimize these effects, manufacturing is often done by better trained personnel, and more quality control is enforced. All this (and more) effort adds to the expense of a high-end instrument. But... a stable measurement is still not necessarily accurate.
Most ProCooling readers will remember the accuracy problems seen with thermistors in the CPU sockets. They were repeatable (within the limits of their admittedly poor accuracy), but they just weren't right. By the same token, even expensive measurement instruments with high resolution and excellent repeatability can be wrong. This is where the instrument vendors (and some third party companies) make money even after they sell a $5000 temperature probe: certified calibration.
With a high-end instrument, the manufacturer typically calibrates it by comparing it to a known "correct" instrument of higher precision and accuracy. So, that $5,000 instrument may be calibrated by a bench setup worth $20,000. Then, that $20,000 instrument is calibrated and certified against a more expensive setup, and up it goes.
Each instrument must be validated across the full range and some sort of compensation must be added to make the readouts correct. Usually, this is a table of interpolated results. A real calibration also verifies that repeatability is within specification, and the result is an instrument that you can trust.
All this leads up to the final result of trust. You should be able to trust the number exactly up to and no further than the repeatability and accuracy of the instruments being used.
I deal with my need for small uncertainties in a manner that might be of interest to some of you. My water probes and CPU diode are all cross-calibrated to one another using a water bath over a large temperature range (5 to 55C). I recorded the offsets necessary to make all three temperature probes read equally in this water bath, and I can adjust the raw data from all the thermometers in all my tests. This means that the YSI water temperature probes will report the same number (to their 0.01C resolution), and that the CPU diode will read within 0.125C (its resolution) of the YSI probes when temperatures are actually equal. This makes the accuracy of the dT readings that I report MUCH better than the stated accuracy of the instruments I am using.
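To give a concrete picture of how such offsets get applied, here is a small Python sketch. The YSI probe offsets below are illustrative placeholders, not my actual calibration data; only the 0.428C diode offset (explained later in this article) is real:

```python
# Offsets (in C) determined from a shared water-bath calibration.
# The YSI offsets are placeholders; -0.428 is the real diode offset.
OFFSETS = {
    "ysi_inlet":  0.000,   # reference probe
    "ysi_outlet": -0.020,  # placeholder: reads 0.02C high vs. reference
    "cpu_diode":  -0.428,  # diode reads 0.428C high vs. inlet probe
}

def corrected(raw_reading_c, instrument):
    """Apply the cross-calibration offset so all probes agree."""
    return raw_reading_c + OFFSETS[instrument]

# Example: dT between CPU diode and waterblock inlet water
dT = corrected(41.25, "cpu_diode") - corrected(29.87, "ysi_inlet")
print(round(dT, 3))
```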
My testing methods have evolved slightly over the six months I have been doing waterblock testing. Fundamentally, though, the principles have remained the same. My testing is a two-part process:
(1) Repeated waterblock mountings at 1.50 GPM flow rate. This testing is done to verify that my test results are reproducible and representative of the waterblock's performance when properly installed. This testing is also very useful in identifying problems with waterblock mounting and ease of use. The following procedure is used:
- CPU and waterblock baseplate are cleaned of residual thermal paste using 99% ethanol and a lint-free linen cloth.
- A thin layer of thermal paste is applied to CPU. I use a special "quick settling" formulation of thermal paste provided kindly by Arctic Silver.
- Waterblock is remounted carefully, hoses are propped up so that they are perpendicular to motherboard and level, and PC is powered on.
- Flow rate is adjusted if necessary to 1.50 GPM.
- CPUBurn is run at high priority for 60 minutes and then temperatures are monitored for one minute. CPU diode temperature is logged using the Maxim software (2 Hz sampling rate), and water inlet and outlet temperatures are recorded at 10-second intervals.
This procedure has been used throughout all testing, but my thoughts on how best to characterize variability and reproducibility have changed somewhat. My initial philosophy was simply to repeat the above tests 10 times and record an average and standard deviation. In principle, this works well enough. I also found it sensible to throw away anomalously bad results (1-2C higher than normal) as being due to user error in mounting. What I found in some cases, though, was that a single test run would give anomalously good results (that PERFECT mount is a bit like the fish that got away).
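As a rough Python sketch of that screening logic (the dT values and the 1C cutoff are hypothetical; in practice this is a judgment call, and an anomalously good mount triggers remounting rather than simple averaging):

```python
import statistics

# Hypothetical dT results (C) from repeated mounts of one block
mount_dts = [10.42, 10.51, 12.10, 10.38, 10.47, 10.44]

best = min(mount_dts)
# Discard mounts well above the best as probable mounting errors.
kept = [dt for dt in mount_dts if dt - best < 1.0]

print(round(statistics.mean(kept), 2), round(statistics.stdev(kept), 3))
```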
(2) Waterblock testing as a function of flow rate at 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, and 2.5 GPM, to describe how waterblock performance changes with flow rate.
Checks on Test Results
It's a good idea to have a "standard" waterblock that you can return to in the event that there are changes in your setup or if you run into problems. In my case I use an aluminum top DTek Whitewater for this purpose. It's also possible to have some internal checks within your testing. For example, the CPU should always generate the same power at a given multiplier, FSB, and VCore. You can make an estimate of the CPU's power (in W) from the following equation:
W = [WB outlet temp (C) - WB inlet temp (C)] * flow rate (L/min) * 69.767
This number should be fairly constant across all your tests unless you change CPU frequency or voltage.
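Here's a quick worked example of that power check in Python (the GPM-to-L/min conversion is mine, since my flowmeter reads in GPM, and the readings are hypothetical):

```python
def cpu_power_watts(outlet_c, inlet_c, flow_gpm):
    """Estimate the heat carried away by the water, in watts.
    69.767 follows from the density and specific heat of water
    per (C * L/min)."""
    flow_lpm = flow_gpm * 3.785  # US gallons/min -> liters/min
    return (outlet_c - inlet_c) * flow_lpm * 69.767

# Hypothetical readings: a 0.25C rise across the block at 1.50 GPM
print(round(cpu_power_watts(30.12, 29.87, 1.50)))  # ~99 W
```

If this number drifts noticeably between runs at the same multiplier, FSB, and VCore, something in the setup deserves a second look.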
Here is a little diagram of what the test loop looks like at this moment:
It is a lot easier to see what's going on with a schematic like that than with the actual system, which is a mess of wires and hose.
The results that come out of the test loop are compiled in a spreadsheet for later analysis. The difference in temperature (dT) between the CPU diode and the water inlet temperature is the main value reported in our waterblock reviews, as this term characterizes how well a waterblock is cooling the CPU. This dT value is affected by the flow rate of the solution, and so graphs of dT vs. flow rate are required to fully describe a waterblock's performance (no single point will do it).
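To show what such a graph looks like, here is a minimal matplotlib sketch with made-up dT numbers (illustrative only, not measured results from any block we have reviewed):

```python
import matplotlib.pyplot as plt

# Hypothetical dT (CPU diode minus inlet water, C) across the flow sweep
flow_gpm = [0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5]
dt_c = [12.8, 12.1, 11.6, 11.3, 11.1, 10.9, 10.8, 10.7, 10.65]

plt.plot(flow_gpm, dt_c, marker="o")
plt.xlabel("Flow rate (GPM)")
plt.ylabel("dT: CPU diode - water inlet (C)")
plt.title("Waterblock performance vs. flow rate (illustrative)")
plt.grid(True)
plt.show()
```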
Digitec 5810 HT thermometers w/ YSI 700 series probes: The Digitec 5810 HTs are guaranteed 0.3C accuracy and 0.01C resolution. This isn't bad by itself, but the repeatability of their measurements is far better, and I have two 703A probes for immersion in water that read exactly 0.02C apart from one another over a large temperature range. This makes measuring dT across the waterblock or radiator (useful for calculating W) very accurate. These are fairly inexpensive thermometers (I now have 3), and it's possible to find NIB thermistor probes on eBay from time to time with NIST traceable certs. I am quite partial to my Digitecs because they "just work" and are very linear. The only drawback I can see is that there is no small-diameter 700 series probe for use with a die simulator or in a waterblock baseplate. I use all three thermometers when testing: one for waterblock inlet water temperature, one for waterblock outlet water temperature, and one for radiator intake air temperature.
GPI flowmeter: I am using a Great Plains turbine-style flowmeter, model number xxx. It has 0.01 GPM resolution and 1% accuracy from 0.5-3 GPM. This unit has been very reliable, but it does not have a digital or analog output for automation, and it introduces a fairly substantial pressure drop. This makes it difficult for me to generate performance numbers above 2.5 GPM in many cases.
Maxim MAX6655 EVSYS diode reader: For CPU diode readings I use a Maxim MAX6655 EVSYS diode reader system. This package was chosen simply because it allows me to get 0.125C resolution from the diode reader; when connecting Maxim ICs to motherboards and using MBM or SpeedFan, all I could manage was 1C. The MAX6655 EVSYS is a two-piece system: one PCB for the 6655 diode reader and one for an SMBus-to-parallel-port converter. I use a separate notebook PC to log diode readings from this unit.
Modified AMD 1700+ (TBredB) CPU: The CPU diode modifications are actually fairly unique in our community. I soldered copper twisted pair (from CAT5E cable) to the base of the CPU's diode pins (S-7 and U-7). The CPU used was a 1700+ TBredB (JIUHB 0320), and this rather drastic measure was taken so that I could calibrate the complete diode/diode-reader system by putting the CPU in Saran Wrap and dropping it into a water bath. Water temperatures were measured by both a YSI 703A probe and the MAX6655 (both inside the Saran Wrap), and a calibration curve was constructed by changing the water temperature with either boiling water or ice. It was found that the CPU diode reads 0.428C higher than the YSI 703A probe used for the waterblock inlet over a wide range of temperatures. This is why 0.428C is subtracted from all raw diode readings in my testing.
Test PC: Epox 8K3A+ motherboard, modified so that the diode pins are not connected to its onboard diode-reading circuit; 128MB generic RAM (at 2.8V); 12GB 5400rpm HDD; Windows XP Professional SP1; Leadtek GF3 Ti200 video card. A 40mm Sunon fan was added to the stock passive heatsink to assist in northbridge cooling. The power supply is a 350W Enermax unit; PSU voltages are now monitored with the Maxim EVSYS.
This section is in fact undergoing a huge upgrade at the moment. I have purchased a recirculating water chiller, a Rosemount differential pressure transmitter, and both an HP 5.5-digit DMM and scanner. A Fluke RTD thermometer has been donated, and a die simulator is being built as well. Most importantly, the entire test procedure is being automated with LabVIEW. This upgrade is expected to be completed by late summer/early fall. When it is completed, I should be able to test more or less 24/7 and just remount waterblocks two or three times per day. These purchases were made possible by the generosity of our forum members (you guys rock!).
A Word on Automation
Automation of test equipment and digital logging can make your life easier in several ways. First of all, it's a lot more convenient to just dump instrument readings to a text file than it is to constantly monitor the display of an instrument and manually enter the readouts. It also makes it possible to collect far more points and get a more statistically meaningful average and standard deviation. One can also look at temperature changes in log files over time and determine whether there was some deviation in the test setup (perhaps your central air kicked on, or someone opened a window in winter) or whether steady-state conditions have indeed been reached. Finally, it's important to keep backups of unaltered, timestamped data logs if at all possible. Then, when some wacko calls your testing biased (hey, the internet's a scary place), you can simply whip out your raw data as needed (external #2 in the logs is diode temp), call them a libelous buffoon, and emerge with credibility unscathed.
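For the curious, a bare-bones Python sketch of that kind of timestamped logging might look like this (read_instrument() is a stand-in for whatever query your meter's interface actually supports; it is not part of my setup):

```python
import random
import time
from datetime import datetime

def read_instrument():
    # Placeholder: substitute your meter's real serial/SMBus query here.
    return 45.0 + random.uniform(-0.1, 0.1)

with open("diode_log.txt", "a") as log:
    for _ in range(120):  # one minute at 2 Hz, like the diode logging above
        reading = read_instrument()
        # Timestamped, append-only raw log: easy to audit later.
        log.write(f"{datetime.now().isoformat()}\t{reading:.3f}\n")
        log.flush()
        time.sleep(0.5)
```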