www.ti.com
TMP007
SBOS685C – APRIL 2014 – REVISED JULY 2015
8.2.2.2.1 Narrow-Range Calibration
To begin calibration, select an object temperature (TOBJ) and a value for the die temperature (TDIE). With these
system temperatures stable, take a statistically significant number of samples of VSENSOR (the result shown in
register 00h); in this example, 64 samples were taken. Do not use the object-temperature readings given in
register 03h; these values are invalid before calibration.
To compensate for first-order drift in the system temperatures, it is often useful to normalize the data set. For
this purpose, normalize the sensor-voltage data (given in register 00h) for each temperature set by first finding
the best-fit line of the form shown in Equation 15:
Sensor (mV) = a × SampleNo + b        (15)
The normalized data for each data set is then calculated as shown in Equation 16:
SensorNORM (mV) = SensorMEAS − (a × SampleNo + b)        (16)
The normalized data, VSENSOR_NORM, have zero mean and are first-order corrected for long-term drift. The
standard deviation of each data set is then calculated to estimate the sensor noise, σ. Verify that the data are
limited by white noise and not by other effects. For a sensor-noise-limited data set, the standard deviation of
VSENSOR is typically < 1 μV, and preferably < 0.5 μV, after the first-order drift correction described previously.
If this condition is not satisfied, then the calibration accuracy is limited by external system factors (for example,
convection or conduction).
Repeat this process for each combination of TOBJ and TDIE for which the calibration is to be performed. The
normalized data are used only for evaluating the suitability of the data set for calibration, and not for the actual
calibration itself.
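The drift correction and noise check above can be sketched as follows (a hypothetical helper, not from the datasheet; the synthetic sample values are illustrative only):

```python
import random
import statistics

def drift_corrected_sigma(samples):
    """Fit Sensor = a * SampleNo + b (Equation 15) by least squares,
    subtract the fit (Equation 16), and return the standard deviation
    of the residuals as the sensor-noise estimate, in input units."""
    n = len(samples)
    x_mean = (n - 1) / 2.0
    y_mean = statistics.fmean(samples)
    sxx = sum((x - x_mean) ** 2 for x in range(n))
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(samples))
    a = sxy / sxx
    b = y_mean - a * x_mean
    residuals = [y - (a * x + b) for x, y in enumerate(samples)]
    return statistics.pstdev(residuals)

# Synthetic run of 64 samples (uV): slow linear drift plus white noise
random.seed(1)
run = [120.0 + 0.01 * i + random.gauss(0.0, 0.4) for i in range(64)]
sigma = drift_corrected_sigma(run)
# A sensor-noise-limited run should satisfy sigma < 1 uV
print(sigma < 1.0)
```

Because the residuals of a least-squares fit always have zero mean, the returned σ directly measures the white-noise floor described above.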
For calibration, the mean value, <VSENSOR>, is calculated for each combination of TOBJ and TDIE, as shown in
Table 15. Using the mean value minimizes error introduced by random noise. Based on the means, a set of
coefficients is generated based on a user-selected optimization criteria for Equation 7. Common criteria are
minimizing the maximum error, minimizing the average error, and so on. For a detailed discussion of optimization
methods, see SBOU142 — TMP007 Calibration Guide.
Table 15. Mean Values

TDIE (°C)    TOBJ (°C): 33 | 34 | 35 | 36 | 36.5 | 37 | 37.5 | 38 | 38.5 | 39 | 40 | 41
25           <VSENSOR> at each TOBJ
30           <VSENSOR> at each TOBJ
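A minimal sketch of the averaging step, assuming the per-run samples are stored in a dict keyed by the (TDIE, TOBJ) pair (the variable names and voltage values here are illustrative, not from the datasheet):

```python
from statistics import fmean

# Hypothetical layout: runs[(t_die, t_obj)] holds the drift-checked
# VSENSOR samples (uV) recorded at that temperature combination.
runs = {
    (25.0, 33.0): [118.9, 119.2, 119.0, 118.8],
    (25.0, 34.0): [123.4, 123.1, 123.3, 123.5],
    (30.0, 33.0): [101.2, 101.0, 101.4, 101.1],
}

# Mean <VSENSOR> per (TDIE, TOBJ) pair, as tabulated in Table 15
means = {pair: fmean(samples) for pair, samples in runs.items()}
print(means[(25.0, 33.0)])
```

Using the mean of each run, rather than any single sample, suppresses the random-noise contribution by roughly the square root of the sample count.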
8.2.2.2.2 Verifying the Calibration
The next step is to use the generated coefficients to verify the calibration and determine the accuracy of the
system. For common calibration (C), the same coefficients are used for all devices; for unit calibration (U), the
coefficients are calculated for each individual device. Common calibration includes device-to-device variation
and is therefore less accurate, but is much easier to implement. Unit calibration eliminates device variation and
is more accurate, but requires more effort to implement. The choice depends on the application requirements for
accuracy versus implementation effort.
Mean calibration error at each point is defined as shown in Equation 17:
EMEAN = (1/N) × Σ(i = 1 to N) (TOBJ_PREDICT − TOBJ_ACTUAL)
where
• TOBJ_PREDICT is the temperature based on the calibration coefficients.
• TOBJ_ACTUAL is the known object temperature, measured independently.
• N is the number of devices in the calibration set.
(17)
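Equation 17 amounts to averaging the prediction error over the calibration set; a sketch (the function name and sample temperatures are illustrative):

```python
from statistics import fmean

def mean_calibration_error(t_obj_predict, t_obj_actual):
    """EMEAN per Equation 17: the average of (TOBJ_PREDICT - TOBJ_ACTUAL)
    over the N devices in the calibration set."""
    return fmean(p - a for p, a in zip(t_obj_predict, t_obj_actual))

# Three devices measured against a known 37.0 degC object
e_mean = mean_calibration_error([36.8, 37.1, 37.3], [37.0, 37.0, 37.0])
print(round(e_mean, 4))  # → 0.0667
```

A positive EMEAN indicates the coefficients systematically over-predict the object temperature at that point; a negative value indicates under-prediction.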
The mean error graph (see Figure 54) provides an efficient method of understanding how the systematic errors
vary across the temperature ranges of interest. This graph also provides a means of weighing the benefits and
efforts of common versus unit calibration for a particular application.
Copyright © 2014–2015, Texas Instruments Incorporated
Product Folder Links: TMP007