Agilent has addressed this question; in practice a Reading% + Range% spec and a Reading% + Count spec are similar.
What is a "count" and how do I use it to compute accuracy?
http://www.home.agilent.com/agil ... 1000000572:epsg:faq
It became popular in the 1980s to specify instrument accuracy as a percent
of reading plus the "counts" on a specific range. This approach carried over on
some instruments into the early 1990s. The standard now is to report accuracy
specifications as a percent of reading plus percent of scale.
The term "counts" can cause some confusion as to how to compute actual
expected error for a given measurement. The "count" portion of the accuracy
specification equates to what is now reported as percent of scale.
One "count" is defined as a unit change (change of 1) in the last digit of
the meter display. Therefore, one must know the actual numerical engineering
unit represented by this last digit to determine the impact the count error will
have on the accuracy of a reading.
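To make this concrete, here is a minimal Python sketch (the function name and variables are my own, not from any Agilent document) that converts a counts term into the equivalent percent-of-scale term, given the engineering value of the last display digit:

def counts_to_pct_of_scale(counts, last_digit_value, full_scale):
    # One count is a unit change in the last display digit, so the absolute
    # error contribution is counts * last_digit_value; dividing by the
    # full-scale value of the range expresses it as a percent of scale.
    return counts * last_digit_value / full_scale * 100.0

# Anticipating the 3457 example below: 7 counts, last digit = 1 uV, 3 V range
print(counts_to_pct_of_scale(7, 1e-6, 3.0))   # ~0.00023 (% of scale)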
For example, the 1986 Hewlett-Packard catalog specifies the DC voltage
accuracy for the 3457 multimeter on the 3.0V range as 0.0017% of reading + 7
counts (assuming all other required conditions, such as warm-up time,
temperature, and calibration date, are satisfied).
On this range, the last digit on the display represents microvolts.
Therefore, with an input to the meter of precisely 3 volts, the error would
be:
3.000000V * 0.0017/100 + 7uV = 51uV + 7uV = 58uV.
Thus, in this example, the actual reading reported by the 3457 DMM for a 3V
source is specified to be within plus or minus 0.0019% of reading
(58uV / 3.000000V).
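For completeness, the same arithmetic checked in a short Python sketch (variable names are illustrative only; the numbers are taken from the catalog example above):

reading     = 3.000000   # volts applied to the 3457 on the 3.0 V range
pct_reading = 0.0017     # percent-of-reading term from the 1986 catalog spec
counts      = 7          # counts term from the same spec
resolution  = 1e-6       # one count = 1 uV on this range

abs_error = reading * pct_reading / 100.0 + counts * resolution
rel_error = abs_error / reading * 100.0

print(abs_error)   # about 5.8e-05 V, i.e. 58 uV
print(rel_error)   # about 0.0019, i.e. 0.0019% of reading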