Analysis and Processing of Measurement Errors in Measuring Instruments

During the measurement process, factors such as the detection method, instrumentation, and environmental or working conditions can lead to slight differences in repeated measurements of the same object under identical conditions. These variations are attributed to measurement errors.

Measurement Methods and Classification of Measurement Errors

Common Representations of Measurement Error

Absolute Error

The absolute error is defined as the difference between the measured value (x) and the true value (x₀) of the quantity being measured. Mathematically, it is expressed as:

Δx = x - x₀

However, since the true value is typically unknown, the actual value—often derived from a calibrated standard—is used instead. For instance, if a circuit has an actual current of 4.63A and the ammeter reads 4.65A, the absolute error would be +0.02A. The unit of absolute error matches the unit of the quantity being measured.

In many experimental settings, the concept of a correction value (c) is introduced. This value represents the difference between the actual value and the measured value, i.e., c = x₀ - x = -Δx. The correction value is equal in magnitude but opposite in sign to the absolute error. When applied, the sum of the measured value and the correction value gives the actual value.
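To make the sign conventions concrete, here is a minimal Python sketch of the two definitions, reusing the ammeter numbers above; the function names are illustrative only, not part of any standard library:

```python
def absolute_error(measured, actual):
    """Absolute error: Δx = x - x₀ (same unit as the measurand)."""
    return measured - actual

def correction(measured, actual):
    """Correction value: c = x₀ - x = -Δx."""
    return actual - measured

# Ammeter example from the text: actual current 4.63 A, reading 4.65 A.
dx = absolute_error(4.65, 4.63)   # +0.02 A
c = correction(4.65, 4.63)        # -0.02 A
print(f"absolute error = {dx:+.2f} A, correction = {c:+.2f} A")

# Adding the correction to the reading recovers the actual value: x + c = x₀.
assert abs(4.65 + c - 4.63) < 1e-9
```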

Example 2-5: A pressure gauge reads 1000.2 N/m², while a more accurate reference measurement of the same pressure gives 1000.5 N/m². The correction value for the pressure gauge is calculated as:

c = 1000.5 - 1000.2 = 0.3 N/m²

It’s important to distinguish between error and deviation: error refers to the difference between a measured value and the true value, whereas deviation is the difference between a measured value and the average of a set of data. Although these terms are sometimes used interchangeably, they have distinct meanings.
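A short sketch can illustrate the distinction; the repeated readings and the reference value below are assumptions chosen only for illustration:

```python
import statistics

# Hypothetical repeated readings of the same current (in A) and the
# actual value obtained from a calibrated standard.
readings = [4.65, 4.64, 4.66, 4.63, 4.65]
actual = 4.63

# Error: each reading compared with the actual (reference) value.
errors = [x - actual for x in readings]

# Deviation: each reading compared with the mean of the data set itself.
mean = statistics.mean(readings)
deviations = [x - mean for x in readings]

print("errors    :", [f"{e:+.3f}" for e in errors])
print("deviations:", [f"{d:+.3f}" for d in deviations])
```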

Relative Error

The relative error is the ratio of the absolute error (Δx) to the actual value (x₀), expressed as a percentage:

γ = (Δx / x₀) × 100%

Strictly speaking, the relative error should be referenced to the true value; in practice, the actual value is used instead. Relative error is dimensionless and gives a better basis than absolute error for comparing the accuracy of different measurements. For example, measuring 10A with an absolute error of 1mA results in a relative error of 0.01%, while measuring 100mA with the same absolute error yields a relative error of 1%. This is why relative error, rather than absolute error, should be used when comparing measurements of different magnitudes.
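A quick check of that comparison in Python (the helper name is illustrative):

```python
def relative_error(abs_error, actual):
    """Relative error in percent: γ = (Δx / x₀) × 100."""
    return abs_error / actual * 100.0

# The same 1 mA absolute error on two very different measured quantities:
print(f"{relative_error(0.001, 10.0):.2f} %")  # 10 A measurement   -> 0.01 %
print(f"{relative_error(0.001, 0.1):.2f} %")   # 100 mA measurement -> 1.00 %
```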

Reference Error

The reference error is the ratio of the absolute error (Δx) to the full-scale range (L) of the measurement system, usually expressed as a percentage:

ε = (Δx / L) × 100%

Unlike relative error, which is referenced to the actual value, reference error is referenced to the system's full-scale range. This makes it convenient for evaluating an instrument's performance over its entire range. Because the absolute error generally differs at different points of the range, the reference error varies with the measured value, so the maximum reference error is used to represent the worst case.
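As a sketch, assume a voltmeter with a 0–150 V range; the range and the error figures below are invented for illustration:

```python
FULL_SCALE = 150.0  # V, assumed full-scale range of the instrument

def reference_error(abs_error, full_scale):
    """Reference error in percent: ε = (Δx / L) × 100."""
    return abs_error / full_scale * 100.0

# The absolute error usually differs at different points of the scale,
# so the reference error varies across the range.
for reading, abs_err in [(30.0, 0.3), (75.0, 0.6), (140.0, 0.9)]:
    eps = reference_error(abs_err, FULL_SCALE)
    print(f"reading {reading:6.1f} V  |Δx| = {abs_err:.1f} V  ε = {eps:.2f} %")
```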

Maximum Reference Error

The maximum reference error is the largest absolute error over the range divided by the full-scale range, expressed as a percentage: ε_max = (|Δx|_max / L) × 100%. It is commonly referred to as the basic error of the system and serves as a key indicator of the system's accuracy.
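A sketch of how the basic error could be estimated from calibration data; the readings and standard values are assumed:

```python
FULL_SCALE = 150.0  # V, assumed full-scale range

# (instrument reading, value from the reference standard) across the range
calibration = [(30.0, 29.8), (60.0, 60.3), (90.0, 89.6),
               (120.0, 120.5), (150.0, 149.9)]

# Basic error = largest |Δx| over the range, expressed against full scale.
max_abs_error = max(abs(reading - actual) for reading, actual in calibration)
basic_error = max_abs_error / FULL_SCALE * 100.0
print(f"maximum reference (basic) error = {basic_error:.2f} % of full scale")
```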

Classification of Error Causes and Nature

Systematic Error

Systematic errors occur consistently during measurements and follow predictable patterns. They arise from instrument inaccuracies, incorrect measurement methods, or environmental influences. These errors are typically constant or follow a known trend and can often be corrected through calibration, improved procedures, or the use of compensation techniques.

In error theory, the term "accuracy" is often used to describe the magnitude of systematic errors: the smaller the systematic error, the closer the measurement lies to the true value and the higher the accuracy.
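For example, a constant systematic offset determined by calibration can simply be added back as a correction; a minimal sketch, with assumed numbers:

```python
# Correction value determined once against a standard (c = x₀ - x); the
# figure is assumed for illustration.
correction_value = -0.02  # A

raw_readings = [4.65, 4.64, 4.66]
corrected = [round(x + correction_value, 2) for x in raw_readings]
print(corrected)  # [4.63, 4.62, 4.64] with the constant offset removed
```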

Random Error

Under identical measurement conditions, random errors cause variations in the measured values, with no clear pattern. These errors result from unpredictable factors such as temperature fluctuations, electrical noise, or minor mechanical vibrations. While individual random errors cannot be eliminated, their statistical behavior allows for estimation using probability theory.

In error theory, "precision" is used to describe the magnitude of random errors. The smaller the random error, the higher the precision of the measurement.

Gross Error

Gross errors are large deviations from the true value, often caused by human mistakes, equipment malfunctions, or improper procedures. These errors are not consistent and can significantly affect the reliability of the data.

It is crucial to identify and eliminate gross errors during data analysis. If a measurement contains a gross error, the corresponding data should be discarded to ensure the integrity of the results.
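One widely used screening rule (not specified in the text, so treat its choice here as an assumption) is the 3σ criterion: a reading whose deviation from the mean exceeds three sample standard deviations is flagged as a suspected gross error. A rough sketch with assumed data:

```python
import statistics

# Assumed data set containing one obviously bad reading (10.50).
readings = [10.02, 9.98, 10.01, 9.99, 10.00, 10.03, 9.97, 10.01,
            10.00, 9.99, 10.02, 9.98, 10.01, 10.50, 10.00]

mean = statistics.mean(readings)
s = statistics.stdev(readings)

# Flag readings whose deviation from the mean exceeds 3·s; in practice the
# statistics are recomputed after removal and the check is repeated.
rejected = [x for x in readings if abs(x - mean) > 3 * s]
kept = [x for x in readings if abs(x - mean) <= 3 * s]

print("rejected:", rejected)   # -> [10.5]
print("kept    :", kept)
```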

Finally, it's worth noting that the three types of errors—systematic, random, and gross—can sometimes overlap or transform into one another. As our understanding of measurement systems improves, some previously classified random errors may be reclassified as systematic, and vice versa. Instruments equipped with correction and compensation mechanisms can significantly reduce the impact of systematic errors, leaving only random errors, which can then be analyzed statistically to estimate the overall uncertainty of the measurement.
