Background
Why did the Titanic sink? Experts who have studied the disaster, including the ship's remains discovered on the ocean floor in 1985, have concluded that no single factor is to blame. Instead, they believe it was a series of factors, called an "event cascade", that caused the Titanic to sink so quickly. Laboratory errors behave in much the same way: they rarely arise from a single cause.

The practice of modern medicine would be impossible without the tests performed in the clinical laboratory. Laboratory procedures require an array of complex precision instruments and a variety of automated and electronic equipment, all of which must be accurate and reliable. The purpose of any measurement is to provide information about a quantity of interest - a measurand. No measurement is exact. When a quantity is measured, the outcome depends on the measuring system, the measurement procedure, the skill of the operator, the environment, and other effects [1]. The dispersion of the measured values reflects how well the measurement is performed. Laboratory errors may be defined as "any defect from ordering tests to reporting results and appropriately interpreting and reacting on these" [2]. Monitoring and controlling laboratory errors is critical to the effective and efficient management of laboratory activities, and effective management is an indicator of reliable test results.
Clinicians compare most measurement results with reference values and with previous results from the same patient. Results should therefore be reliable and accurate, but in practice they suffer from error. When verifying the performance characteristics of a routine measurement procedure, repeatability experiments are usually performed, i.e. replicate measurements of the same sample with conditions kept as constant as possible. If the measuring system is sufficiently sensitive, a range of different results will usually be obtained. Which is the true result for the sample? We obviously cannot say, but clearly the results must contain some error, and the magnitude of the error differs between results. There is therefore uncertainty as to what the true value is. A similar dispersion of results is obtained if a patient sample is repeatedly measured under replicate conditions.
Understanding measurement uncertainty (MU)
All types of measurements have some inaccuracy due to bias and imprecision. Quality control (QC) checks are meant to monitor and establish the precision of test results. The materials used for quality control should have the same matrix (characteristics) as patient specimens: viscosity, turbidity, composition, colour, minimal vial-to-vial variability, etc. QC checks are usually run at the beginning of each shift, after an instrument is serviced, when reagent lots are changed, after calibration, and when patient results seem inappropriate.
The imperfection inherent in all measurements is called "uncertainty". When we speak of a measurement, we often want to know how reliable it is, so we need some way of judging the relative worth of a measurement. Traditionally, there has been the concept of error, but the term 'error' implies that the difference between the true value and a test result can be determined and the result corrected, which is rarely the case. In contrast, the more recent concept is measurement uncertainty: a non-negative parameter characterizing the dispersion of the values attributed to a measured quantity. Uncertainty of measurement is new to the medical laboratory. It represents the expected variability in a laboratory result if the test were repeated a second time. Hence, it is a measure of precision, expressed in terms of the standard deviation (SD). It provides quantitative estimates of the level of confidence that a laboratory has in the analytical precision of test results, and is therefore an essential component of a quality system for medical laboratories.
MU does not estimate error, but provides a quantitative estimate of where the true value of a measured analyte is believed by the laboratory to lie, with a stated confidence level. As such, the term measurement uncertainty tends to give the wrong impression, as it is actually a quantitative indication of the level of confidence, or belief, the laboratory has about the quality of a result. MU is therefore an essential parameter of the reliability of measurement results. Mistakes are a fact of life. It is the response to the error that counts.
Insight into accountable analytical laboratory errors
Even though automation, standardization and technological advances have significantly improved the analytical reliability of laboratory tests [3], analytical errors still occur. Analytical errors are classified into two categories: systematic errors and random errors. The total error is the sum of the systematic and random error components and is usually reported as a percentage (%). A common formulation is given below.
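One widely used convention for combining the two components, associated with the Westgard total analytical error model (the multiplier shown here is a convention, not prescribed by this article), is:

Total error (TE) = |bias| + z × SD (or CV %), with z = 1.65 (95 %, one-sided) or z = 2 (approximately 95 %, two-sided),

where bias is the systematic component and the SD (or CV %) is the random component.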
Systematic errors: A systematic error is caused by a defect in the analytical method, an improperly functioning instrument, a time-dependent change in instrument calibration, or the analyst. It produces a biased value, giving a mean value different from the true value and raising concerns about the accuracy of test results. Analytical bias refers to the extent to which a measurement, sampling, or analytical method systematically underestimates or overestimates the true value [4].
Random (stochastic or precision) errors: Random errors are unavoidable errors that can be caused by timing, temperature, or pipetting variations. They occur randomly during the measurement process and are independent of the operator performing the measurement [5]. Running quality control checks helps ensure that subsequent results are reliable.
Tools for analysing measurements
Accuracy: How close the measurements are to the true value. The higher the accuracy, the lower the error.
Precision: How reproducible are the measurements?
Bias: A systematic error that contributes to the difference between the mean of a large number of test results and an accepted reference value.
Total Allowable Error (TEa): The limit on the combined random and systematic error that can be tolerated in patient results without compromising their clinical usefulness. TEa may also incorporate other sources of error, such as some pre-analytical variation, biologic variation, and other factors (see the sketch below).
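As an illustration of how these metrics relate, here is a minimal sketch in Python; the QC values, target value and TEa goal are hypothetical:

```python
# Minimal sketch: deriving precision, bias and total error from replicate
# QC measurements and comparing against a TEa goal. All values hypothetical.
import statistics

qc_results = [5.1, 5.3, 4.9, 5.2, 5.0, 5.4, 5.1, 5.2]  # replicate QC values (mmol/L)
target = 5.0          # assigned/reference value of the QC material
tea_percent = 10.0    # total allowable error goal, 10 % for illustration

mean = statistics.mean(qc_results)
sd = statistics.stdev(qc_results)              # random error (imprecision)
cv_percent = 100 * sd / mean                   # imprecision as a CV %
bias_percent = 100 * (mean - target) / target  # systematic error

# Westgard-style total error estimate: bias plus ~2 SD of random error
total_error_percent = abs(bias_percent) + 2 * cv_percent

print(f"Mean = {mean:.2f}, SD = {sd:.2f}, CV = {cv_percent:.1f}%")
print(f"Bias = {bias_percent:+.1f}%, Total error = {total_error_percent:.1f}%")
print("Within TEa" if total_error_percent <= tea_percent else "Exceeds TEa")
```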
What is traceability?
Property of the result of a measurement or the value of a standard, whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties (ISO 15189, VIM).
A process whereby the indication of a measuring instrument (or a material measure) can be compared with a national or international standard for the measurand in question [6, 7].
What is measurement uncertainty?
ISO 15189 (3.17): The uncertainty of measurement is a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Traceability and uncertainty are fundamental properties of all quantitative measurements.
Why measurement uncertainty?
An estimate of uncertainty provides a quantitative indication of the quality of the measurement result. The reagents, the calibrations, the operations carried out during the execution of the measurement process, and the matrix effects of the sample may all contribute to the uncertainty. Thus, by estimating the uncertainty we can assess the dispersion of the measured values, combining all of the factors that influence the measurement result. The uncertainty of a test result needs to be taken into account when interpreting that result.
MU applies to the analytical procedure and not to pre- or post-analytical errors such as sample suitability, collection, transport, and transcription or reporting errors. Also excluded are interfering biological factors such as sex, age, body condition, pregnancy, immunity, and co-infection with other agents.
How to estimate measurement uncertainty?
There are two main approaches to estimating MU:

1) The "bottom-up" or "components" approach uses a "fish-bone" diagram to identify all sources of uncertainty; in that sense it analyses the dispersion of the whole process. The fish-bone diagram was initially drawn by a manufacturing team; this approach was later validated in medical testing by Dimech et al. [8]. The advantage of this approach is that the major sources of uncertainty are clearly identified and weighted individually. The results from Dimech et al. indicated that reagent batch-to-batch, lab-to-lab and operator variation contributed significantly to the total variation, whereas reading, volume and temperature contributed to a lesser extent. The disadvantage is that it is time-consuming, because it requires a complex statistical model and repeated measurement of each component.

2) The "top-down" or "control sample" approach is based on a statistical evaluation of test results from samples that have undergone the entire analytical process. It is well suited to medical diagnostic test methods because of the availability of quality control samples, which can be used to monitor whole-of-procedure performance and to directly estimate the combined MU of the test procedure. Thus, it makes use of the trueness and reproducibility of the sample. The advantages of this approach are the ready availability of repeatability data in diagnostic testing laboratories and the simplicity of the calculations. The disadvantage is that the result is a global MU for the entire procedure; it fails to differentiate between individual contributing components [8].
How is measurement uncertainty calculated?
The mean value and SD are calculated for each level of QC used for a given measurement procedure over a period long enough to incorporate as many routine procedure changes as possible; at least 30 values are usually considered adequate for an initial MU estimate. The parameter of MU is 1 SD (the standard measurement uncertainty, symbol u). Because the SD of the QC reflects the combined effect of all the individual uncertainties arising within the measuring system, the SD can be considered the combined standard uncertainty (uc) for patient results around the mean value of the particular QC.
Since ±1 SD covers only ~68 % of the dispersion of the obtained QC values, the uncertainty is widened by applying a coverage factor (k) to give an expanded measurement uncertainty (symbol U). Usually k = 2 is chosen, to provide a more useful 95.5 % coverage of the dispersion of results. Assuming such a dispersion also applies to patient results, a result could be reported in the form x ± y (95 % confidence), where y = 2 SD (i.e. 2 × uc = U). If several levels of QC are used, the MU should be calculated for each, and a judgement made as to whether they are sufficiently different to warrant their use with patient results falling in the range covered by each QC level.
Quantitative test results are usually interpreted by comparing the reported value against a reference or clinical decision value, or against a previous test value. For most methods the reference values used for interpretation have been determined or verified using the same method, and therefore uncertainty of measurement is most usefully estimated by the long-term imprecision obtained from in-house routine quality control data, expressed with 95% confidence limits.
Thus, U = 2 × SD (U = expanded measurement uncertainty, SD = standard deviation), or
mean ± 1.96 SD (± 1.96 CV %), can be recorded as the uncertainty of measurement estimate. A sketch of this calculation is given below.
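The following minimal sketch in Python illustrates the top-down calculation, assuming a hypothetical set of long-term QC values (in practice, at least 30 results gathered under routine conditions):

```python
# Minimal sketch of the top-down MU estimate from long-term QC data.
# The QC values below are hypothetical.
import statistics

qc_values = [4.8, 5.1, 5.0, 5.2, 4.9, 5.3, 5.0, 5.1, 4.9, 5.2,
             5.0, 5.1, 5.3, 4.8, 5.0, 5.2, 5.1, 4.9, 5.0, 5.1,
             5.2, 5.0, 4.9, 5.1, 5.0, 5.2, 5.1, 5.0, 4.9, 5.1]

mean = statistics.mean(qc_values)
u_c = statistics.stdev(qc_values)   # combined standard uncertainty (1 SD)
k = 2                               # coverage factor for ~95 % coverage
U = k * u_c                         # expanded measurement uncertainty

print(f"Mean = {mean:.2f}")
print(f"Combined standard uncertainty uc = {u_c:.3f}")
print(f"Expanded uncertainty U = {U:.3f} (k = {k})")
print(f"A patient result x near this level would be reported as x ± {U:.2f} (95 % confidence)")
```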
The Guide to the Expression of Uncertainty in Measurement (GUM) method to identify measurement uncertainty [9]
The GUM presumes that the uncertainty in a measurement result can arise from more than one source affecting the result: i) repeatability; ii) resolution; iii) reproducibility; iv) reference standard uncertainty; v) reference standard stability and environmental factors; vi) measurement-specific contributors (alignment, scale, evaporation, mismatch, etc.); vii) contributions required by the method (ASTM, ISO/IEC, military procedures, etc.); viii) accreditation requirements.
Standard uncertainty: It is the uncertainty of a measurement expressed as a standard deviation.
Combined uncertainty: It is calculated by squaring all the significant standard uncertainties, adding them together, and then taking the square root of the sum (a root-sum-of-squares combination; see the sketch after these definitions).
Expanded uncertainty: It is the combined standard uncertainty multiplied by a coverage factor, k.
Coverage factor (k): A numerical factor used as a multiplier of the combined standard uncertainty in order to obtain an expanded uncertainty. The coverage factor is essentially the same as the Z-score or Z-value in statistical terminology.
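To make the combination concrete, here is a minimal sketch in Python, assuming three hypothetical component standard uncertainties that are independent and expressed in the same unit as the result:

```python
# Minimal sketch of the GUM root-sum-of-squares combination.
# The component standard uncertainties below are hypothetical and assumed
# independent and expressed in the same unit as the result.
import math

components = {
    "repeatability": 0.10,
    "reference standard (calibrator)": 0.05,
    "reproducibility (between-run)": 0.08,
}

# Combined standard uncertainty: square each component, sum, take the root
u_c = math.sqrt(sum(u ** 2 for u in components.values()))

k = 2             # coverage factor (~95 % coverage, analogous to a Z-value)
U = k * u_c       # expanded uncertainty

print(f"Combined standard uncertainty uc = {u_c:.3f}")
print(f"Expanded uncertainty U = {U:.3f} (k = {k})")
```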
What is not a measurement uncertainty?
Mistakes made by operators are not measurement uncertainties. They should not be counted as contributing to uncertainty. They should be avoided by working carefully and by checking work. Tolerances are not uncertainties. They are acceptance limits which are chosen for a process or a product. Specifications are not uncertainties. A specification tells you what you can expect from a product. It may be very wide-ranging, including ‘non-technical’ qualities of the item, such as its appearance.
Conclusion
The laboratory should focus on minimizing uncertainties to improve its overall performance. Measurement uncertainty provides quantitative estimates of the level of confidence that a laboratory has in the analytical precision of test results, and is therefore an essential component of a quality system for medical diagnostic testing laboratories. Laboratories should base their measurements on calibrations that are traceable to national standards. This delivers particular confidence in measurement traceability when the measurements are quality-assured through measurement accreditation.
Acknowledgements
Acknowledgements are due to the Departments of Laboratory Medicine and Biochemistry, Krishna Institute of Medical Sciences (KIMS), Secunderabad.
Conflict of interest
The authors declare no conflict of interest.