Critical Calibration Parameters for Process Instrumentation

Many calibration specialists still follow processes that have been in place for a long time and have not evolved alongside instrumentation technology. Maintaining a performance criterion of 1% of span was once difficult, but today’s instrumentation can easily hold better than that level over an annual interval. In some cases, technicians are still using outdated test equipment that no longer matches the accuracy of modern instrumentation.

This article focuses on establishing baseline performance testing, which allows calibration parameters (primarily tolerances, intervals, and test-point schemes) to be studied and adjusted for optimal performance. Regulatory, safety, quality, efficiency, downtime, and other essential aspects are reviewed as well. A thorough grasp of these variables helps in making sound judgments about the calibration of plant process instrumentation and the improvement of outdated methods.

Calibration Basics

When calibrating model parameters, it is critical to incorporate a model discrepancy term to capture physics missing from the simulation, which can arise from numerical, measurement, and modelling errors. Ignoring the discrepancy can result in biased calibration parameters and predictions, even as the number of observations grows. When the simulation model contains model error and/or numerical error and only a small number of observations are available, a simple yet efficient calibration method based on sensitivity information can be used. The sensitivity-based calibration method captures the trend of the observation data by matching the slope of simulation predictions and observations at various designs, and then compensates for the remaining model mismatch with a constant value.
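
As a rough illustration of this idea (not the exact procedure from the research it describes), here is a minimal Python sketch. The model y_sim, the grid search over the parameter theta, and all of the numbers are hypothetical; the point is simply that the parameter is chosen so the simulated slope matches the observed slope, and a constant offset then absorbs the remaining model discrepancy.

```python
# Minimal sketch of sensitivity-based calibration: match slopes, then absorb
# the leftover mismatch into a constant discrepancy term. Illustrative only.
import numpy as np

def sensitivity_calibrate(x, y_obs, y_sim, theta_grid):
    """Pick theta whose simulated slope best matches the observed slope,
    then estimate a constant model-discrepancy offset."""
    obs_slope = np.gradient(y_obs, x)                 # observed trend
    best_theta, best_err = None, np.inf
    for theta in theta_grid:
        sim_slope = np.gradient(y_sim(x, theta), x)   # simulated trend
        err = np.sum((sim_slope - obs_slope) ** 2)
        if err < best_err:
            best_theta, best_err = theta, err
    delta = np.mean(y_obs - y_sim(x, best_theta))     # constant discrepancy
    return best_theta, delta

# Made-up example: observations follow 2.0 * x + 1.0, the model is theta * x,
# so the calibrated theta should be ~2.0 and the discrepancy delta ~1.0.
x = np.linspace(0.0, 1.0, 6)
y_obs = 2.0 * x + 1.0
theta, delta = sensitivity_calibrate(x, y_obs, lambda x, t: t * x,
                                     theta_grid=np.linspace(0.5, 3.0, 26))
print(theta, delta)
```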

In terms of parameter estimation and model prediction accuracy, the sensitivity-based calibration technique can be compared with the conventional least-squares calibration method and the Bayesian calibration method; the calibration process of all three has been demonstrated using a cantilever beam and a honeycomb tube crush example.

It turns out that the sensitivity-based strategy performs similarly to the Bayesian calibration method and outperforms the conventional least-squares method in both parameter estimation and prediction accuracy.

How to calibrate various devices

The following sections explain the calibration process for several common classes of devices.

Linear Instruments

The so-called zero-and-span approach is the most basic calibration procedure for an analogue, linear instrument. It is performed as follows (a short numerical sketch of why the steps may need repeating follows the list):

  1. Apply the lower-range value stimulus and wait for the instrument to stabilise.
  2. Adjust the “zero” setting until the instrument registers correctly.
  3. Apply the upper-range value stimulus and wait for the instrument to stabilise.
  4. Adjust the “span” adjustment until the instrument registers correctly at this point.
  5. Repeat steps 1 through 4 as needed to attain good accuracy at both ends of the range.
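
To see why the steps may need repeating, here is a minimal numerical sketch (not a field procedure). It assumes a hypothetical analogue transmitter with a 0-100 PSI input and a 4-20 mA output modelled as gain * (input + zero_bias), so that the “zero” and “span” adjustments interact; all of the numbers are made up.

```python
# Minimal illustration of interacting zero/span adjustments on an analogue
# instrument. The "zero" screw moves zero_bias and the "span" screw moves gain;
# because the gain also multiplies the zero bias, each correction disturbs the
# other end of the range, and the procedure converges only after a few passes.

def zero_and_span(gain, zero_bias, passes=4):
    for n in range(1, passes + 1):
        zero_bias = 4.0 / gain              # steps 1-2: make 0 PSI read 4 mA
        gain = 20.0 / (100.0 + zero_bias)   # steps 3-4: make 100 PSI read 20 mA
        print(f"pass {n}: 0 PSI now reads {gain * zero_bias:.3f} mA")
    return gain, zero_bias

zero_and_span(gain=0.15, zero_bias=28.0)    # creeps toward a true 4.000 mA
```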

Checking the instrument’s response at several points between the lower- and upper-range values is an improvement over this basic technique.

Trimming, the process of calibrating a “smart” digital transmitter, is a little different. Unlike the zero and span adjustments of an analogue instrument, the “low” and “high” trim functions of a digital instrument are normally non-interactive.

This means that during a calibration procedure, the low and high stimuli each need to be applied only once. The four general steps for trimming the sensor of a “smart” instrument are as follows (a short sketch of why a single pass is enough follows the list):

  • Apply the lower-range value stimulus and wait for the instrument to stabilise.
  • Execute the “low” sensor trim function.
  • Apply the upper-range value stimulus and wait for the instrument to stabilise.
  • Execute the “high” sensor trim function.
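
By contrast, here is a minimal sketch (made-up numbers and a hypothetical function name) of why the digital trims are non-interactive: the “low” trim anchors the correction at the low reference point and the “high” trim only sets the slope through the high point, so one pass through the four steps above is enough.

```python
# Illustrative two-point sensor trim for a hypothetical digital transmitter.

def make_correction(raw_low, raw_high, ref_low, ref_high):
    """Return a corrected-reading function built from one low trim and one high trim."""
    slope = (ref_high - ref_low) / (raw_high - raw_low)   # fixed by the "high" trim
    return lambda raw: ref_low + slope * (raw - raw_low)  # anchored by the "low" trim

# Hypothetical pressure sensor: 0 and 100 inH2O references applied,
# raw digital readings come back slightly off.
corrected = make_correction(raw_low=0.8, raw_high=101.2, ref_low=0.0, ref_high=100.0)
print(corrected(0.8), corrected(101.2))   # exactly 0.0 and 100.0
```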

Trimming a “smart” instrument’s output (its Digital-to-Analog Converter, or DAC) follows six broad steps (a sketch of what the transmitter does with the entered values follows the list):

  • Execute the “low” output trim test function, which drives the output signal to its low reference value.
  • With a precise milliammeter, measure the output signal and record the value once it has stabilised.
  • When the instrument prompts you, enter this measured current value.
  • Execute the “high” output trim test function, which drives the output signal to its high reference value.
  • With a precise milliammeter, measure the output signal and record the value once it has stabilised.
  • When the instrument prompts you, enter this measured current value.
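
As a rough illustration of what the transmitter does with the milliammeter readings you key in, here is a minimal sketch. It assumes the DAC behaves linearly and that the low and high output tests nominally command 4 mA and 20 mA; the function dac_trim and all of the numbers are hypothetical, not a vendor API.

```python
# Illustrative output (DAC) trim: build a correction so that the current the
# transmitter commands matches the current actually measured by the reference
# milliammeter.

def dac_trim(measured_low, measured_high, nominal_low=4.0, nominal_high=20.0):
    """Return a function mapping a desired output current (mA) to the value the
    DAC should be driven with so the true output matches."""
    slope = (measured_high - measured_low) / (nominal_high - nominal_low)
    def drive(desired_mA):
        return nominal_low + (desired_mA - measured_low) / slope
    return drive

# The reference milliammeter read 3.97 mA and 20.05 mA during the low/high tests.
drive = dac_trim(measured_low=3.97, measured_high=20.05)
print(drive(4.0), drive(20.0))   # drive values that yield a true 4 and 20 mA
```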

The lower- and upper-range values of a smart transmitter can be set once both the input and output (ADC and DAC) have been trimmed (i.e. calibrated against known-to-be-accurate standard references).

Non-linear Instruments

Calibration of nonlinear equipment is substantially more difficult than calibration of linear instruments. Because more than two points are required to define a curve, two adjustments (zero and span) are no longer sufficient.

Nonlinear devices include expanded-scale electrical meters, square-root characterizers, and position-characterized control valves.
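
As one concrete example of why zero and span alone are not enough, here is a minimal sketch of a square-root characterizer of the kind used with differential-pressure flow measurement. The 0-100 inH2O range and 4-20 mA output are assumed purely for illustration.

```python
# Square-root characterizer: the output follows the square root of the input
# fraction, so a mid-scale input does not produce a mid-scale output and two
# adjustments cannot describe the curve.
import math

def square_root_characterizer(dp, dp_max=100.0, out_lo=4.0, out_hi=20.0):
    """Map differential pressure (inH2O) to a 4-20 mA flow signal."""
    fraction = max(0.0, min(1.0, dp / dp_max))
    return out_lo + (out_hi - out_lo) * math.sqrt(fraction)

for dp in (0.0, 25.0, 50.0, 75.0, 100.0):
    print(dp, round(square_root_characterizer(dp), 2))
# 50 inH2O (50% of input span) gives ~15.3 mA, i.e. about 70.7% of output span
```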

Because each nonlinear instrument has its own calibration technique, I’ll refer you to the manufacturer’s literature for your specific device. I will, however, offer one piece of advice: document all of the adjustments you make when calibrating a nonlinear device (e.g., how many turns on each calibration screw) in case you need to return the instrument to its original condition.

Discrete Instruments

The word “discrete” means individual or distinct. In engineering, a “discrete” variable or measurement refers to a true-or-false condition. As a result, a discrete sensor can only tell you whether the measured variable is above or below a predetermined setpoint.

Discrete instruments, like continuous instruments, require periodic calibration. The set-point or trip-point is the only calibration adjustment available on most discrete devices. There are two adjustments on some process switches: a set-point adjustment and a deadband adjustment.

The aim of a deadband adjustment is to provide a configurable buffer region that must be traversed before the switch changes state. For example, a low air pressure switch with a set-point of 85 PSI and a deadband of 5 PSI would trip as the pressure falls below 85 PSI, but would not return to its normal state until the pressure climbed back above 90 PSI (85 PSI + 5 PSI).
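
Here is a minimal sketch of that behaviour, using the 85 PSI set-point and 5 PSI deadband from the example above (the class and values are illustrative only):

```python
# Low air pressure switch with deadband: trip when the pressure falls below the
# set-point, but do not reset until it climbs back above set-point + deadband.

class LowPressureSwitch:
    def __init__(self, setpoint=85.0, deadband=5.0):
        self.setpoint = setpoint
        self.deadband = deadband
        self.tripped = False

    def update(self, pressure):
        if not self.tripped and pressure < self.setpoint:
            self.tripped = True                               # falls through 85 PSI
        elif self.tripped and pressure > self.setpoint + self.deadband:
            self.tripped = False                              # rises back above 90 PSI
        return self.tripped

switch = LowPressureSwitch()
for p in (95, 88, 84, 87, 89, 91):
    print(p, switch.update(p))   # trips at 84 and stays tripped until 91
```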

When calibrating a discrete instrument, be careful to check the set-point accuracy in the correct direction of stimulus change. For the air pressure switch above, this means verifying that the switch changes state as the pressure drops below 85 PSI, not as it rises above 85 PSI.

How often should instruments be calibrated?

Many factors influence instrument performance and, as a result, the required calibration interval, including:

  • Manufacturer’s instructions
  • Manufacturer’s accuracy specification
  • Stability specification (short-term vs. long-term)
  • Required process accuracy
  • Typical environmental conditions (harsh vs. climate-controlled)
  • Regulatory or quality-standard requirements
  • Costs incurred as a result of a failed (out-of-tolerance) condition

Other factors that influence the calibration period include:

Instrument’s workload: If the instrument is used frequently, it should be calibrated more frequently than if it is used infrequently.

Environmental Conditions: An instrument that is used in harsh environments should be calibrated more frequently than one that is used in more stable environments.

Transportation: If the instrument is transported to various locations, it should be calibrated more frequently.

Accidental drop/shock: If you accidentally drop or shock an instrument, it’s a good idea to have it calibrated.

Intermediate checks: In some instances, the instrument can be checked between calibrations by comparing it to another instrument or an internal reference.

Pass/Fail tolerance

An instrument’s criticality analysis is a good place to start when setting a pass/fail tolerance. Tolerance is also inextricably linked to the earlier question of calibration frequency: a “tight” tolerance may call for more frequent testing with a highly accurate test standard, whereas a less critical measurement made with a very accurate instrument may not need calibration for years.
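
As a simple illustration of applying a pass/fail tolerance expressed as a percentage of span, here is a minimal sketch with made-up as-found data for a hypothetical 0-200 inH2O transmitter and a 0.5% of span tolerance:

```python
# Pass/fail check of as-found calibration data against a % of span tolerance.

def check_tolerance(points, span, tol_pct):
    limit = span * tol_pct / 100.0          # allowed error in engineering units
    return [(applied, as_found - applied, abs(as_found - applied) <= limit)
            for applied, as_found in points]

as_found_data = [(0.0, 0.2), (50.0, 50.6), (100.0, 101.3), (150.0, 150.9), (200.0, 200.4)]
for applied, error, passed in check_tolerance(as_found_data, span=200.0, tol_pct=0.5):
    print(f"{applied:6.1f} inH2O  error {error:+.2f}  {'PASS' if passed else 'FAIL'}")
# The limit is 1.0 inH2O, so the 100 inH2O test point fails at +1.3 inH2O.
```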

Calibration procedures

Another question to address is how to develop and apply proper calibration procedures and practices. In the vast majority of cases, a site’s procedures have not changed over time. Many calibration professionals adhere to procedures that have been in place for many years, and it is not uncommon to hear, “This is how we have always done it.” Meanwhile, measurement technology continues to advance and become more precise. The calibration technician’s methods and procedures should improve as measurement technology advances.

Finding the optimum

Finally, plant management should be aware that the tighter the tolerance, the more expensive an accurate measurement will be. All instruments drift to some extent, and each make and model of instrument has its own “personality” in terms of performance. Only by recording calibration results in a way that allows performance and drift to be examined can optimal calibration parameters be determined.
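
As a rough illustration of how such records can be used, here is a minimal sketch that fits a drift rate to hypothetical as-found errors (as a percentage of span) from successive annual calibrations and estimates when a 1% of span tolerance would be reached. All of the numbers are made up.

```python
# Fit a simple linear drift rate to recorded as-found errors and project when
# the calibration tolerance would be consumed.
import numpy as np

years = np.array([0, 1, 2, 3, 4], dtype=float)
as_found = np.array([0.05, 0.18, 0.31, 0.42, 0.55])   # % of span at each annual check

drift_per_year, initial_error = np.polyfit(years, as_found, 1)
tolerance = 1.0                                        # % of span

years_to_limit = (tolerance - initial_error) / drift_per_year
print(f"drift ~{drift_per_year:.2f} % of span per year; "
      f"tolerance reached in ~{years_to_limit:.1f} years")
```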

Once these parameters have been established, the costs of performing a calibration can be calculated to see if purchasing a more sophisticated instrument with better performance specifications or purchasing more accurate test equipment is justified in order to improve process performance even further.