If you use WinCal™ to calculate on-wafer calibration coefficients, you might notice a difference in those values when comparing the built-in SOLT calibration from your Vector Network Analyzer with an LRRM calibration. This is normal, and here's why.

A fundamental limitation of the SOLT calibration algorithm is that all of the standards (short, open, and load on each port, plus the thru connection) must have fully known electrical behavior. This behavior is typically condensed into a 'cal-kit', a collection of equivalent-circuit models for the standards. Any difference between the modeled electrical behavior and the actual behavior during calibration produces errors in the derived error-correction terms and causes inaccurate correction of measurement data.
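
To make the idea of a cal-kit model concrete, here is a minimal sketch (assuming the common polynomial fringing-capacitance model for an open standard, with offset delay and loss ignored; the coefficient values are illustrative, not taken from any real kit) of how a modeled reflection coefficient is computed from cal-kit terms:

```python
import numpy as np

def open_standard_gamma(freq_hz, c0, c1, c2, c3, z0=50.0):
    """Modeled reflection coefficient of an 'open' cal-kit standard.

    Uses the common polynomial fringing-capacitance model
    C(f) = c0 + c1*f + c2*f^2 + c3*f^3. Offset delay and loss are
    omitted for brevity; coefficients below are illustrative only.
    """
    f = np.asarray(freq_hz, dtype=float)
    c = c0 + c1 * f + c2 * f**2 + c3 * f**3        # fringing capacitance, F
    jwc = 1j * 2 * np.pi * f * c
    # Shunt capacitor terminating the reference plane: Z = 1/(jwC)
    return (1.0 - jwc * z0) / (1.0 + jwc * z0)

# A hypothetical open with a few fF of fringing capacitance
freqs = np.linspace(1e9, 110e9, 5)
gamma = open_standard_gamma(freqs, c0=5e-15, c1=1e-27, c2=0.0, c3=0.0)
print(np.abs(gamma), np.angle(gamma, deg=True))
```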

In the coaxial world the behavior of the standards is consistent and SOLT is a reasonable choice. In the probing world, our ability to place probes consistently on the standards is limited by the positioning accuracy of the prober (a small effect) and by how closely the operator reproduces the probe placement used when the cal-kit model terms were originally determined (a large effect, typically no better than 5 µm). Other factors, such as slight variations among standards and probes, also contribute to differences between the cal-kit model behavior and the actual behavior.

Advanced on-wafer calibrations were developed to reduce the dependence on fully known standards that SOLT requires. The LRRM calibration requires a fully known line (thru) standard, which defines where the measurement reference planes will be located, and a series resistor-inductor match standard whose resistance is known from its dc value and whose inductance is unknown but constant with frequency. Each reflect standard need only behave identically on the two ports; ideally the two reflect standards are significantly different from each other, such as an open and a short. The physical standards required for LRRM are the same as for SOLT; the calibration simply assumes less about their behavior.
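
As a rough illustration of the match standard's model, the following sketch computes the reflection coefficient of a series resistor-inductor load. The dc resistance is taken as known, while the inductance stands in for the single unknown that LRRM extracts; the values here are hypothetical:

```python
import numpy as np

def match_standard_gamma(freq_hz, r_dc, l_series, z0=50.0):
    """Reflection coefficient of the LRRM match standard modeled as a
    series resistor-inductor: Z(f) = R_dc + j*2*pi*f*L.

    R_dc is known from a dc measurement; L is the single unknown the
    LRRM algorithm solves for, assumed constant with frequency.
    """
    f = np.asarray(freq_hz, dtype=float)
    z = r_dc + 1j * 2 * np.pi * f * l_series
    return (z - z0) / (z + z0)

# A nominally 50-ohm load with a few pH of parasitic inductance (placeholder values)
freqs = np.linspace(1e9, 110e9, 5)
print(np.abs(match_standard_gamma(freqs, r_dc=50.2, l_series=5e-12)))
```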

Both SOLT and LRRM calibrations are imperfect: both rely on constant error terms that assume a single mode of propagation. In practice no probe is perfect in this regard, so the error-correction terms are inaccurate to some degree (GS probes are worse than GSG, and wide-pitch probes tend to be worse than narrower-pitch probes). With SOLT, re-measuring the calibration standards will reproduce the cal-kit definitions, with the only deviation due to system repeatability; this is forced by the nature of the SOLT calibration math. However, measuring an additional structure with known behavior that wasn't used during calibration will reveal both the inherent errors of an imperfect calibration and the errors caused by differences between actual standard behavior and cal-kit definitions.
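
The "forced" behavior of SOLT re-measurement can be seen directly in the one-port error-model math. The sketch below (idealized cal-kit values and made-up raw readings) solves the standard three-term model from three standards and then applies the inverse correction to the very same raw data; the cal-kit definitions come back exactly, no matter how the real standards actually behaved on the wafer:

```python
import numpy as np

def solve_one_port_terms(gamma_actual, gamma_measured):
    """Solve the 3-term one-port error model from three standards.

    Model: G_m = e00 + (e10e01 * G_a) / (1 - e11 * G_a), rewritten as a
    linear system in (e00, e11, delta_e) with delta_e = e00*e11 - e10e01.
    """
    ga = np.asarray(gamma_actual, dtype=complex)
    gm = np.asarray(gamma_measured, dtype=complex)
    a = np.column_stack([np.ones(3), ga * gm, -ga])
    e00, e11, delta_e = np.linalg.solve(a, gm)
    return e00, e11, e00 * e11 - delta_e

def correct(gamma_measured, e00, e11, e10e01):
    """Apply the inverted one-port error model to raw reflection data."""
    gm = np.asarray(gamma_measured, dtype=complex)
    return (gm - e00) / (e10e01 + e11 * (gm - e00))

# Idealized cal-kit definitions and hypothetical raw (uncorrected) readings
kit = np.array([1.0, -1.0, 0.0])                      # open, short, load
raw = np.array([0.92 + 0.1j, -0.85 - 0.2j, 0.05 + 0.02j])
e00, e11, e10e01 = solve_one_port_terms(kit, raw)

# Correcting the same raw data returns the cal-kit values, by construction.
print(correct(raw, e00, e11, e10e01))                 # ~[1, -1, 0]
```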

In advanced calibrations the reflect standards do not have forced behavior, so the inherent errors show up when examining the corrected behavior of the original reflect standards. This is most apparent with the short standard, which stresses the non-ideality the most by forcing energy onto the outside of the coax (a parasitic ground mode). Indeed, it is not unusual for a corrected reflect standard to show a reflection coefficient greater than unity. While this is clearly non-physical, it would be wrong to conclude that, because SOLT re-measurement of its own standards never shows this, SOLT will provide a better result for an arbitrary DUT. In practice LRRM provides more consistent calibrations because it is the least sensitive to the variations in probe placement, probes, and standards that inevitably occur. The bottom line is that our users see better measurement reproducibility with LRRM than with SOLT, particularly at mm-wave frequencies.
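
One simple sanity check that follows from this is to scan corrected reflect-standard data for magnitudes above unity. The snippet below is only an illustration with made-up numbers, not a WinCal feature:

```python
import numpy as np

def flag_nonphysical_reflects(freq_hz, gamma_corrected, tol=0.0):
    """Return the frequencies where a corrected passive reflect standard
    shows |Gamma| > 1, a non-physical result that points to residual
    calibration error. Names and data here are hypothetical."""
    f = np.asarray(freq_hz, dtype=float)
    mag = np.abs(np.asarray(gamma_corrected, dtype=complex))
    mask = mag > 1.0 + tol
    return f[mask], mag[mask]

# Hypothetical corrected short data drifting slightly above unity at mm-wave
freqs = np.array([10e9, 40e9, 80e9, 110e9])
gamma_short = np.array([-0.999 + 0.01j, -0.998 + 0.03j,
                        -1.004 + 0.05j, -1.012 + 0.08j])
print(flag_nonphysical_reflects(freqs, gamma_short))
```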

Have you experienced this? We'd love to hear your thoughts on the matter, so drop a comment below.