A log-likelihood ratio (LLR) measures the reliability (and uncertainty) of a binary random variable being a zero versus being a one. LLRs serve as inputs to many implementations of decoding algorithms, which in turn output LLRs. Mismatched outputs arise, for example, when a decoder uses approximations in its computations, e.g., the symbol-by-symbol max-log a posteriori probability (APP) algorithm instead of the exact forward-backward (log-APP) algorithm, or Hagenauer's approximation of the box function. We propose post-processing the output LLRs to correct part of the mismatch. This post-processing is a function of the statistics of the input LLRs. As examples, we study the effect of incorrectly scaled inputs to the box function leading to mismatched outputs, Hagenauer's approximation of the box function, and the effect of compensating LLR mismatches on the performance of iterative decoders.
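To make the box function and Hagenauer's approximation concrete, the following minimal Python sketch (not part of the paper; the function names are illustrative) contrasts the exact box operation on two LLRs, 2 atanh(tanh(a/2) tanh(b/2)), with Hagenauer's min-magnitude approximation:

```python
import math

def boxplus_exact(a: float, b: float) -> float:
    # Exact box function: 2 * atanh(tanh(a/2) * tanh(b/2)),
    # written in an equivalent log-domain form.
    # Note: exp() can overflow for very large |LLR| values;
    # a production implementation would guard against that.
    return math.log((1.0 + math.exp(a + b)) / (math.exp(a) + math.exp(b)))

def boxplus_hagenauer(a: float, b: float) -> float:
    # Hagenauer's approximation: keep the product of the signs,
    # take the minimum of the magnitudes.
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

# Comparing the two on a pair of LLRs exposes the approximation mismatch:
# the approximation overestimates the output reliability, since the exact
# magnitude is always strictly below min(|a|, |b|).
a, b = 1.5, -2.0
print(boxplus_exact(a, b))      # ≈ -1.056
print(boxplus_hagenauer(a, b))  # -1.5
```

A mismatch-compensating post-processing step of the kind the abstract proposes would then map the approximate output back toward the exact one, based on the statistics of the input LLRs.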