Author: Gerald de Fontenay, Global Project Manager
SGS Life Science Services, Geneva


The goal of method validation is to provide evidence that the result from each sample reflects the actual batch content with little deviation. Beyond satisfying regulatory requirements, the overall purpose is to evaluate method performance, which enables informed decisions to be made from the results reported against the product specification.

Official requirements for method validation are clearly defined in the ICH Q2(R1) guideline, which specifies a minimum number of qualification/verification experiments per characteristic to be validated. For earlier development stages, these requirements are not mandatory. However, a method generating poor data can lead to poor development decisions. It is therefore important that method developers provide users with verification results and ensure that users clearly understand the method's limits.

Estimating the risk of generating false-negative or false-positive results based on the validation results can help users to clearly understand two important principles:

  • A false-positive result is a Patient-related risk. Because the result is within specification limits, it is unlikely to trigger any laboratory investigation, but the released batch may ultimately have efficacy and/or safety issues. Efficacy may be compromised if the actual content is below the lower specification limit; conversely, the batch may raise safety concerns if the actual content is above the upper specification limit (depending on the toxicity of the active substance and/or its related substances). Potential risk to the patient must be reduced as much as possible.
  • A false-negative out-of-specification (OOS) result has an Industrial impact, as a batch may be rejected even though its actual content is within specification limits. If goods are discarded on the basis of such results, the impact is financial and entirely unnecessary. False-negative results also generate cost through extensive OOS investigations that reach no clear conclusion. This risk cannot be neglected by the industry.

Here we describe how some calculations performed in Excel can support such an in-depth evaluation of an assay method, using known (pre-)validation results.


USP Medicines Compendium chapter <10> (“Assessing Validation Parameters for Reference and Acceptable Procedures”, available from USP; referred to here as MC<10>) describes calculations (using the Excel NORMDIST function) that take into account the precision and accuracy results obtained during method validation.

In MC<10>, a method is evaluated as a whole unit, wherein the probability of obtaining a result within specifications is calculated for a “perfect batch” (a batch having an actual content of 100.0%).

[Eq 1] P = NORMDIST(Upper, Mean, SD, TRUE) - NORMDIST(Lower, Mean, SD, TRUE)
        Upper = Upper limit of the acceptance range (105.0%)
        Lower = Lower limit of the acceptance range (95.0%)
        Mean  = Accuracy result (recovery percentage)
        SD    = Standard deviation (from a precision study)
        TRUE  = Logical operator (provides the area under the curve between the two limits)

The result P is a probability that should be greater than 95%, meaning the risk of generating a false-negative OOS result is at or below 5%. Several examples of this concept have been illustrated previously [1, 2].
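The same calculation can be sketched outside Excel. A minimal sketch using Python's standard-library `statistics.NormalDist`, whose `cdf` method corresponds to NORMDIST with the TRUE (cumulative) flag; the function name and the example figures are illustrative, not taken from MC<10>:

```python
from statistics import NormalDist

def prob_within_spec(mean: float, sd: float,
                     lower: float = 95.0, upper: float = 105.0) -> float:
    """Eq 1: NORMDIST(Upper, Mean, SD, TRUE) - NORMDIST(Lower, Mean, SD, TRUE),
    i.e. the probability that a reported result falls inside the acceptance range."""
    dist = NormalDist(mu=mean, sigma=sd)
    return dist.cdf(upper) - dist.cdf(lower)

# "Perfect batch" measured with an unbiased method (recovery 100.0%)
# and a precision SD of 1.5% (illustrative values):
p = prob_within_spec(mean=100.0, sd=1.5)
```

With these illustrative figures P is about 0.999, so the false-negative risk (1 - P) is well below the 5% threshold.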

Here, we extend the same concept to a broader range of actual content, computing the same risk for batches with an actual content anywhere between 90.0% and 110.0%. This range includes batches that are truly OOS, so that the risk of generating false-positive results can also be evaluated; this is the Patient-related risk, the one that should be managed first.

The methodology is applied to a chromatographic method for assay of an active substance in a drug product (specifications 95.0-105.0%). Calculations with the same NORMDIST Excel function were therefore performed with an additional parameter:

    ACTUAL = Actual content of the batch

[Eq 1] is then modified as follows:

    [Eq 2] P = NORMDIST(Upper, Mean x ACTUAL, SD, TRUE)
             - NORMDIST(Lower, Mean x ACTUAL, SD, TRUE)

If ACTUAL is within specification limits (95.0-105.0%), P represents the probability of obtaining a result within the specifications, and (1-P) represents the risk of generating a false-negative OOS result (i.e., the Industrial risk of rejecting a batch that is actually within the specifications).

Industrial risk is illustrated in Figure 1, which depicts the estimates obtained for a batch with an ACTUAL content of 97.0%, assayed by a method having Accuracy = 99.0% and RSD = 1.5%.
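The Figure 1 estimate can be reproduced with Eq 2. A minimal sketch, assuming Accuracy enters as a multiplicative recovery factor and the SD is taken as an absolute value on the percentage scale (both modeling assumptions on our part):

```python
from statistics import NormalDist

LOWER, UPPER = 95.0, 105.0  # specification limits (%)

def prob_in_spec(actual: float, accuracy: float, sd: float) -> float:
    """Eq 2: probability of a reported result inside the specifications
    for a batch whose true content is `actual` (%)."""
    mean = accuracy / 100.0 * actual   # Mean x ACTUAL
    dist = NormalDist(mu=mean, sigma=sd)
    return dist.cdf(UPPER) - dist.cdf(LOWER)

# Batch at 97.0% assayed with Accuracy = 99.0% and SD = 1.5%:
p = prob_in_spec(actual=97.0, accuracy=99.0, sd=1.5)
industrial_risk = 1.0 - p  # false-negative OOS risk
```

Under these assumptions the industrial risk comes out near 25%, illustrating how a modest bias combined with ordinary precision can reject in-spec batches surprisingly often.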


If ACTUAL is below 95.0% or above 105.0%, the batch is truly OOS. In that case, P represents the probability of generating a false-positive result, which is the Patient-related risk and can lead to release of a batch of genuinely poor quality. Patient-related risk is illustrated in Figure 2, which depicts the estimates obtained for a batch with an ACTUAL content of 94.0%, assayed by the same method (Accuracy = 99.0%, RSD = 1.5%).


Extrapolating the values of P and (1-P) over a large range of ACTUAL content (90.0-110.0%, covering batches that are either within specifications or OOS) generates the recapitulative graph shown in Figure 3.
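A recapitulative curve of this kind is just Eq 2 swept over the ACTUAL range. A sketch under the same assumptions as before (multiplicative recovery, absolute SD), using the illustrative Accuracy = 99.0% / SD = 1.5% pair:

```python
from statistics import NormalDist

LOWER, UPPER = 95.0, 105.0   # specification limits (%)
ACCURACY, SD = 99.0, 1.5     # illustrative method performance

def prob_in_spec(actual: float) -> float:
    mean = ACCURACY / 100.0 * actual
    d = NormalDist(mu=mean, sigma=SD)
    return d.cdf(UPPER) - d.cdf(LOWER)

# For each ACTUAL content, report the relevant risk: the industrial risk
# (1 - P) when the batch is truly in spec, the patient-related risk (P)
# when it is truly OOS.
curve = []
for tenths in range(900, 1101):        # 90.0% to 110.0% in 0.1% steps
    actual = tenths / 10.0
    p = prob_in_spec(actual)
    in_spec = LOWER <= actual <= UPPER
    curve.append((actual, (1.0 - p) if in_spec else p))
```

Plotting `curve` (actual content on the x-axis, risk on the y-axis) reproduces the general shape of a recapitulative graph.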


Such a recapitulative graph summarizes the method performance relative to its intended use, and predicts whether the method is discriminant; in other words, whether it can serve as an efficient decision tool for batch release. Table 1 describes the methods used as sources for the recapitulative graphs.



The overall shape of a recapitulative graph (as shown in Figure 3) depends entirely on method performance.

For the “ideal method” (Method 0, Table 1), every result fully agrees with the actual content of the batch: (1-P) = 0 when the batch is within specifications, and P = 0 when the batch is OOS. This is illustrated in Figure 4.


Unfortunately, such an ideal method does not exist, and several sources of bias are inherent in real-world methods. A method developer's role is to reduce these sources of variability, but they cannot all be reduced to zero.

Among these sources of variability, experience points to some classical parameters that can influence analytical results (a non-exhaustive list):

  • System precision: Equipment is qualified regularly, but injection volume, flow rate, detection, peak separation and integration may vary from one injection to another.
  • Sample preparation: Even when a sample is representative, its weight, dilution and extraction will vary from one preparation to another. Results may also vary with the time elapsed between sample preparation and injection, due to limited solution stability. If a derivatization step is required, this variability may be even greater.
  • Day-to-day variation: Analyst, instrument, reference standard and reagents may vary. Influence of external parameters (temperature, humidity, light) may also lead to variability. The batch itself (or a sample of the batch) may not be as homogeneous as expected.

These sources of potential bias impair both Accuracy and Precision, and therefore overall method performance. Figures 5–9, the recapitulative graphs for Methods 1 to 5 (as defined in Table 1), illustrate the impact of method performance on data reliability and on the batch release decision.







Figure 5 demonstrates the significant impact a method with poor precision has on data reliability. Both the Industrial (1-P) and Patient-related (P) risks are substantial, regardless of actual content. Roughly the same risk attaches to accepting a batch with an actual content of 109.0% (P close to 10%) as to rejecting a perfect batch with an actual content of 100.0% (1-P also close to 10%).

In Figure 6, the shape is similar to the ideal curve (Figure 4), showing that Method 2 provides reliable results.

Comparing the graphs in Figures 7 and 8 highlights the influence of method accuracy on the batch release decision. A systematic bias (Accuracy far below 99.0%) can lead to acceptance of batches with an actual content above the upper specification limit. If the product has a narrow therapeutic range (and could therefore be toxic above the specifications), such a method may not be suitable for its intended use.

Lastly, Figure 9 corresponds to a method at the limit of the classical validation acceptance criteria widely used in the pharmaceutical industry: RSD at or below 2.0% and Accuracy between 98.0% and 102.0%. Again, both the Industrial (1-P) and Patient-related (P) risks are substantial, regardless of actual content. Roughly the same risk attaches to accepting a batch with an actual content of 110.0% (P about 10%) as to rejecting an almost-perfect batch with an actual content of 99.0% (1-P about 10%).
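The order of magnitude of these Method 5 risks can be checked numerically. A sketch, assuming the worst-case Accuracy of 98.0% and an SD of 2.0% of the mean result; both are our assumptions, since the exact Table 1 entries are only paraphrased above:

```python
from statistics import NormalDist

LOWER, UPPER = 95.0, 105.0  # specification limits (%)

def prob_in_spec(actual: float, accuracy: float, rsd: float) -> float:
    """Eq 2 with the SD derived from the RSD, taken relative to the mean."""
    mean = accuracy / 100.0 * actual
    sd = rsd / 100.0 * mean
    d = NormalDist(mu=mean, sigma=sd)
    return d.cdf(UPPER) - d.cdf(LOWER)

# Patient-related risk of accepting a truly OOS batch at 110.0%:
patient_risk = prob_in_spec(110.0, accuracy=98.0, rsd=2.0)

# Industrial risk of rejecting an almost-perfect batch at 99.0%:
industrial_risk = 1.0 - prob_in_spec(99.0, accuracy=98.0, rsd=2.0)
```

Under these assumptions both risks land in the roughly 10-15% band, consistent with the "about 10%" levels read from Figure 9.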


A clear understanding of how method performance impacts data reliability is important for lab managers and Qualified Persons to make sound decisions concerning batch release. Cross-checking these data with those from the Manufacturing team (the expected range of active substance content, given manufacturing process variability) also helps companies improve the outcome of Annual Product Reviews, and understand and limit Patient-related risks without neglecting the Industrial risk.


  1. G. de Fontenay et al., “Analysis of the performance of analytical method: risk analysis for a routine use,” STP Pharma Pratique, 21(2), 123-132 (2011).
  2. Small Molecules Collaborative Group and USPC Staff, “Performance Based Monographs,” Pharm. Forum, 35(3), 765-771 (2009).