

Internet forums and message boards frequently debate whether method validation is necessary in various scenarios and contexts. Validation of analytical methods for the production of pharmaceutical agents answers regulatory requirements expressed in GMP rules (21 CFR 211.165(e) in the US and EudraLex Vol. 4 in the EU) and in other standards (OECD GLP, ISO 17025, ISO 12787 for cosmetics, etc.). These rules should answer, scientifically, a key question posed by most people in pharmaceutical production: are the data I generate reliable?

It is indeed surprising that people spend large amounts of time and resources generating analytical data despite being unclear about the accuracy or reproducibility of the process. The reasons for avoiding method validation in these cases are unclear. A possible motive is to save costs up front, but the end result is a considerable amount of wasteful expenditure. How can one optimize a process when the results available for analysis are unreliable? Even the intended use and specific goals of a method may get lost without method validation. Without answers to these basic questions, generating any data at all is of little value. 

In the pharmaceutical industry, the aim of an analytical method is, in most cases, to generate data that reflect the actual content of a batch, or at least come as close to that content as possible. Samples should be representative of the batch, and the results of sample analysis should be very similar to the actual content of the sample. Recently, Qualified Persons in Europe presented survey data from a study demonstrating the importance of this step (oral presentation by Pierre Poitou at the SFSTP 44th International Congress, 6-7 June 2012, Montpellier). We will therefore leave sampling aside and focus on method validation.

The goal of method validation is to provide proof that the data from each individual sample are indicative of the actual batch content, with little deviation. The purpose of this proof is not only to satisfy requirements from regulatory authorities, but to assure the method user that the sample is tested accurately. Furthermore, method validation enables informed decisions to be made from the results obtained.

The level of acceptable uncertainty regarding sample data changes during the development of a product. When development is completed, the uncertainty requirements are tightly linked to the specifications set; these specifications ensure product efficacy and patient safety.

During the early steps of development, validation is not strictly required from a regulatory point of view, but familiarity with the reliability of sample data is highly recommended. Method qualification or verification procedures should be performed during these important early steps in order to avoid making poor development decisions based on inaccurate data.

Official requirements for method validation are clearly defined in the ICH Q2 (R1) guideline. These requirements define a minimal number of method qualification/verification experiments per characteristic to be validated. For earlier steps, these requirements are not mandatory. However, generating low-quality data can lead to poor development decision making. Therefore, method developers should provide users with verification results and ensure that users have a clear understanding of method limits.

ICH Q2(R1)-Based Definitions of Characteristics to be Validated


Accuracy

ICH guidelines define accuracy as the closeness of agreement between the accepted value and the average value found. The level of accuracy represents the systematic bias observed with a particular method, a concept that the ISO guidelines refer to as trueness.


Precision

Precision represents the random error of an analytical method. There are three specific types of precision that are useful in method validation:

  • Repeatability: intra-run variability from one sample to another; assumes a homogeneous set of samples
  • Intermediate Precision: inter-run variability within the same laboratory, introduced by changes such as analyst, equipment, or day
  • Reproducibility: inter-laboratory variability; not always useful to determine. The results of method transfers can provide information about this characteristic, which is not part of a marketing authorization dossier.

Both systematic and random biases have an impact on data reliability, as well as on the global closeness of agreement between each individual calculated result and actual content of the batch. These two parameters (accuracy and precision) are generally sufficient to determine the real level of performance of a method.


Specificity

Specificity is the ability to assess the analyte unequivocally, regardless of other components that may be present, such as impurities, degradants and matrix materials. Any lack of specificity may generate a systematic bias and lead to overestimation of the content of the compound of interest.

Detection Limit and Quantitation Limit

These limits, which are defined based on each experiment, are never “true values”, but can only be approximate estimations. Indeed, the calculated values are biased—like all results obtained during experiments—and may vary from one matrix to another, from one instrument to another, from one day to another, and from one laboratory to another. The intended use is essential when considering detection and quantitation limits: At the reporting and specification limits, the data generated should be reliable according to the intended use of the method.
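The ICH Q2(R1) estimates based on a calibration curve, DL = 3.3σ/S and QL = 10σ/S (where σ is the residual standard deviation of the calibration line and S its slope), can be sketched as follows; the calibration data here are hypothetical:

```python
import math

def fit_line(x, y):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def ich_limits(x, y):
    """Detection and quantitation limits per ICH Q2(R1):
    DL = 3.3*sigma/S, QL = 10*sigma/S, taking sigma as the
    residual standard deviation of the calibration line."""
    slope, intercept = fit_line(x, y)
    residuals = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    sigma = math.sqrt(sum(r * r for r in residuals) / (len(x) - 2))
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical calibration data: concentration (ug/mL) vs detector response
conc = [0.5, 1.0, 2.0, 4.0, 8.0]
resp = [10.2, 19.8, 40.5, 79.9, 160.3]
dl, ql = ich_limits(conc, resp)
```

As the text notes, these values are estimates, not "true values": a different matrix, instrument, or day would yield different residuals and therefore different limits.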

Linearity / Range

The linearity of an analytical procedure is its ability, within a given range, to obtain test results that are directly proportional to the concentration (amount) of analyte in the sample. The accuracy results over the range can serve as proof of this method linearity. Linearity of the method is thus distinct from linearity of the response, the latter being defined as a response proportional to the concentration of analyte in the sample. While linearity of the method is required, linearity of the response is not mandatory.

Notably, when the response is linear, the method is also linear. However, when a large range of analyte concentrations is being investigated, linearity of the response is not always observed, and mathematical transformations may be required to ensure method linearity. The specific range will depend on the intended use, and developers should take into account the range of expected results, both within and outside the specifications, as well as the amount of variability accepted for sample weight (+/- 10% of theoretical weight).


Robustness

The level of robustness indicates the capacity of the method to remain unaffected by small, deliberate variations in method parameters. Robustness studies provide an indication of the method’s reliability during normal usage. Although robustness is one of the last characteristics described in the ICH Q2(R1) guideline, it should be studied before the other parameters. If variation in a method parameter does affect the reliability of the results, this parameter should be strictly fixed and documented in the method description, prior to the validation.

Solution stability can be studied either independently or during a robustness study. However, details about sample stability should be gathered as soon as possible.

It is important to note that, during inspections, authorities now require clear evidence of the robustness of analytical methods. It is therefore worth revisiting validation data in order to ensure that clear data regarding robustness are available.

System Suitability Test (SST)

When a method has been validated and is mastered, weak points of the method are clearly revealed. System suitability tests verify that no problems related to a method’s weaknesses will occur, and this should be confirmed each time the experiment is prepared and run. Beyond verifying the weight and stability of a reference substance, as well as consistent instrument performance, one should consider evaluating the effect of reagents (blank analysis); the instrument’s sensitivity (response at reporting level, signal-to-noise ratio, etc.); and column efficiency (theoretical plate, retention time, peak shape, etc.).
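As a minimal sketch, the column-efficiency and sensitivity checks above might be computed as follows. The plate count uses the half-height method of the pharmacopoeias; the retention time, peak width, noise value, and acceptance thresholds are hypothetical:

```python
def theoretical_plates(t_r, w_half):
    """Column efficiency by the half-height method: N = 5.54 * (tR / w_0.5)^2."""
    return 5.54 * (t_r / w_half) ** 2

def signal_to_noise(peak_height, noise_peak_to_peak):
    """Signal-to-noise ratio per the pharmacopoeial convention S/N = 2H / h."""
    return 2.0 * peak_height / noise_peak_to_peak

# Hypothetical SST acceptance check for a chromatographic run
n_plates = theoretical_plates(t_r=6.2, w_half=0.15)       # both in minutes
sn = signal_to_noise(peak_height=120.0, noise_peak_to_peak=4.0)
sst_pass = (n_plates >= 2000) and (sn >= 10)
```

In routine use, such a check would run on each sequence before sample results are reported, so that a degraded column or an insensitive detector is caught before any batch decision is made.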

ICH Q2(R1)-Defined Minimum Requirements

ICH guidelines provide some details about the minimum number of experiments that need to be performed and described.


Linearity

When the response is linear, at least 5 samples of differing concentrations that cover the entire range should be studied. If the response is not linear, the calculated results should be compared to samples with known concentrations in order to ensure direct proportionality between the calculated and actual content of the analyte in the sample. If the response is linear, this proportionality will be clear, which is why linearity of the response is often described in validation reports instead of linearity of the method.
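A minimal sketch of such a five-level linearity-of-response check, using hypothetical data spanning 80-120% of a target concentration, fitted by ordinary least squares:

```python
def linearity_stats(conc, resp):
    """Least-squares regression of response on concentration; returns
    (slope, intercept, r_squared)."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(resp) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in resp)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy * sxy / (sxx * syy)
    return slope, intercept, r2

# Five hypothetical levels covering 80-120% of the target concentration
levels = [80.0, 90.0, 100.0, 110.0, 120.0]
responses = [0.801, 0.898, 1.003, 1.099, 1.202]
slope, intercept, r2 = linearity_stats(levels, responses)
```

A high r² and an intercept close to zero support linearity of the response over this range; acceptance criteria for both would be set in the validation protocol.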


Accuracy

At least three replicates at three concentration levels (covering the entire range) are required to determine accuracy. Individual recovery values (calculated content/actual content), as well as average recovery values, should fall within preset acceptance criteria. Acceptance limits are sometimes defined as the upper and lower limits of the 95% confidence interval.


Precision

Replicate analyses of samples from a homogeneous batch are required to accurately assess precision. The aim of these studies is to evaluate the overall variability of the method above and below the average value. This variability should be as small as possible, and efforts to reduce it are required during the development step. Proceeding to the validation step is of little value until an acceptable variability level is reached. Pre-validation or verification steps are therefore highly recommended.


Repeatability

The goal of repeatability studies is to determine the variability that one analyst using one piece of equipment can expect while weighing and preparing the sample.

Repeatability is assessed by calculating the relative standard deviation (RSD%) from accuracy results on spiked samples (three replicates at three concentration levels). Such a design is recommended for methods that will be applied across a broad range of concentrations.
When conducting an assay for which results are expected to fall within a very narrow range, considering the second ICH option is recommended: testing six replicates of a 100% sample. Working directly on a “real sample” is sometimes recommended, as 100% spiked samples do not always accurately represent the difficulty of extracting the compound(s) of interest (except for true solutions). It is important to note that the six replicates are six different determinations from the same homogeneous batch, and not six replicate injections of the same sample solution.
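A sketch of the six-replicate repeatability calculation, with hypothetical assay results from six independent determinations on one homogeneous batch (the 2.0% acceptance threshold is illustrative, not an ICH requirement):

```python
import statistics

# Six independent sample preparations from the same homogeneous batch
# (hypothetical assay results, mg/g) -- not six injections of one solution
results = [49.8, 50.2, 50.1, 49.6, 50.4, 49.9]

mean = statistics.mean(results)
rsd_percent = statistics.stdev(results) / mean * 100.0

# Illustrative acceptance criterion for an assay method
repeatable = rsd_percent <= 2.0
```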

The relative amount of variability is generally proportional to the extent of sample manipulation. For example, far less variability may be expected from a single dilution than from a liquid/liquid extraction followed by a derivatization step.

Intermediate Precision

To determine the intermediate precision of a method, new sources of variability are studied. Several repeatability studies are performed while varying the preparation of reagents and reference substances, as well as the analyst, equipment, and time. Ensuring inter-season stability of the method is unfortunately not possible during the validation exercise; the annual product review is a good time to assess this often-neglected parameter. Indeed, seasonal variation in intermediate precision can strongly impact stability results and the feasibility of transferring the method to countries with very different environments, including weather conditions.

ICH guidelines do not provide details about the number of experiments that should be performed for intermediate precision; however, a design of experiments (DOE) is encouraged. Although it may seem obvious from a statistical point of view, it is important to note that the inter-run RSD can be estimated only if at least three independent runs are performed. Too often, validation efforts are stopped after only two separate runs of six determinations each. In these cases, the global RSD observed cannot be clearly separated into intra- and inter-run RSDs, yielding only partial knowledge about the method, even though such a validation design is fully accepted by most regulatory bodies.
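The intra-/inter-run split can be sketched with a one-way analysis of variance over at least three runs; the data below are hypothetical, and a balanced design (equal runs, equal determinations per run) is assumed:

```python
import statistics

def variance_components(runs):
    """One-way ANOVA split of variance into an intra-run (repeatability)
    component and an inter-run component; runs is a list of lists of
    results from a balanced design. Returns (RSD_repeatability %,
    RSD_intermediate_precision %)."""
    n = len(runs[0])                                   # determinations per run
    grand = statistics.mean(r for run in runs for r in run)
    ms_within = statistics.mean(statistics.variance(run) for run in runs)
    run_means = [statistics.mean(run) for run in runs]
    ms_between = n * statistics.variance(run_means)
    s2_between = max((ms_between - ms_within) / n, 0.0)  # inter-run variance
    s2_ip = ms_within + s2_between                       # intermediate precision
    rsd_r = (ms_within ** 0.5) / grand * 100.0
    rsd_ip = (s2_ip ** 0.5) / grand * 100.0
    return rsd_r, rsd_ip

# Three hypothetical runs of six determinations each (mg/g)
runs = [
    [49.8, 50.2, 50.1, 49.6, 50.4, 49.9],
    [50.5, 50.9, 50.6, 50.3, 51.0, 50.7],
    [49.5, 49.9, 49.7, 49.3, 50.1, 49.6],
]
rsd_r, rsd_ip = variance_components(runs)
```

With only two runs, the inter-run variance estimate rests on a single degree of freedom, which is why at least three independent runs are needed before the split is meaningful.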

From Validation to Routine

The aim of analytical methods is to perform routine testing on samples with unknown content. Estimations calculated during this testing should allow the Qualified Person to make informed decisions with respect to batch release. This decision step requires extensive, accurate knowledge of methods, including their limits and overall performance, which is one goal of the validation step.

It is important to emphasize that validation of a particular method is only a snapshot of its real performance within a defined environment. This environment is generally where the method was developed and, therefore, where method knowledge already exists. In the event a parameter changes, the validated status of the method has to be verified. Changes can occur in the sample (such as differences in matrix, concentration), reagents (such as the reference substance, including its stability), equipment, analyst, and laboratory, to mention only a few.

If a change is small enough, its effect can be considered negligible and the required verification step may be waived, provided that records demonstrating a proven track record are available should regulatory authorities request them. Alternatively, if a change to one parameter has a significant impact, the reliability of the results should be evaluated through comparative testing or a partial revalidation. When several parameters are changed simultaneously, revalidation of the entire method should be considered.

Similar to the annual performance qualifications of a piece of equipment, the revalidation exercise can serve as a useful tool to maintain process integrity and ensure that decisions for batch release are correct. Revalidation exercises take into account all changes that may occur, including the performance of new equipment, and ensure that all regulatory needs, expressed in 2005 in the R1 revision of ICH Q2 guidelines, are completely fulfilled.

Gerald de Fontenay
Global Project Manager - Laboratory Services
SGS Life Science Services