## Basic Framework

The hierarchy in population analyses is: population (fixed effects), individual (random effects), and each observation (residual error). A complete population PK/PD model usually consists of four structural models and three statistical (error) models. The four structural models are: (1) the PK model, (2) the disease progression model, (3) the PD model, and (4) the covariate (or prognostic factor) model. The parameters of these models are called "fixed effects." Examples of fixed effects include the typical value of systemic clearance in a 70-kg person and the mean potency of the drug. The three statistical models are: (1) the inter-individual variability (IIV) model, (2) the inter-occasion variability (IOV) model, and (3) the residual error model. The parameters of the IIV/IOV models are called "random effects." The random effects models assume that the inter-individual errors (η) are distributed with a mean of zero and a variance of ω². The residual error model assumes that the measurement (and model mis-specification) errors (ε) are distributed with a mean of zero and a variance of σ². Nonlinear "mixed" effects models deal with both fixed and random effects simultaneously, hence the name.

The framework of mixed effects models is illustrated in Fig. 1. Consider a one-compartment model in which the drug is given as an intravenous bolus. Let us also assume that the volume of distribution (V) is identical in every individual (no inter-individual variability). The concentration in the "ith" subject at the "jth" time point can be described using the following equations:

Cij = (Dose/V) · exp(−(CLi/V) · tij) + εij

CLi = CLPOP + ηCL,i

where CLi is the estimated clearance of the "ith" subject, CLPOP is the estimated population mean clearance, ηCL,i is the difference between the population and individual clearances, and εij is the residual error of the "jth" sample of the "ith" subject. The ηCL values are assumed to follow a normal distribution with a mean of zero and variance ω²CL. The εij values are assumed to follow a normal distribution with a mean of zero and variance σ².
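The mixed effects structure described above can be simulated directly. The sketch below uses plain NumPy rather than a pharmacometric package, and the parameter values (dose, CLPOP, V, ω, σ, sampling times) are hypothetical illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population values, for illustration only.
CL_POP = 5.0   # population mean clearance (L/h)
V = 50.0       # volume of distribution (L), assumed identical in all subjects
OMEGA = 1.0    # SD of eta_CL, i.e. variance omega^2 = 1.0
SIGMA = 0.2    # SD of residual error epsilon
DOSE = 100.0   # intravenous bolus dose (mg)

def simulate_subject(times, rng):
    """Simulate observed concentrations for one subject."""
    eta = rng.normal(0.0, OMEGA)                  # inter-individual random effect
    cl_i = CL_POP + eta                           # individual clearance CLi = CLPOP + eta
    pred = (DOSE / V) * np.exp(-cl_i * times / V) # individual prediction (broken line in Fig. 1)
    eps = rng.normal(0.0, SIGMA, size=times.shape)  # residual error eps_ij
    return pred + eps                             # observed concentrations (solid circles)

times = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
obs = simulate_subject(times, rng)
```

Each call draws one ηCL for the subject and one εij per sample, so repeated calls reproduce both levels of the hierarchy.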

FIGURE 1 The basic framework of nonlinear mixed effects modeling. Consider the "jth" observation in the "ith" subject. The difference between the observed concentration (solid circle) and the individual predicted concentration (broken line) is due to the fact that the "ith" individual's clearance (CLi) differs from the population clearance (CLPOP) by a value of ηCL,i. An additional source of variability is the residual error (εij), which is primarily due to model mis-specification and measurement error. The ηCL values follow a normal distribution with a mean of zero and variance ω²CL. The εij values follow a normal distribution with a mean of zero and variance σ². In the present example, the NM model would estimate the parameters CLPOP, ω²CL, and σ².


Of the several population analysis techniques, the most popular are: (1) naïve pooled analysis, (2) two-stage (TS) analysis, and (3) nonlinear mixed effects (NM) analysis. Naïve pooled analysis is performed by pooling the data from all subjects (as if all the data came from a single "giant" subject). A minor variation of this method involves analysis of the mean data. Both methods provide only the central tendency of the model parameters; no random effects are estimated. These methods are applied more routinely when dealing with preclinical data. Naïve pooled analysis is appealing because of its simplicity: no sophisticated software is required. However, because the random effects cannot be estimated and inter-individual variability cannot be accounted for using covariates (such as body size, age, etc.), the potential of naïve pooled data modeling is very limited.
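A naïve pooled fit can be sketched with ordinary nonlinear least squares: all subjects' observations are stacked into one data set and a single set of parameters is fitted. The simulation settings below (dose, parameter values, sampling times) are illustrative assumptions, not values from the text:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
DOSE, CL_POP, V_TRUE, OMEGA, SIGMA = 100.0, 5.0, 50.0, 1.0, 0.2
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])

# Simulate 30 subjects, each with an individual clearance, then pool everything.
all_t, all_c = [], []
for _ in range(30):
    cl_i = CL_POP + rng.normal(0.0, OMEGA)
    conc = (DOSE / V_TRUE) * np.exp(-cl_i * times / V_TRUE) \
           + rng.normal(0.0, SIGMA, times.size)
    all_t.append(times)
    all_c.append(conc)
t_pool = np.concatenate(all_t)
c_pool = np.concatenate(all_c)

def model(t, cl, v):
    """One-compartment IV bolus model fitted to the pooled 'giant subject'."""
    return (DOSE / v) * np.exp(-cl * t / v)

(cl_hat, v_hat), _ = curve_fit(model, t_pool, c_pool, p0=[3.0, 30.0])
```

Note that the fit returns only central-tendency estimates (cl_hat, v_hat); the inter-individual spread that was simulated into the data is simply absorbed into the residuals, illustrating why no random effects can be reported.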

The TS method is a reasonably powerful method for estimating both the central tendency and the inter-individual variability. The first stage involves estimation of the individual parameters, and the second stage involves estimation of the population mean and variance of the parameters, after adjusting for covariates if necessary. The TS method requires that a sufficient number of samples (greater than the number of model parameters) be collected per subject, as is typically the case with experimental data. The method assumes that the individual parameters estimated in stage one are the true values for the calculations in stage two. By and large, this is a relatively minor concern. The more serious drawbacks involve modeling sparse data from observational studies and modeling concentration (or dose)-dependent nonlinear processes. Consider a drug whose elimination follows Michaelis-Menten kinetics. The data from the lower doses (or the higher doses) alone may not carry enough information to estimate both the maximal velocity (Vmax) and the concentration at half-maximal velocity (Km). The same argument applies when estimating the parameters of an Emax model.
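The two stages can be sketched as follows for the running one-compartment example: fit each subject individually (six samples comfortably exceed the two model parameters), then take the mean and variance of the individual estimates. All simulation settings are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
DOSE, CL_POP, V_TRUE, OMEGA, SIGMA = 100.0, 5.0, 50.0, 1.0, 0.2
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])

def model(t, cl, v):
    """One-compartment IV bolus model."""
    return (DOSE / v) * np.exp(-cl * t / v)

# Stage 1: fit each subject separately to obtain individual CL estimates.
cl_hats = []
for _ in range(40):
    cl_i = CL_POP + rng.normal(0.0, OMEGA)
    conc = model(times, cl_i, V_TRUE) + rng.normal(0.0, SIGMA, times.size)
    (cl_hat_i, v_hat_i), _ = curve_fit(model, times, conc, p0=[3.0, 30.0])
    cl_hats.append(cl_hat_i)

# Stage 2: population mean and variance of the individual estimates.
cl_mean = np.mean(cl_hats)
cl_var = np.var(cl_hats, ddof=1)  # tends to overestimate omega^2, since it
                                  # also contains stage-1 estimation error
```

The comment on cl_var points at the method's key assumption: the stage-1 estimates are treated as true values, so their estimation error inflates the apparent inter-individual variance.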

Nonlinear mixed effects modeling is probably the most powerful technique for analyzing experimental and observational data, for several reasons. Chief among them, the NM method does not share the drawbacks of the other methods discussed above. Both stages of the TS method are performed in one step, so the NM technique is also known as the one-stage method. One of the chief advantages of the NM method is its ability to support meta-analysis, which is valuable in summarizing data across a drug development program. The primary disadvantage of this method is the requirement for sophisticated software that is not necessarily user-friendly enough for wider application. Special training is usually required to use the software packages, and the learning resources are limited.
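To illustrate the one-stage idea without specialized software, here is a sketch of the classical first-order (FO) approximation for the example model: the model is linearized around η = 0, so each subject's observation vector is approximately multivariate normal with covariance ω²·g·gᵀ + σ²·I, and CLPOP, ω², and σ² are estimated together by minimizing the approximate −2 log-likelihood. All parameter values and the design are illustrative assumptions, and real tools (e.g., NONMEM) use more refined variants of this idea:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
DOSE, V = 100.0, 50.0                     # dose (mg) and a known, fixed volume (L)
CL_POP, OMEGA2, SIGMA2 = 5.0, 1.0, 0.04   # true values used only for simulation
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])

# Simulate 100 subjects under the mixed effects model.
data = []
for _ in range(100):
    cl_i = CL_POP + rng.normal(0.0, np.sqrt(OMEGA2))
    y = (DOSE / V) * np.exp(-cl_i * times / V) \
        + rng.normal(0.0, np.sqrt(SIGMA2), times.size)
    data.append(y)

def neg2loglik(params):
    """FO-approximate -2 log-likelihood (up to a constant), linearized in eta."""
    cl_pop, omega2, sigma2 = params
    if omega2 <= 0.0 or sigma2 <= 0.0:
        return 1e12                        # keep variances positive
    f0 = (DOSE / V) * np.exp(-cl_pop * times / V)              # prediction at eta = 0
    g = -(DOSE * times / V**2) * np.exp(-cl_pop * times / V)   # df/dCL at CLPOP
    Vi = omega2 * np.outer(g, g) + sigma2 * np.eye(times.size) # marginal covariance
    _, logdet = np.linalg.slogdet(Vi)
    Vinv = np.linalg.inv(Vi)
    total = 0.0
    for y in data:                         # sum subject-level contributions
        r = y - f0
        total += logdet + r @ Vinv @ r
    return total

# One step: fixed effect and both variance components estimated simultaneously.
res = minimize(neg2loglik, x0=[3.0, 0.5, 0.1], method="Nelder-Mead")
cl_hat, omega2_hat, sigma2_hat = res.x
```

Unlike the TS sketch, ω² is estimated as part of the likelihood rather than from noisy stage-1 estimates, which is one reason the one-stage approach copes better with sparse data.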