## Optimization of Fed-batch Fermentation Processes using Genetic Algorithms based on Cascade Dynamic Neural Network Models

A combination of a cascade RNN model and a modified GA for optimizing a fed-batch bioreactor is investigated in this chapter. The complex nonlinear relationship between the manipulated feed rate and the biomass product is described by two recurrent neural sub-models. Based on the neural model, the modified GA is employed to determine a smooth optimal feed rate profile. The final biomass quantity yielded by the optimal feed rate profile based on the neural network model reaches 99.8% of the "real" optimal value obtained from a mechanistic model.

### 5.1 Introduction

Mechanistic models are conventionally used to develop optimal control strategies for bioprocesses [100, 101, 102, 103]. However, obtaining a mechanistic model for a bioprocess is time-consuming and costly. The major challenge is the complex and time-varying characteristics of such processes.

In Chapter 4, a softsensor based on an RNN was proposed for predicting biomass concentration from measurements of DO, feed rate and volume. In this chapter, we intend to model the fed-batch fermentation of *Saccharomyces cerevisiae* from the input (feed rate) to the output (biomass concentration) by cascading two softsensors.

An example of a recurrent dynamic neural network is illustrated in Figure 1.4 in Chapter 1. In this structure, besides the output feedback, the activation feedbacks are also incorporated into the network, and TDLs are used to handle the delays. A dynamic model is built by cascading two such extended RNNs for predicting biomass concentration. The aim of building the neural model is to predict biomass concentration based purely on the information of the feed rate. The model can then be used to maximize the final quantity of biomass at the end of the reaction time by manipulating the feed rate profiles.

This chapter is organized as follows: in Section 5.2, the mechanistic model of the industrial baker's yeast fed-batch bioreaction is given; in Section 5.3, the development of the cascade RNN model is presented; Section 5.4 shows the results of neural model prediction; the optimization of feed rate profile using the modified GA is described in Section 5.5; Section 5.6 summarizes this chapter.

L. Z. Chen et al.: Modelling and Optimization of Biotechnological Processes, Studies in Computational Intelligence (SCI) 15, 57-70 (2006)

www.springerlink.com © Springer-Verlag Berlin Heidelberg 2006

### 5.2 The Industrial Baker's Yeast Fed-batch Bioreactor

The mathematical model, which consists of six differential equations [17,92], was used to generate simulation data. The details of the model parameters and initial conditions are given in Appendix B. Three output variables, biomass, DO and volume were generated from a given feed rate by solving the six differential equations:

$$\frac{d(V C_c)}{dt} = Q_c \cdot V \cdot X + k_L a_c \cdot (C_c^* - C_c) \cdot V \tag{5.1}$$

where $C_s$, $C_o$, $C_c$, $C_e$ and $X$ denote the concentrations of glucose, dissolved oxygen, carbon dioxide, ethanol and biomass, respectively, and $V$ is the liquid volume of the fermentation; these six are the state variables. $F$ is the feed rate, which is the input of the system; $m$ is the glucose consumption rate for maintenance energy; $Q_{e,pr}$, $Q_o$, $Q_c$ and $Q_{e,ox}$ are the ethanol production rate, the oxygen consumption rate, the carbon dioxide production rate and the oxidative ethanol metabolism rate, respectively; $Y_{e/s}$ and $Y_{X_{ox}s}$ are yield coefficients; $k_La_o$ and $k_La_c$ are volumetric mass transfer coefficients; $S_0$ is the glucose concentration of the feed.

Five different feed rate profiles, which are shown in Figure 4.3 as given in Chapter 4, were chosen to generate training and testing data: (1) the square-wave feed flow, (2) the saw-wave feed flow, (3) the stair-shape feed flow, (4) the industrial feeding policy, (5) the random-steps feed flow.

### 5.3 Development of the Dynamic Neural Network Model

### Cascade dynamic neural network model

A dynamic neural network model is proposed in this study using a cascade structure, as shown in Figure 5.1. It contains two extended recurrent neural blocks which model the dynamics from the inputs, $F$ and $V$, to the key variable $C_o$ and the fermentation product $X$. The first block estimates the trend of $C_o$, which provides important information to the second neural block. The second neural block acts exactly as the softsensor developed in the authors' previous work [86] and described in Chapter 4, except that the estimated value of DO is used here as its input instead of the measured value. The softsensor model requires DO data measured on-line, whereas the cascade dynamic model proposed in Figure 5.1 needs only the feed rate data to predict the biomass concentration. Although the volume is another input to the model, it can be calculated simply by using Equation B.6 as shown in Appendix B.
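Since the feed rate is the only time-varying input, the volume can be reconstructed by integrating it. A minimal sketch, assuming Equation B.6 reduces to $dV/dt = F$ with no sampling or evaporation terms (the function name and fixed step size are illustrative, not from the original):

```python
def integrate_volume(feed_rates, v0, dt):
    """Trapezoidal integration of dV/dt = F.

    feed_rates: feed rate samples in L/h
    v0:         initial liquid volume in L
    dt:         sampling interval in hours
    Returns the liquid volume at every sample time.
    """
    volumes = [v0]
    v = v0
    for i in range(1, len(feed_rates)):
        v += 0.5 * (feed_rates[i - 1] + feed_rates[i]) * dt
        volumes.append(v)
    return volumes
```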

Fig. 5.1. Structure of the cascade dynamic neural network model (Block 1 estimates DO; Block 2 predicts biomass).

In each of the neural blocks, both feed-forward and feedback paths are connected through TDLs in order to capture the dynamic behavior. All connections may comprise multiple delayed paths. Sigmoid activation functions are used for the hidden layers and a pure linear function is used for the output layers. The structure of the neural blocks reflects the differential relationships between inputs and outputs given by Equations B.2 to B.6. A full mathematical description of the cascade model is given in the following equations. The output of the i-th neuron in the first hidden layer is of the form:

$$h_{1i}(t) = f_1\!\left(\sum_{j=0}^{n_a} W^{I}_{ij}\, u_1(t-j) + \sum_{k=1}^{n_b} W^{R}_{ik}\, C_o(t-k) + \sum_{l=1}^{n_c} W^{H1}_{il}\, h_{1i}(t-l) + b^{H1}_i\right)$$

where $u_1$ and $h_1$ are the vector values of the neural network input and the first hidden layer's output, respectively; $C_o$ is the second hidden layer output; $b^{H1}_i$ is the bias of the i-th neuron in the first hidden layer; $n_a$, $n_b$ and $n_c$ are the number of input delays, second-hidden-layer feedback delays and first-hidden-layer feedback delays, respectively; $f_1(\cdot)$ is a sigmoidal function; $W^{I}_{ij}$ are the weights connecting the j-th delayed input to the i-th neuron in the first hidden layer, $W^{R}_{ik}$ are the weights connecting the k-th delayed second-hidden-layer output feedback to the i-th neuron, and $W^{H1}_{il}$ are the weights connecting the l-th delayed activation feedback to the i-th neuron in the first hidden layer.
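The hidden-layer recurrence amounts to one weighted sum over all delayed signals followed by a sigmoid. A sketch in which the three delay buffers are simply concatenated into one regressor vector (the function name and the flattened weight layout are illustrative simplifications, not the book's notation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def first_hidden_layer_step(W, b, delayed_inputs, delayed_do, delayed_act):
    """One time step of the first hidden layer.

    delayed_inputs: u1(t-j) for j = 0..na, flattened
    delayed_do:     Co(t-k) for k = 1..nb
    delayed_act:    delayed hidden activations, flattened
    W has one row per hidden neuron; b is the bias vector.
    """
    phi = np.concatenate([delayed_inputs, delayed_do, delayed_act])
    return sigmoid(W @ phi + b)
```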

Note that one neuron is placed at the output of the second hidden layer, so that:

$$C_o(t) = f_2\!\left(\sum_{m=1}^{n_g} W_{m}\, h_{1m}(t) + b^{Y}\right)$$

where $f_2(\cdot)$ is a pure linear function; $W_m$ are the weights connecting the m-th neuron in the first hidden layer to the second hidden layer; $n_g$ is the number of neurons in the first hidden layer; $b^{Y}$ is the bias of the second hidden layer.

The second neural block has an additional input, $C_o$. Similar to the first block, the output of the i-th neuron in the third hidden layer can be described as:

$$h_{3i}(t) = f_1\!\left(\sum_{j=0}^{n_d} W^{P}_{ij}\, u_2(t-j) + \sum_{k=1}^{n_e} W^{O}_{ik}\, X(t-k) + \sum_{l=1}^{n_f} W^{H3}_{il}\, h_{3i}(t-l) + b^{H3}_i\right)$$

where $u_2$ and $h_3$ are the vector values of the input to the third hidden layer and the third hidden layer's output, respectively; $X$ is the model's output; $b^{H3}_i$ is the bias of the i-th neuron in the third hidden layer; $n_d$, $n_e$ and $n_f$ are the number of input delays to the third hidden layer, output-layer feedback delays and third-hidden-layer feedback delays, respectively; $f_1(\cdot)$ is the sigmoidal function; $W^{P}_{ij}$ are the weights connecting the j-th delayed input of the third hidden layer to the i-th neuron in the layer, $W^{O}_{ik}$ are the weights connecting the k-th delayed output feedback to the i-th neuron, and $W^{H3}_{il}$ are the weights connecting the l-th delayed activation feedback to the i-th neuron in the third hidden layer.

The model's output, the estimated biomass concentration, can be expressed as:

$$X(t) = f_2\!\left(\sum_{m=1}^{n_k} W^{X}_{m}\, h_{3m}(t) + b^{X}\right)$$

where $f_2(\cdot)$ is a pure linear function; $W^{X}_m$ are the weights connecting the m-th neuron in the third hidden layer to the output layer; $n_k$ is the number of neurons in the third hidden layer; $b^{X}$ is the bias of the output layer.
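Taken together, the four equations let the cascade predict biomass from the feed rate and volume alone. A compact simulation sketch with a hypothetical class name, randomly initialized weights standing in for trained ones, and the activation-feedback paths omitted for brevity (inputs are assumed pre-normalized):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class CascadeRNN:
    """Minimal two-block cascade: (F, V) -> estimated DO -> estimated biomass."""

    def __init__(self, n1=12, n3=10, na=6, nb=4, ne=4, seed=0):
        rng = np.random.default_rng(seed)
        self.na, self.nb, self.ne = na, nb, ne
        # Block 1: (na+1) lags of (F, V) plus nb lags of its own DO output
        self.W1 = rng.normal(0.0, 0.1, (n1, 2 * (na + 1) + nb))
        self.b1 = np.zeros(n1)
        self.Wc = rng.normal(0.0, 0.1, n1)      # hidden -> DO output neuron
        # Block 2: (na+1) lags of (F, V, Co) plus ne lags of the biomass output
        self.W3 = rng.normal(0.0, 0.1, (n3, 3 * (na + 1) + ne))
        self.b3 = np.zeros(n3)
        self.Wx = rng.normal(0.0, 0.1, n3)      # hidden -> biomass output neuron

    def simulate(self, F, V):
        """Run the cascade over whole (normalized) feed rate / volume series."""
        T = len(F)
        Co = np.zeros(T)
        X = np.zeros(T)
        for t in range(self.na, T):
            u1 = np.stack([F, V])[:, t - self.na:t + 1].ravel()
            h1 = sigmoid(self.W1 @ np.concatenate([u1, Co[t - self.nb:t]]) + self.b1)
            Co[t] = self.Wc @ h1                # linear output neuron (block 1)
            u2 = np.stack([F, V, Co])[:, t - self.na:t + 1].ravel()
            h3 = sigmoid(self.W3 @ np.concatenate([u2, X[t - self.ne:t]]) + self.b3)
            X[t] = self.Wx @ h3                 # linear output neuron (block 2)
        return Co, X
```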

### Neural network training

A schematic illustration of the neural network model training is shown in Figure 5.2. The output of the bioprocess is used only for training the network. The model predicts the process output using the same input as the process after training. No additional measurements are needed during the prediction phase.

The goal of network training is to minimize the MSE between the measured value and the neural network's output by adjusting its weights and biases. The LMBP training algorithm is adopted to train the neural networks due to its fast convergence and memory efficiency [34].

To prevent the neural network from being over-trained, an early stopping method is used here. A set of data which is different from the training data set (e.g., saw-wave) is used as a validation data set. The error on the validation data set is monitored during the training process. The validation error will normally decrease during the initial phase of training. However, when the network begins to over-fit the data, the error on the validation set typically begins to rise. When the validation error increases for a specified number of iterations, the training is stopped, and the weights and biases of the network at the minimum of the validation error are obtained.
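The early-stopping procedure above can be sketched as a generic training loop (the function names and the callback interface are illustrative, not the toolbox's API):

```python
import copy

def train_with_early_stopping(net, train_step, val_error,
                              patience=10, max_epochs=1000):
    """Stop when the validation error has not improved for `patience`
    consecutive epochs, and restore the weights at the validation minimum.

    train_step(net): performs one training epoch, updating `net` in place.
    val_error(net):  returns the current validation-set error.
    """
    best_err = float("inf")
    best_net = copy.deepcopy(net)
    stalled = 0
    for _ in range(max_epochs):
        train_step(net)
        err = val_error(net)
        if err < best_err:
            best_err, best_net, stalled = err, copy.deepcopy(net), 0
        else:
            stalled += 1
            if stalled >= patience:
                break                # validation error keeps rising: stop
    return best_net, best_err
```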

The rest of the data sets, which are not seen by the neural network during the training period, are used in examining the trained network. The performance function that is used for testing the neural networks is the RMSP error index [57], which is defined in Equation 4.5.
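Equation 4.5 is not reproduced here; one common form of a root-mean-square percentage index is sketched below, and the exact normalization used in the book may differ:

```python
import math

def rmsp(measured, predicted):
    """Root-mean-square percentage error (one common definition;
    Equation 4.5 may normalize differently, e.g. by the mean)."""
    n = len(measured)
    return 100.0 * math.sqrt(
        sum(((m - p) / m) ** 2 for m, p in zip(measured, predicted)) / n
    )
```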

A smaller error on the testing data set means the trained network has achieved better generalization. Two different training patterns, overall training and separated training, are studied. When the overall training is used, the whole network is trained together. When the separated training is used, block one and block two are trained separately. A number of networks with different numbers of hidden neuron delays are trained. For each network structure, 50 networks are trained; the one that produces the smallest RMSP error for the testing data sets is retained. The number of hidden neurons for the first hidden layer and the third hidden layer are 12 and 10 respectively. Errors for different training patterns and various combinations of input and feedback delays are shown in Figure 5.3. As shown in this figure, the 6/4/4 structure (the feed rate delays are six, the first block output delays and the second block output delays are four) has the smallest error and is chosen as the process model. The separated training method is more time-consuming but is not superior to the overall training. Thus, the overall training is chosen to train the network whenever new data is available.

Fig. 5.3. Biomass prediction error on testing data sets for neural models with different combinations of delays (overall training vs. separated training; combinations 2/2/2, 3/3/3, 4/4/4, 5/5/5, 6/4/4, 6/5/5, 6/6/6). '6/4/4' indicates that the number of feed rate delays is six; the number of the first block output feedback delays is four; and the number of the second block output feedback delays is four.

### 5.4 Biomass Predictions using the Neural Model

The biomass concentrations predicted by the neural network model and the corresponding feed rates and prediction errors are plotted in Figures 5.4 to 5.6. As shown in these figures, the prediction error is quite large during the initial period of fermentation and gradually decreases; overall it remains below 8%.

Fig. 5.4. Biomass prediction for the industrial feed rate profile.

### 5.5 Optimization of Feed Rate Profiles

Once the cascade recurrent neural model is built, it can be used to perform the task of feed rate profile optimization. The GA is used in this work to search for the best feed rate profiles.

GAs seek progressively better approximations to the solution of a problem as they run from generation to generation. The components and mechanisms of GAs are described in Chapters 1 and 2. A simple standard GA procedure is summarized in the following five steps: (i) create an initial population of random individuals; (ii) evaluate the fitness of the individuals using the objective function; (iii) select individuals according to their fitness, then perform crossover and mutation operations; (iv) generate a new population; (v) repeat steps (ii)-(iv) until the termination criterion is reached.
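The five steps can be sketched as a bare-bones real-coded GA. The operator choices below (tournament selection, one-point crossover, uniform mutation) are illustrative stand-ins; the GAOT toolbox used later offers its own operators:

```python
import random

def genetic_algorithm(fitness, n_genes, lo, hi, pop_size=50,
                      generations=100, p_mut=0.05):
    """Maximize `fitness` over lists of `n_genes` floats in [lo, hi]."""
    # (i) create an initial population of random individuals
    pop = [[random.uniform(lo, hi) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # (ii) evaluate fitness
        scores = [fitness(ind) for ind in pop]

        def tournament():
            a, b = random.randrange(pop_size), random.randrange(pop_size)
            return pop[a] if scores[a] > scores[b] else pop[b]

        # (iii) selection, crossover and mutation -> (iv) new population
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, n_genes) if n_genes > 1 else 0
            child = p1[:cut] + p2[cut:]
            child = [random.uniform(lo, hi) if random.random() < p_mut else g
                     for g in child]
            new_pop.append(child)
        pop = new_pop          # (v) loop until the generation limit is reached
    scores = [fitness(ind) for ind in pop]
    best = max(range(pop_size), key=lambda i: scores[i])
    return scores[best], pop[best]
```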

The feed flow rate, which is the input of the system described in Section 5.2, is equally discretized into 150 constant control actions. The total reaction time and the final volume are fixed at 15 hours and 90,000 liters, respectively. The control vector of the feed rate sequence is:

$$\mathbf{F} = [F_1, F_2, \ldots, F_{150}]$$

The optimization problem here is to maximize the amount of biomass at the end of the reaction. Thus, the objective function can be formulated as follows:

$$\max_{\mathbf{F}}\; J = X(t_f)\, V(t_f)$$

where $t_f$ is the final reaction time.

The optimization is subject to the constraints given below:

$$0 \le F_i \le F_{\max}, \quad i = 1, \ldots, 150, \qquad V(t_f) = 90{,}000 \ \mathrm{L}$$

In this study, optimization based on the mathematical model is first performed to find the best feed rate profile and the highest possible final biomass productivity that can be obtained. Then the optimization is performed again using the RNN model. The resulting optimal feed rate is applied to the mathematical model to find the corresponding system responses and the final biomass quantity. As mentioned above, the mathematical model is considered here as the actual "plant". Thus, the suitability of the proposed neural network model can be examined by comparing these two simulation results.

The optimal profile obtained using a standard GA fluctuates strongly. This makes the optimal feed rate profile less attractive for practical use, because it incurs extra control costs and may introduce unexpected disturbances into the bioprocess. In order to eliminate the strong variations in the optimal trajectory, the standard GA is modified. Instead of introducing new filter operators into the GA [80], a simple compensation method is integrated into the evaluation function. The control sequence $\mathbf{F}$ is amended inside the evaluation function to produce a smoother feed trajectory while the evolutionary property of the GA is maintained. This operation has no effect on the final volume. The method consists of four steps:

1. Calculate the distance between two neighboring control actions $F_i$ and $F_{i+1}$ using $d = |F_i - F_{i+1}|$, where $i \in \{1, 2, \ldots, 149\}$.

2. If $d$ is greater than a predefined value (e.g., 10 L/h), then move $F_i$ and $F_{i+1}$ each by $d/3$ towards the middle of $F_i$ and $F_{i+1}$ to make them closer.

3. Evaluate the performance index $J$ for the new control variables.

4. Repeat steps 1-3 for every individual in the population.
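The compensation step applied to one candidate profile can be sketched as a single left-to-right sweep (the function name is illustrative). Because both neighbors move by the same amount in opposite directions, their sum, and hence the final volume, is preserved:

```python
def smooth_feed_profile(F, d_max=10.0):
    """Where two neighboring control actions differ by more than d_max
    (L/h), move each a third of the gap towards their midpoint."""
    F = list(F)
    for i in range(len(F) - 1):
        d = abs(F[i] - F[i + 1])
        if d > d_max:
            step = d / 3.0
            if F[i] < F[i + 1]:
                F[i] += step
                F[i + 1] -= step
            else:
                F[i] -= step
                F[i + 1] += step
    return F
```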

The Matlab GAOT software is used to solve the problem. The population size was chosen as 150. The development of the optimal feed rate profiles based on the mechanistic model and the neural network model, from the initial trajectory to the final shape, is illustrated in Figures 5.7 and 5.8. As the number of generations increases, the feeding trajectory gradually becomes smoother and the performance index $J$ increases. The smoothing procedure works more efficiently for the mathematical model: it takes 2000 generations to obtain a smooth profile, while 2500 generations are needed to smooth the profile for the neural network model. This is due to the disturbance-rejection nature of the RNN: a small alteration in feed rate is treated as a perturbation, so the network is rather insensitive to it.

The optimization results using the modified GA are plotted in Figure 5.9. The results based on the mass balance equations (MBEs) are shown from (a) to (e). As a comparison, the results based on the cascade RNN model are shown from (f) to (j). The responses of the bioreactor to the optimal feed rate based on the neural model are also calculated using the mechanistic model. It can be seen that the two optimal trajectories are quite different. However, the final biomass quantity yielded by the optimal profile based on the neural model is 281,956 C-mol. This is 99.8% of the yield from the optimal profile based on the mathematical model. Furthermore, the trajectories of glucose, ethanol and DO are very similar for both optimal profiles. As shown in the figure, ethanol is first slowly formed and accumulated in order to keep the biomass production rate high. In the final stage of the fermentation, the residual glucose concentration is reduced to zero and the ethanol is consumed in order to bring the overall substrate conversion into biomass close to 100%.

[Figure panels: Generations=50, J=183625; Generations=200, J=238916; Generations=1000, J=263817; Generations=2000, J=282655.]

Fig. 5.7. Evolution of feed rate profile using the modified GA based on the mathematical model.

[Figure panels: Generations=100, J=190312; Generations=350, J=230933; Generations=1500, J=258658; Generations=2500, J=281956.]

Fig. 5.8. Evolution of feed rate profile using the modified GA based on the RNN model.

[Fig. 5.9. Optimization results using the modified GA: (a)-(e) based on the mass balance equations (MBEs); (f)-(j) based on the cascade RNN model.]
