Recurrent neural networks for state estimation

Several attempts have been made to estimate important states in batch and fed-batch bioreactors using RNNs. Early RNNs took one of two basic configurations, the Elman form or the Jordan form [37, 38]. The original purposes of these two networks were to control robots and to recognize speech. Later, due to their intrinsic dynamic nature, RNNs drew considerable attention in biochemical engineering research [7]. An application of an Elman RNN to fed-batch fermentation with recombinant Escherichia coli was reported by Patnaik [39]. The Elman RNN was employed to predict four state variables in the case of flow failure, and its performance was found to be superior to that of an FNN. Since both the Elman and Jordan networks are only locally recurrent in structure, they are rather limited in their ability to incorporate past information. A recurrent trainable neural network (RTNN) model was proposed to predict and control fed-batch fermentation of Bacillus thuringiensis [40]. This two-layer network has recurrent connections in the hidden layer, and the backpropagation algorithm was used to train it. The results showed that the RTNN was reliable in predicting fermentation kinetics provided that sufficient training data sets were available. In this research, RNNs with both activation feedback and output feedback connections are used for on-line biomass prediction in fed-batch baker's yeast fermentation.
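The following is a minimal sketch, not the cited authors' implementation, of a recurrent network combining activation (hidden-state) feedback with output feedback, as described above for on-line biomass prediction. Layer sizes, weight initialisation, and all variable names are illustrative assumptions.

```python
import numpy as np

class SimpleRecurrentNet:
    """Hypothetical RNN with Elman-style hidden feedback and Jordan-style output feedback."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # feed-forward weights: inputs -> hidden, hidden -> output
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))
        # activation feedback: previous hidden state -> hidden (Elman-style)
        self.W_hh = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        # output feedback: previous output -> hidden (Jordan-style)
        self.W_oh = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b_h = np.zeros(n_hidden)
        self.b_o = np.zeros(n_out)

    def forward(self, inputs):
        """inputs: (T, n_in) array of measured state variables at each
        sampling instant; returns (T, n_out) array of predictions."""
        h = np.zeros(self.W_hh.shape[0])   # hidden activations at t-1
        y = np.zeros(self.W_oh.shape[1])   # network output at t-1
        outputs = []
        for u in inputs:
            # hidden layer combines current inputs with both feedback paths
            h = np.tanh(self.W_in @ u + self.W_hh @ h + self.W_oh @ y + self.b_h)
            y = self.W_out @ h + self.b_o  # linear output (e.g. biomass estimate)
            outputs.append(y)
        return np.array(outputs)

# example: predict one output (biomass) from three on-line measurements
net = SimpleRecurrentNet(n_in=3, n_hidden=8, n_out=1)
measurements = np.random.rand(20, 3)       # 20 sampling instants (dummy data)
biomass_pred = net.forward(measurements)
```

Because the past is carried in the hidden state and the fed-back output, only the current measurements need to be presented to the network at each step.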

A moving window, feed-forward, backpropagation neural network was proposed to estimate the consumed sugar concentration [41]. Since the FNN is primarily suited to nonlinear static mapping, the dynamic nature of the fed-batch culture was captured by the moving window technique: data measured one hour earlier were used to predict the current state, with the oldest data discarded and the newest data fed in as the window advanced. In a more recent approach, an RNN was adopted to predict the biomass concentration in baker's yeast fed-batch fermentation processes [42]. In contrast to FNNs, the structure of an RNN includes both feed-forward and feedback connections. Because of the feedback connections, explicit use of the past outputs of the system is not necessary for prediction; the only inputs to the network are the current state variables. Thus, the moving window technique is not needed in this RNN approach for biomass concentration estimation.
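A minimal sketch of the moving window idea described above follows: a fixed-length window of past measurements is flattened into a static input vector for the FNN, and at each new sampling instant the oldest sample is dropped and the newest appended. The window length, sampling grid, and variable names are assumptions for illustration only.

```python
import numpy as np

def build_window_inputs(series, window):
    """series: (T, n_vars) array of measured variables over time.
    Returns X of shape (T - window, window * n_vars) and Y of shape
    (T - window, n_vars), where each row of X is the flattened previous
    `window` samples used to predict the current sample in Y."""
    X, Y = [], []
    for t in range(window, len(series)):
        X.append(series[t - window:t].ravel())  # past `window` measurements
        Y.append(series[t])                     # current state to predict
    return np.array(X), np.array(Y)

# example with dummy data: 2 measured variables, window of 4 samples
series = np.random.rand(50, 2)
X, Y = build_window_inputs(series, window=4)    # X.shape == (46, 8)
```

The contrast with the RNN approach is that here the history must be supplied explicitly as part of the input vector, whereas the recurrent network retains it internally through its feedback connections.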
