Turing machines are computational models that are particularly suited for studying deterministic offline computations on digital batch input in discrete time. At any time t a Turing machine's configuration (control state plus tape content) is finite, but it may grow unboundedly with t. Turing machines are universal in the sense that no one has been able to exhibit a digital computation that cannot be carried out by a Turing machine.
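The unbounded-but-always-finite memory use can be made concrete with a minimal sketch of a deterministic Turing machine simulator (my own illustrative code, not from the text): the tape is stored sparsely, so the used portion is finite at every step yet can grow without bound.

```python
# Minimal deterministic Turing machine simulator (illustrative sketch).
# The tape is a dict, so the written portion is finite at any step
# but can grow unboundedly as the computation proceeds.
def run_tm(rules, tape, state="q0", head=0, max_steps=10_000):
    """rules: (state, symbol) -> (new_state, new_symbol, move in {-1, 0, +1})."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(head, "_")               # "_" is the blank symbol
        state, tape[head], move = rules[(state, sym)]
        head += move
    lo, hi = min(tape), max(tape)
    return state, "".join(tape.get(i, "_") for i in range(lo, hi + 1))

# Example rules (hypothetical): scan right over the input, then append a "1".
rules = {
    ("q0", "1"): ("q0", "1", +1),
    ("q0", "_"): ("halt", "1", 0),
}
```

Running `run_tm(rules, "111")` halts with one more `1` written past the original input, illustrating how the tape extends beyond the initial batch input.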
Digital or analog feedforward circuits (e.g., feedforward Boolean circuits or feedforward neural nets) constitute another class of standard models for offline computing. They can also be used for real-time computing (in the form of pipelining) since their computation time on any (batch) input is limited by the depth of the circuit. However, they can only be used for those computations on sequences of inputs where no temporal integration of information from several successively presented batch inputs is required.
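A small sketch (data structures and names are my own choices) shows why a feedforward circuit's computation time is bounded by its depth: gates are evaluated once, in topological order, with no feedback, so each batch input is processed independently of all previous inputs.

```python
# Illustrative sketch: a feedforward Boolean circuit as a list of gates
# in topological order. No cycles means no state carries over between
# successive inputs, i.e., no temporal integration.
GATES = {"AND": lambda a, b: a & b,
         "OR":  lambda a, b: a | b,
         "XOR": lambda a, b: a ^ b}

def eval_circuit(gates, inputs):
    """gates: list of (output_wire, gate_name, in_wire_1, in_wire_2)."""
    wires = dict(inputs)
    for out, g, a, b in gates:          # one pass; steps bounded by depth
        wires[out] = GATES[g](wires[a], wires[b])
    return wires

# Half adder: sum = x XOR y, carry = x AND y (depth 1).
half_adder = [("s", "XOR", "x", "y"), ("c", "AND", "x", "y")]
```

Pipelining corresponds to feeding a new input assignment into such a circuit at every time step; since each evaluation depends only on its own input wires, the outputs for different inputs never interact.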
Finite automata are computational models with a fixed finite set of internal states that are suitable for online computations in discrete time; in fact, they are perfectly suited for real-time computing. Their current state can hold information about current and past inputs, and their state transitions can be deterministic or stochastic. Cellular automata are ensembles of infinitely many identical copies of some finite automaton located on the nodes of some infinite graph, where every node has the same finite number of neighbors (e.g., a two-dimensional grid). At every discrete time step, each automaton changes its state and determines the output to its direct neighbors depending on the inputs that it has received from its direct neighbors at the end of the preceding time step.2 The input to a cellular automaton is commonly encoded in the initial states of the individual finite automata. It is well known that every Turing machine (hence any currently existing digital computer) can be simulated by some cellular automaton.3
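The synchronous update rule can be sketched with the "Game of Life" mentioned in footnote 2. The code below is my own simplification: it uses a finite grid with toroidal wrap-around, whereas the model described in the text has infinitely many cells.

```python
# One synchronous update step of Conway's "Game of Life" on a finite
# toroidal grid (simplifying assumption; the text's model is infinite).
# Every cell updates simultaneously from its 8 neighbors' states at the
# end of the preceding time step.
def life_step(grid):
    """grid: list of lists of 0/1 cell states."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            n = sum(grid[(i + di) % h][(j + dj) % w]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            # A cell is born with exactly 3 live neighbors, and
            # survives with 2 or 3.
            nxt[i][j] = 1 if n == 3 or (grid[i][j] and n == 2) else 0
    return nxt
```

Note that the input is given entirely in the initial grid configuration, matching the convention described above for cellular automata.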
Artificial neural networks are also usually considered in discrete time, but with analog internal states (so that even a single internal state may contain infinitely many bits of information). Both deterministic and stochastic state transitions are considered. Feedforward neural nets are primarily used for
2 The well-known "Game of Life" is an example of such a cellular automaton.
3 To be precise, this holds only for a cellular automaton consisting of infinitely many cells. A cellular automaton consisting of just finitely many cells can only simulate those Turing machine computations that can be carried out within a corresponding space bound. Basic results from computational complexity theory imply that the computational power of Turing machines (and hence also of cellular automata) does not saturate at any finite space bound.
offline computing since they cannot integrate information from successive inputs; recurrent neural nets, however, are suitable for both offline and online computations. Although the learning capability of artificial neural networks is viewed as one of their essential properties, the organization of learning for neural networks is usually not controlled by the network itself, but by an external supervisor.4 From that perspective, neural network models are far from being autonomously learning systems.
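The contrast with feedforward nets can be illustrated by a minimal recurrent unit (toy weights of my own choosing): its analog state is updated from each new input *and* from its own previous state, so it integrates information across a sequence of inputs, which a feedforward net processing each input in isolation cannot do.

```python
import math

# Minimal sketch of a discrete-time recurrent neural unit with an
# analog state h. The weights w_in and w_rec are illustrative values.
def run_recurrent(xs, w_in=1.0, w_rec=0.5):
    h = 0.0
    for x in xs:                              # online: one input per time step
        h = math.tanh(w_in * x + w_rec * h)   # state depends on past inputs
    return h
```

Because the final state depends on the order in which inputs arrive, the same multiset of inputs presented in different orders yields different states, i.e., the unit performs temporal integration.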
Genetic (or evolutionary) algorithms are programs for computational models with stochastic state transitions in discrete time whose computational goal is the generation of formal objects ("agents") that have high "fitness" according to a fitness function that is part of the program.
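A hedged sketch of such a program is given below. The selection, crossover, and mutation scheme, the "one-max" fitness function (count of 1-bits), and all parameters are illustrative choices of mine; the point is only the structure: stochastic state transitions in discrete time (generations) that drive a population of agents toward high fitness.

```python
import random

# Illustrative genetic algorithm over bit-string "agents".
# fitness: agent -> number; the fitness function is part of the program.
def evolve(fitness, length=10, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # selection (keep fittest half)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(length)] ^= 1    # mutation: flip one bit
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# "One-max" fitness: an agent's fitness is its number of 1-bits.
best = evolve(sum)
```

Since the fittest parents are carried over unchanged, the best fitness in the population never decreases from one generation to the next.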