Elman networks also have context neurons, but one layer of context neurons per information-processing neuron layer. Thus, the output of each hidden or output neuron is directed into the associated context layer (again, exactly one context neuron per neuron), and from there it is fed back into the complete neuron layer at the next time step (i.e. again a complete link on the way back). So the entire information-processing part of the MLP exists a second time as a "context version", which once again considerably increases the dynamics and state variety. Compared with Jordan networks, Elman networks usually have the advantage of acting more purposefully, since every layer can access its own context.
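The per-layer feedback described above can be sketched as a forward pass in NumPy. This is a minimal illustration under stated assumptions: the layer sizes, weight names, and random initialization are hypothetical choices, not from the text; the point is only that the hidden layer and the output layer each read back their own context (their own previous activations).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes (hypothetical, not from the text)
n_in, n_hidden, n_out = 3, 5, 2

# Ordinary MLP weights
W_ih = rng.standard_normal((n_hidden, n_in)) * 0.1   # input  -> hidden
W_ho = rng.standard_normal((n_out, n_hidden)) * 0.1  # hidden -> output

# Recurrent weights: each processing layer is completely linked
# to its own context layer, which stores that layer's previous output.
W_ch = rng.standard_normal((n_hidden, n_hidden)) * 0.1  # hidden context -> hidden
W_co = rng.standard_normal((n_out, n_out)) * 0.1        # output context -> output

def step(x, ctx_h, ctx_o):
    """One time step: read the context layers, then overwrite them
    with the current activations (one context neuron per neuron)."""
    h = np.tanh(W_ih @ x + W_ch @ ctx_h)  # hidden layer sees its own context
    y = np.tanh(W_ho @ h + W_co @ ctx_o)  # output layer sees its own context
    return y, h.copy(), y.copy()          # new contexts = current activations

# Run a short input sequence; both context layers start at zero.
ctx_h = np.zeros(n_hidden)
ctx_o = np.zeros(n_out)
for t in range(4):
    x = rng.standard_normal(n_in)
    y, ctx_h, ctx_o = step(x, ctx_h, ctx_o)
```

A Jordan network, by contrast, would feed only the output layer's activations back (typically into the input side), so the hidden layer would have no context of its own.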