The basic idea of turbo codes is to use two convolutional encoders in parallel, with an interleaver permuting the information bits between them.
Diagram of Turbo Encoder:
- It consists of two parallel convolutional encoders separated by an interleaver; the input to the channel is the data bits m together with the parity bits X1 and X2 that the two encoders produce in response to m.
- Since the m information bits are transmitted as part of the code word, we call this a systematic turbo code.
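The encoder structure above can be sketched in a few lines of Python. This is an illustrative sketch, not the exact code from the figure: it assumes a memory-2 recursive systematic convolutional (RSC) constituent encoder with feedback polynomial 1+D+D^2 and feedforward polynomial 1+D^2 (a common textbook choice), and it omits trellis termination.

```python
def rsc_parity(bits):
    """Parity stream of a memory-2 recursive systematic convolutional
    encoder: feedback polynomial 1+D+D^2, feedforward polynomial 1+D^2."""
    s1 = s2 = 0                 # shift-register contents
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2         # recursive (feedback) bit
        p = a ^ s2              # feedforward parity bit
        parity.append(p)
        s2, s1 = s1, a          # shift the register
    return parity

def turbo_encode(m, perm):
    """Systematic turbo encoding: transmit (m, X1, X2).
    X1 is the parity of m; X2 is the parity of the interleaved copy."""
    x1 = rsc_parity(m)
    x2 = rsc_parity([m[i] for i in perm])
    return m, x1, x2

# Example usage with an arbitrary 8-bit block and a toy interleaver:
m = [1, 0, 1, 1, 0, 0, 1, 0]
perm = [3, 0, 6, 2, 7, 5, 1, 4]
m_out, x1, x2 = turbo_encode(m, perm)
```

Note that the information bits pass through unchanged (the systematic part), and only the parity bits differ between the two encoders because of the interleaver.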
Diagram of Turbo Decoder:
- In this figure, Decoder 1 generates a soft decision, in the form of a probability measure p(m1) on the transmitted information bits, based on its received code word (m, X1).
- This reliability information is passed to Decoder 2, which generates its own probability measure p(m2) from its received code word (m,X2) and the probability measure p(m1).
- Ideally the decoders eventually agree on probability measures that reduce to hard decisions m = m1 = m2.
- The stopping condition for turbo decoding is not well-defined, in part because there are many cases in which the turbo decoding algorithm does not converge; i.e., the decoders cannot agree on the value of m.
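The iterative exchange described above, together with the iteration cap needed because convergence is not guaranteed, can be sketched as a control loop. This is a skeleton, not the full algorithm: the constituent soft-in/soft-out (SISO) decoders, usually implemented with BCJR/MAP, are passed in as black-box functions, their parity-LLR inputs are omitted to keep the control flow visible, and the function names and LLR sign convention are our assumptions.

```python
from typing import Callable, List

# A SISO constituent decoder: takes channel LLRs of the systematic bits
# and a priori LLRs on the information bits, returns extrinsic LLRs.
Siso = Callable[[List[float], List[float]], List[float]]

def turbo_decode(llr_sys: List[float], dec1: Siso, dec2: Siso,
                 perm: List[int], max_iters: int = 8) -> List[int]:
    """Iterative decoding loop with an agreement-based stopping rule."""
    n = len(llr_sys)
    inv = [0] * n
    for j, p in enumerate(perm):
        inv[p] = j                        # inverse interleaver
    apriori = [0.0] * n                   # no prior knowledge initially
    for _ in range(max_iters):
        # Decoder 1 works on (m, X1); its extrinsic output becomes
        # Decoder 2's a priori input, and vice versa.
        ext1 = dec1(llr_sys, apriori)
        post1 = [llr_sys[i] + apriori[i] + ext1[i] for i in range(n)]
        # Decoder 2 sees the information bits in interleaved order.
        ext2_i = dec2([llr_sys[p] for p in perm], [ext1[p] for p in perm])
        ext2 = [ext2_i[inv[i]] for i in range(n)]
        post2 = [llr_sys[i] + ext1[i] + ext2[i] for i in range(n)]
        # Hard decisions (positive LLR means bit 0 in this convention).
        m1 = [int(L < 0) for L in post1]
        m2 = [int(L < 0) for L in post2]
        if m1 == m2:
            break                         # decoders agree: m = m1 = m2
        apriori = ext2                    # otherwise feed back and iterate
    return m2                             # may still disagree after max_iters
```

The `max_iters` cap is the practical answer to the ill-defined stopping condition: when the decoders never agree, the loop simply gives up and returns the current hard decisions.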
The performance curve (bit error probability versus Eb/N0) indicates several important aspects of turbo codes.
- Bit error probability of 10^-6 at an Eb/N0 of less than 1 dB
- The turbo decoder allows these codes to be decoded without excessive complexity
- The amazing aspect of turbo codes is that the encoding structure produces codes whose complexity is comparable to that of the codes known to achieve Shannon capacity, yet they remain practically decodable
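The "less than 1 dB" figure can be put in context with a standard sanity check: for a rate-R code on the real-valued AWGN channel, setting the capacity C = (1/2) log2(1 + 2R·Eb/N0) equal to R gives the minimum Eb/N0 for reliable communication. The formula is standard; the function name below is ours.

```python
import math

def shannon_limit_db(rate):
    """Minimum Eb/N0 (in dB) for reliable communication at code rate
    `rate` over the real AWGN channel, from C = R with
    C = 0.5 * log2(1 + 2 * rate * Eb/N0)."""
    ebno = (2 ** (2 * rate) - 1) / (2 * rate)
    return 10 * math.log10(ebno)

print(shannon_limit_db(0.5))   # 0.0 dB for a rate-1/2 code
```

For a rate-1/2 code the limit works out to exactly 0 dB, so a turbo code reaching 10^-6 below 1 dB is operating within about 1 dB of the Shannon limit; this closeness to capacity is precisely what made turbo codes remarkable.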