Algorithms for Adaptive Equalization
Since an adaptive equalizer compensates for an unknown and time-varying channel, it requires a specific algorithm to update the equalizer coefficients and track the channel variations. A wide range of algorithms exist to adapt the filter coefficients.
The performance of an algorithm is determined by various factors which include:
- Rate of convergence: This is defined as the number of iterations required for the algorithm, in response to stationary inputs, to converge close enough to the optimum solution. A fast rate of convergence allows the algorithm to adapt rapidly to a stationary environment of unknown statistics.
- Misadjustment: This parameter provides a quantitative measure of the amount by which the final value of the mean square error, averaged over an ensemble of adaptive filters, deviates from the optimal minimum mean square error.
- Computational complexity: This is the number of operations required to make one complete iteration of the algorithm.
- Numerical properties: When an algorithm is implemented numerically, inaccuracies are produced due to round-off noise and representation errors in the computer. These kinds of errors influence the stability of the algorithm.
Three classic equalizer algorithms:
- Zero Forcing Algorithm
- Least mean squares (LMS) algorithm
- Recursive least squares (RLS) algorithm
Zero Forcing Algorithm:
- In a zero forcing equalizer, the equalizer coefficients c_n are chosen to force the samples of the combined channel and equalizer impulse response to zero at all but one of the N T-spaced sample points in the tapped delay line filter.
- The zero forcing equalizer has the disadvantage that the inverse filter may excessively amplify noise at frequencies where the folded channel spectrum has high attenuation.
- The ZF equalizer thus neglects the effect of noise altogether, and is not often used for wireless links.
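As a rough illustration (not from the notes above), assuming the discrete channel impulse response h is known, the zero-forcing taps can be approximated by solving a linear system that forces the combined channel-plus-equalizer response toward a delayed unit impulse; the function name and channel values here are hypothetical:

```python
import numpy as np

def zero_forcing_taps(h, num_taps):
    """Solve for equalizer taps c_n that force the combined channel+equalizer
    response toward a unit impulse at the centre sample (zero elsewhere)."""
    n_out = len(h) + num_taps - 1
    # Convolution matrix of the channel: A @ c equals np.convolve(h, c)
    A = np.zeros((n_out, num_taps))
    for j in range(num_taps):
        A[j:j + len(h), j] = h
    # Desired combined response: 1 at the centre sample, 0 at all others
    d = np.zeros(n_out)
    d[n_out // 2] = 1.0
    # Least-squares solve; with enough taps the residual at the sample
    # points approaches zero -- the zero-forcing condition
    c, *_ = np.linalg.lstsq(A, d, rcond=None)
    return c

# Example: mildly dispersive (hypothetical) 3-tap channel
h = np.array([1.0, 0.4, 0.1])
c = zero_forcing_taps(h, num_taps=11)
combined = np.convolve(h, c)   # close to a delayed unit impulse
```

Note that this sketch inverts the channel without regard to noise, which is exactly the weakness described above: where the channel response is small, the inverse taps become large and amplify noise.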
Least Mean Square (LMS) Algorithm: A more robust equalizer is the LMS equalizer.
- The criterion used is the minimization of the mean square error (MSE) between the desired equalizer output and the actual equalizer output.
- The LMS algorithm is the simplest equalization algorithm and requires only 2N + 1 operations per iteration. Letting the variable n denote the sequence of iterations, LMS is computed iteratively by

  d^(n) = w_N^T(n) y_N(n)
  e(n) = x(n) - d^(n)
  w_N(n+1) = w_N(n) + α e*(n) y_N(n)

  where w_N(n) is the vector of equalizer coefficients, y_N(n) is the input vector, x(n) is the desired (training) symbol, and α is the step size.
- The convergence rate of the LMS algorithm is slow due to the fact that there is only one parameter, the step size α, that controls the adaptation rate.
- To prevent the adaptation from becoming unstable, the value of α is chosen to satisfy

  0 < α < 2 / λ_max

  where λ_max is the largest of the eigenvalues λ_i of the covariance matrix R_NN of the input signal.
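The LMS iteration above can be sketched as follows; this is a minimal real-valued illustration (for real signals the conjugate in the update disappears), and the channel, symbol source, and step size are all hypothetical choices, not taken from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative training setup: BPSK symbols through an assumed 3-tap channel
h = np.array([1.0, 0.4, 0.1])
symbols = rng.choice([-1.0, 1.0], size=5000)
received = np.convolve(symbols, h)[:len(symbols)]

num_taps = 11
delay = num_taps // 2            # decision delay of the equalizer
w = np.zeros(num_taps)           # equalizer coefficient vector w_N(n)
alpha = 0.01                     # step size; must satisfy 0 < alpha < 2/lambda_max
errors = []

for n in range(num_taps - 1, len(symbols)):
    y = received[n - num_taps + 1:n + 1][::-1]   # input vector y_N(n)
    d_hat = w @ y                                # equalizer output d^(n)
    e = symbols[n - delay] - d_hat               # error against training symbol
    w = w + alpha * e * y                        # LMS coefficient update
    errors.append(e)

mse = float(np.mean(np.square(errors[-500:])))   # steady-state mean square error
```

With the single step size alpha controlling adaptation, convergence takes many iterations, which is the slow-convergence behaviour noted above.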
Recursive least squares (RLS):
- In order to achieve faster convergence, complex algorithms which involve additional parameters are used.
- A technique which significantly improves the convergence of adaptive equalizers is known as recursive least squares (RLS).
- The least square error based on the time average is defined as

  J(n) = Σ_{i=1}^{n} λ^(n−i) e*(i, n) e(i, n)

  where λ is the weighting factor close to 1, but smaller than 1, and e*(i, n) is the complex conjugate of e(i, n).
- The error e(i, n) is given as

  e(i, n) = x(i) − w_N^T(n) y_N(i)

  where y_N(i) is the data input vector at time i.
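A standard recursive implementation of this weighted least-squares criterion propagates an estimate of the inverse input correlation matrix. The sketch below reuses the same hypothetical channel and training setup as the LMS sketch; the initialization constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative training setup: BPSK symbols through an assumed 3-tap channel
h = np.array([1.0, 0.4, 0.1])
symbols = rng.choice([-1.0, 1.0], size=2000)
received = np.convolve(symbols, h)[:len(symbols)]

num_taps = 11
delay = num_taps // 2
lam = 0.99                        # forgetting factor lambda, close to but below 1
w = np.zeros(num_taps)
P = np.eye(num_taps) * 100.0      # inverse-correlation estimate, large initial value
errors = []

for n in range(num_taps - 1, len(symbols)):
    y = received[n - num_taps + 1:n + 1][::-1]   # data input vector y_N(n)
    k = P @ y / (lam + y @ P @ y)                # gain vector
    e = symbols[n - delay] - w @ y               # a priori error e(n)
    w = w + k * e                                # coefficient update
    P = (P - np.outer(k, y @ P)) / lam           # update inverse correlation matrix
    errors.append(e)

mse = float(np.mean(np.square(errors[-200:])))
```

The per-iteration cost is O(N^2) rather than the O(N) of LMS, which is the complexity price paid for the much faster convergence.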