Recursive Least Squares for Real-Time Implementation
Zhao Zhao

Many estimation and control problems involve a process of the form
$$y_k = \phi_k \theta,$$
where $k \ge 0$ is the discrete-time step corresponding to the continuous-time step size $T_s$, the scalar or vector $y_k \in \mathbb{R}^p$ is the measurement at step $k$, the matrix $\phi_k \in \mathbb{R}^{p \times n}$ is the regressor at step $k$ whose entries consist of current and past data, and $\theta \in \mathbb{R}^n$ is a column vector of unknown parameters. The objective is to use $y_k$ and $\phi_k$ to estimate the components of $\theta$. In applications, $y_k$ and $\phi_k$ are corrupted by noise, and thus $y_k = \phi_k \theta$ does not hold exactly. This motivates the need for the least squares estimates of $\theta$ given below.

The measurements $y_k$ and the data in $\phi_k$ are typically obtained from a continuous-time process and, as such, are available at the sample times $t_k = k T_s$, where $T_s$ is the sample interval. The batch approach to this problem is to collect a large amount of data and then apply least squares optimization to the collected data to compute an estimate of $\theta$. In particular, collecting data over the time window $[0, k]$, it follows that
$$\begin{bmatrix} y_0 \\ \vdots \\ y_k \end{bmatrix} = \begin{bmatrix} \phi_0 \\ \vdots \\ \phi_k \end{bmatrix} \theta.$$
Note that the above equation has the form $Y_k = \Phi_k \theta$, where $Y_k \in \mathbb{R}^{(k+1)p}$ denotes the vector of stacked measurements $y_0, \ldots, y_k$, $\Phi_k \in \mathbb{R}^{(k+1)p \times n}$ denotes the matrix of stacked regressors $\phi_0, \ldots, \phi_k$, and $\theta$ denotes the vector of unknown parameters.

In the presence of noise corrupting the data $Y_k$ and $\Phi_k$, the equation $Y_k = \Phi_k \theta$ may not have a solution. In this case, it is useful to replace it by a least squares optimization problem of the form
$$\min_{\hat{\theta} \in \mathbb{R}^n} \ (Y_k - \Phi_k \hat{\theta})^T (Y_k - \Phi_k \hat{\theta}) + (\hat{\theta} - \theta_0)^T R (\hat{\theta} - \theta_0),$$
where $R \in \mathbb{R}^{n \times n}$ is a positive semidefinite (and thus, by definition, symmetric) matrix, and $\theta_0 \in \mathbb{R}^n$ is an initial estimate of $\theta$. In particular, the batch least squares (BLS) estimate is given by
$$\hat{\theta}_{\rm BLS} = \left(\Phi_k^T \Phi_k + R\right)^{-1} \left(\Phi_k^T Y_k + R \theta_0\right).$$
Note that the inverse is of size $n \times n$, and thus the computational requirement of the inverse is of order $n^3$. In addition to the inverse, three matrix multiplications are needed. Note also that the memory needed to store $\Phi_k$ grows with $k$. Furthermore, if $\Phi_k$ has full column rank, then $R$ can be set to zero, and the BLS estimate becomes
$$\hat{\theta}_{\rm BLS} = \left(\Phi_k^T \Phi_k\right)^{-1} \Phi_k^T Y_k.$$
In many applications, computational speed and memory are limited. One way to alleviate these requirements is to recursively update an estimate of $\theta$ using each additional measurement $y_k$. A recursive algorithm of this type is especially convenient for real-time applications.
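To make the batch computation concrete, the following is a minimal Python/NumPy sketch of the BLS estimate; the function name bls_estimate and the synthetic data are illustrative assumptions, not part of the original development.

```python
import numpy as np

def bls_estimate(Phi, Y, R=None, theta0=None):
    """Batch least squares: minimizes ||Y - Phi theta||^2
    + (theta - theta0)^T R (theta - theta0)."""
    if R is None:
        # If Phi has full column rank, R can be set to zero.
        return np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
    return np.linalg.solve(Phi.T @ Phi + R, Phi.T @ Y + R @ theta0)

# Illustrative use with p = 1 and n = 3 (hypothetical data):
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])
Phi = rng.standard_normal((100, 3))                     # stacked regressors
Y = Phi @ theta_true + 0.01 * rng.standard_normal(100)  # noisy measurements
print(bls_estimate(Phi, Y))                             # close to theta_true
```

Note that np.linalg.solve is used in place of an explicit inverse; the cost of the $n \times n$ solve is nevertheless still of order $n^3$.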

Recursive least squares (RLS) is a technique used for minimizing a quadratic cost function, where the minimizer is updated at each step as new data become available. RLS is more computationally efficient than batch least squares, and it is extensively used for system identification and adaptive control. This article derives RLS and emphasizes its real-time implementation in terms of the availability of the data as well as the time needed for the computation.

RECURSIVE LEAST SQUARES

This section provides a statement and proof of the RLS algorithm. This result involves a recursive algorithm for optimizing a cumulative cost function $J_k$ at each step $k$. The optimization of $J_k$ updates the estimate $\theta_k$ of $\theta$ as measurements and data become available. We include a forgetting factor $\lambda \in (0, 1]$, which, when $\lambda < 1$, provides higher weighting to more recent measurements and data.

Theorem 1:

For all $k \ge 0$, let $y_k \in \mathbb{R}^p$ and $\phi_k \in \mathbb{R}^{p \times n}$, and let $\lambda \in (0, 1]$. Furthermore, let $P_0 \in \mathbb{R}^{n \times n}$ be positive definite, and let $\theta_0 \in \mathbb{R}^n$. For all $k \ge 0$, denote the minimizer of the function
$$J_k(\hat{\theta}) \stackrel{\triangle}{=} \sum_{i=0}^{k} \lambda^{k-i} (y_i - \phi_i \hat{\theta})^T (y_i - \phi_i \hat{\theta}) + \lambda^{k+1} (\hat{\theta} - \theta_0)^T P_0^{-1} (\hat{\theta} - \theta_0)$$
by
$$\theta_{k+1} \stackrel{\triangle}{=} \operatorname*{argmin}_{\hat{\theta} \in \mathbb{R}^n} J_k(\hat{\theta}).$$
Then, for all $k \ge 0$, $\theta_{k+1}$ is given by
$$P_{k+1} = \frac{1}{\lambda} P_k - \frac{1}{\lambda} P_k \phi_k^T \left(\lambda I_p + \phi_k P_k \phi_k^T\right)^{-1} \phi_k P_k,$$
$$\theta_{k+1} = \theta_k + P_{k+1} \phi_k^T (y_k - \phi_k \theta_k).$$
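As a concrete illustration of these update equations, here is a minimal Python/NumPy sketch; the class name RLS and the example data are assumptions made for illustration.

```python
import numpy as np

class RLS:
    """Recursive least squares with forgetting factor lam (Theorem 1)."""
    def __init__(self, theta0, P0, lam=1.0):
        self.theta = np.asarray(theta0, dtype=float)  # current estimate
        self.P = np.asarray(P0, dtype=float)          # n x n, positive definite
        self.lam = lam                                # forgetting factor in (0, 1]

    def update(self, y, phi):
        """y: (p,), phi: (p, n). Returns theta_{k+1}."""
        lam, P = self.lam, self.P
        # P_{k+1} = (1/lam) P_k - (1/lam) P_k phi^T (lam I_p + phi P_k phi^T)^{-1} phi P_k
        S = lam * np.eye(phi.shape[0]) + phi @ P @ phi.T
        P_new = (P - P @ phi.T @ np.linalg.solve(S, phi @ P)) / lam
        # theta_{k+1} = theta_k + P_{k+1} phi^T (y_k - phi theta_k)
        self.theta = self.theta + P_new @ phi.T @ (y - phi @ self.theta)
        self.P = P_new
        return self.theta

# Illustrative use with p = 1 and n = 2:
rng = np.random.default_rng(0)
est = RLS(theta0=np.zeros(2), P0=np.eye(2), lam=0.98)
for k in range(200):
    phi = rng.standard_normal((1, 2))   # regressor phi_k
    y = phi @ np.array([3.0, -1.0])     # measurement y_k
    est.update(y, phi)
print(est.theta)                        # approaches [3, -1]
```

The $p \times p$ inverse is applied via np.linalg.solve rather than formed explicitly.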

Three Useful Lemmas:

The following result is the quadratic minimization lemma:

LEMMA 1 :

Let $A \in \mathbb{R}^{n \times n}$, assume that $A$ is positive definite, let $b \in \mathbb{R}^n$ and $c \in \mathbb{R}$, and define $f \colon \mathbb{R}^n \to \mathbb{R}$ by
$$f(x) \stackrel{\triangle}{=} x^T A x + 2 b^T x + c.$$
Then the unique minimizer of $f$ is
$$\operatorname*{argmin}_{x \in \mathbb{R}^n} f(x) = -A^{-1} b,$$
and the minimum value of $f$ is
$$\min_{x \in \mathbb{R}^n} f(x) = c - b^T A^{-1} b.$$
Both facts follow from completing the square, which gives $f(x) = (x + A^{-1} b)^T A (x + A^{-1} b) + c - b^T A^{-1} b$.

The following result is the matrix inversion lemma:

LEMMA 2 :

Let $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times p}$, $C \in \mathbb{R}^{p \times p}$, and $D \in \mathbb{R}^{p \times n}$, and assume that $A$, $C$, and $A + BCD$ are nonsingular. Then $C^{-1} + D A^{-1} B$ is nonsingular, and
$$(A + BCD)^{-1} = A^{-1} - A^{-1} B \left(C^{-1} + D A^{-1} B\right)^{-1} D A^{-1}.$$
A numerical check of this identity is sketched after Lemma 3 below.

LEMMA 3 :

Let $A \in \mathbb{R}^{n \times n}$ be positive semidefinite, let $B \in \mathbb{R}^{p \times n}$, and let $C \in \mathbb{R}^{n \times n}$ be positive definite. Then $A + B^T B + C$ is positive definite.
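As noted above, the identity in Lemma 2 is easy to check numerically; the following sketch uses arbitrary random matrices, shifted toward the identity to keep them well conditioned (an illustrative assumption).

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 4, 2
A = rng.standard_normal((n, n)) + 5 * np.eye(n)   # shifted to be well conditioned
B = rng.standard_normal((n, p))
C = rng.standard_normal((p, p)) + 5 * np.eye(p)   # shifted to be well conditioned
D = rng.standard_normal((p, n))

lhs = np.linalg.inv(A + B @ C @ D)
Ai = np.linalg.inv(A)
rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ai @ B) @ D @ Ai
print(np.allclose(lhs, rhs))                      # True
```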

Proof :

Note that $J_0(\hat{\theta})$ can be written as
$$J_0(\hat{\theta}) = \hat{\theta}^T A_0 \hat{\theta} + 2 b_0^T \hat{\theta} + c_0,$$
where
$$A_0 \stackrel{\triangle}{=} \phi_0^T \phi_0 + \lambda P_0^{-1}, \qquad b_0 \stackrel{\triangle}{=} -\left(\phi_0^T y_0 + \lambda P_0^{-1} \theta_0\right), \qquad c_0 \stackrel{\triangle}{=} y_0^T y_0 + \lambda \theta_0^T P_0^{-1} \theta_0.$$
Defining $P_1 \stackrel{\triangle}{=} A_0^{-1}$ and applying Lemma 2 with $A = \lambda P_0^{-1}$, $B = \phi_0^T$, $C = I_p$, and $D = \phi_0$, we have
$$P_1 = \frac{1}{\lambda} P_0 - \frac{1}{\lambda} P_0 \phi_0^T \left(\lambda I_p + \phi_0 P_0 \phi_0^T\right)^{-1} \phi_0 P_0.$$
Additionally, because $A_0$ is positive definite, it follows from Lemma 1 that the unique minimizer of $J_0$ is given by
$$\theta_1 = -A_0^{-1} b_0 = P_1 \left(\phi_0^T y_0 + \lambda P_0^{-1} \theta_0\right) = \theta_0 + P_1 \phi_0^T (y_0 - \phi_0 \theta_0),$$
where the last equality uses $\lambda P_0^{-1} = P_1^{-1} - \phi_0^T \phi_0$.

Hence, Theorem 1 is satisfied for $k = 0$.

Now, let $k \ge 1$. Then $J_k(\hat{\theta})$ can be written as
$$J_k(\hat{\theta}) = \hat{\theta}^T A_k \hat{\theta} + 2 b_k^T \hat{\theta} + c_k,$$
where
$$A_k \stackrel{\triangle}{=} \sum_{i=0}^{k} \lambda^{k-i} \phi_i^T \phi_i + \lambda^{k+1} P_0^{-1}, \qquad b_k \stackrel{\triangle}{=} -\sum_{i=0}^{k} \lambda^{k-i} \phi_i^T y_i - \lambda^{k+1} P_0^{-1} \theta_0,$$
and $c_k$ collects the terms that do not depend on $\hat{\theta}$. Furthermore, $A_k$ and $b_k$ can be written recursively as
$$A_k = \lambda A_{k-1} + \phi_k^T \phi_k, \qquad b_k = \lambda b_{k-1} - \phi_k^T y_k.$$
Defining $P_{k+1} \stackrel{\triangle}{=} A_k^{-1}$, it follows from Lemma 2 with $A = \lambda A_{k-1}$, $B = \phi_k^T$, $C = I_p$, and $D = \phi_k$ that
$$P_{k+1} = \frac{1}{\lambda} P_k - \frac{1}{\lambda} P_k \phi_k^T \left(\lambda I_p + \phi_k P_k \phi_k^T\right)^{-1} \phi_k P_k.$$
Furthermore, the minimizer $\theta_k$ of $J_{k-1}$ is given by $\theta_k = -A_{k-1}^{-1} b_{k-1} = -P_k b_{k-1}$. Because $A_k$ is positive definite by Lemma 3, it follows from Lemma 1 that
$$\theta_{k+1} = -A_k^{-1} b_k = P_{k+1} \left(\lambda P_k^{-1} \theta_k + \phi_k^T y_k\right) = \theta_k + P_{k+1} \phi_k^T (y_k - \phi_k \theta_k),$$
where the last equality uses $\lambda P_k^{-1} = P_{k+1}^{-1} - \phi_k^T \phi_k$. Hence, Theorem 1 is satisfied.

Note that, if $\lambda = 1$, $R$ is positive definite, and $P_0 = R^{-1}$, then the RLS cost $J_k$ coincides with the BLS cost, and thus the RLS minimizer $\theta_{k+1}$ is equal to the BLS minimizer $\hat{\theta}_{\rm BLS}$.
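This equivalence can be confirmed numerically. The sketch below reuses the illustrative RLS class and bls_estimate function defined above, with $\lambda = 1$ and $R = P_0^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, steps = 3, 30
theta0, P0 = np.zeros(n), 0.5 * np.eye(n)

rls = RLS(theta0, P0, lam=1.0)
Phi = rng.standard_normal((steps, n))
Y = Phi @ np.array([1.0, 2.0, -1.0]) + 0.1 * rng.standard_normal(steps)
for k in range(steps):
    rls.update(Y[k:k+1], Phi[k:k+1, :])      # process one measurement at a time

theta_bls = bls_estimate(Phi, Y, R=np.linalg.inv(P0), theta0=theta0)
print(np.allclose(rls.theta, theta_bls))     # True: RLS reproduces BLS
```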

The notation $\theta_{k+1}$ for the minimizer of $J_k$ emphasizes the fact that $\theta_{k+1}$, which is based on data available up to step $k$, is not available until the update given by Theorem 1 is completed, which occurs at step $k+1$.

The computational requirements of RLS are primarily determined by $P_k$. In particular, $P_k$ is $n \times n$, and thus the computational requirement for updating $P_k$ in Theorem 1 is of order $n^2$. Furthermore, the inverse in Theorem 1 is of size $p \times p$, which, since typically $p \ll n$, is much less demanding than the $n \times n$ inverse required by BLS. In addition, the storage requirements of RLS are of order $n^2$, which does not grow with $k$. Consequently, the computational and memory requirements of RLS are significantly less than those of BLS.

ALTERNATIVE UPDATE WITH INVERSE

The following result is a variation of Theorem 1. In this formulation, the updates of $\theta_k$ and $P_k$ are reversed, so that $\theta_{k+1}$ is computed from $P_k$ and the explicit $p \times p$ inverse before $P_{k+1}$ is formed.

Theorem 2:

For all $k \ge 0$, let $y_k \in \mathbb{R}^p$ and $\phi_k \in \mathbb{R}^{p \times n}$, and let $\lambda \in (0, 1]$. Furthermore, let $P_0 \in \mathbb{R}^{n \times n}$ be positive definite, and let $\theta_0 \in \mathbb{R}^n$. For all $k \ge 0$, denote the minimizer of the function $J_k$ defined in Theorem 1 by $\theta_{k+1}$. Then, for all $k \ge 0$, $\theta_{k+1}$ is given by
$$\theta_{k+1} = \theta_k + P_k \phi_k^T \left(\lambda I_p + \phi_k P_k \phi_k^T\right)^{-1} (y_k - \phi_k \theta_k),$$
$$P_{k+1} = \frac{1}{\lambda} P_k - \frac{1}{\lambda} P_k \phi_k^T \left(\lambda I_p + \phi_k P_k \phi_k^T\right)^{-1} \phi_k P_k.$$

Proof :

Note that the update for $P_{k+1}$ in Theorem 1 implies
$$P_{k+1} \phi_k^T = \frac{1}{\lambda} P_k \phi_k^T \left[I_p - \left(\lambda I_p + \phi_k P_k \phi_k^T\right)^{-1} \phi_k P_k \phi_k^T\right] = P_k \phi_k^T \left(\lambda I_p + \phi_k P_k \phi_k^T\right)^{-1}.$$
Using this identity in the update for $\theta_{k+1}$ in Theorem 1, we have
$$\theta_{k+1} = \theta_k + P_k \phi_k^T \left(\lambda I_p + \phi_k P_k \phi_k^T\right)^{-1} (y_k - \phi_k \theta_k).$$
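A minimal sketch of this reversed-order update in Python/NumPy follows, with the same illustrative conventions as the earlier sketches; note that the explicit $p \times p$ inverse now appears in the update of $\theta_k$.

```python
import numpy as np

def rls_update_reversed(theta, P, y, phi, lam=1.0):
    """One step of Theorem 2: update theta first, then P."""
    S = lam * np.eye(phi.shape[0]) + phi @ P @ phi.T   # p x p
    K = P @ phi.T @ np.linalg.inv(S)                   # gain P_k phi^T S^{-1}
    theta_new = theta + K @ (y - phi @ theta)          # theta update uses P_k
    P_new = (P - K @ phi @ P) / lam                    # then form P_{k+1}
    return theta_new, P_new
```

Because the gain $P_k \phi_k^T (\lambda I_p + \phi_k P_k \phi_k^T)^{-1}$ is shared by both updates, it is computed once and reused.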

REAL-TIME IMPLEMENTATION OF RECURSIVE LEAST SQUARES

In many applications, it is desirable to implement RLS so that the estimate is available in real time without latency. Note that the estimate $\theta_{k+1}$ depends on measurements available up to and including step $k$, namely, $y_0, \ldots, y_k$ and $\phi_0, \ldots, \phi_k$. Since time is needed to compute $\theta_{k+1}$, the updated estimate is not available at time $t_k$; rather, it is available at the next step, namely, $t_{k+1}$. Consequently, the minimizer of $J_k$ is denoted by $\theta_{k+1}$, where the subscript $k+1$ conveys the fact that the minimizer of $J_k$ is not available until step $k+1$. Figure 1 shows how the measurements and data that are available at step $k$ are used during the time interval $[t_k, t_{k+1})$ to compute the next estimate $\theta_{k+1}$.
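The timing pattern described above can be expressed as a simple loop. The sketch below is schematic: read_sensor and apply_control are hypothetical stand-ins for data acquisition and for whatever consumes the estimate, and rls_update_reversed is the illustrative function sketched earlier.

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = np.array([2.0, -0.5])

def read_sensor(k):
    """Hypothetical data acquisition: returns (y_k, phi_k) at time t_k."""
    phi = rng.standard_normal((1, 2))
    return phi @ theta_true, phi

def apply_control(theta):
    """Hypothetical consumer of the current estimate, such as a controller."""
    pass

theta, P, lam = np.zeros(2), np.eye(2), 1.0
for k in range(100):
    y_k, phi_k = read_sensor(k)   # y_k and phi_k become available at step k
    apply_control(theta)          # only theta_k is available at time t_k
    # During [t_k, t_{k+1}), compute theta_{k+1} from data through step k.
    theta, P = rls_update_reversed(theta, P, y_k, phi_k, lam)
print(theta)                      # approaches theta_true
```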