Ngô Quốc Anh

June 17, 2009

Linear Least Squares Problem and QR Decomposition

In linear algebra, a QR decomposition (also called a QR factorization) of a matrix is a decomposition of the matrix into an orthogonal and a right triangular matrix. QR decomposition is often used to solve the linear least squares problem, and is the basis for a particular eigenvalue algorithm, the QR algorithm.

Square matrix

A QR decomposition of a real square matrix A is a decomposition of A as A = QR, where Q is an orthogonal matrix (meaning that Q^TQ = I ) and R is an upper triangular matrix (also called right triangular matrix). This generalizes to a complex square matrix A and a unitary matrix Q. If A is nonsingular, then this factorization is unique if we require that the diagonal elements of R are positive.

Rectangular matrix

More generally, we can factor a complex m \times n matrix A, with m > n, as the product of an m \times m unitary matrix Q and an m\times n upper triangular matrix R. As the bottom m-n rows of an m \times n upper triangular matrix consist entirely of zeroes, it is often useful to partition R, or both R and Q:

A = QR = Q \begin{pmatrix} R_1 \\ 0 \end{pmatrix} = \begin{pmatrix} Q_1 & Q_2 \end{pmatrix} \begin{pmatrix} R_1 \\ 0 \end{pmatrix} = Q_1 R_1,

where R_1 is an n\times n upper triangular matrix, Q_1 is m\times n, Q_2 is m\times (m-n), and Q_1 and Q_2 both have orthonormal columns.

Golub & Van Loan call Q_1R_1 the thin QR factorization of A. If A is of full rank n and we require that the diagonal elements of R_1 be positive, then R_1 and Q_1 are unique, but in general Q_2 is not. R_1 is then equal to the upper triangular factor of the Cholesky decomposition of A^\star A (= A^TA if A is real).
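To see the Cholesky connection concretely, take the 4\times 2 matrix A with columns (2,1,0,0)^T and (4,3,0,0)^T used in the example later in this post, for which A^TA = \begin{pmatrix} 5 & 11 \\ 11 & 25\end{pmatrix}. A minimal pure-Python sketch (the helper name cholesky_upper is ours, not from any library):

```python
import math

def cholesky_upper(m):
    """Upper-triangular U with U^T U = m, for a small symmetric
    positive-definite matrix m given as nested lists."""
    n = len(m)
    u = [[0.0] * n for _ in range(n)]
    for i in range(n):
        u[i][i] = math.sqrt(m[i][i] - sum(u[k][i] ** 2 for k in range(i)))
        for j in range(i + 1, n):
            u[i][j] = (m[i][j] - sum(u[k][i] * u[k][j] for k in range(i))) / u[i][i]
    return u

# A^T A for the 4x2 matrix A of the worked example.
ata = [[5.0, 11.0],
       [11.0, 25.0]]
u = cholesky_upper(ata)
# u reproduces R_1 = ((sqrt 5, 11/sqrt 5), (0, 2/sqrt 5)) entry by entry.
```

The factor u agrees with the R_1 obtained by Gram–Schmidt in the example, as the identity A^TA = R_1^TQ_1^TQ_1R_1 = R_1^TR_1 predicts.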

QL, RQ and LQ decompositions

Analogously, we can define QL, RQ, and LQ decompositions, with L being a lower triangular matrix.

How to solve the linear least squares problem

Given the QR decomposition A=QR, one can solve the least squares problem \min_x \|Ax-b\|, where A is an m \times n matrix, x \in \mathbb R^n and b \in \mathbb R^m with m > n. To be precise, in the factorization A=QR, Q is an m \times m matrix and R is an m \times n matrix, which we partition as

Q = \begin{pmatrix} {\left[ Q_1 \right]}_{m \times n} & {\left[ Q_2 \right]}_{m \times (m-n)} \end{pmatrix}, \quad R = \begin{pmatrix} {\left[ R_1 \right]}_{n \times n} \\ {\left[ 0 \right]}_{(m-n) \times n} \end{pmatrix}.

When m > n the system Ax=b generally has no exact solution, so we minimize \|Ax-b\|. Since the bottom block of R is zero, Ax = Q_1R_1x, and the normal equations A^TAx = A^Tb reduce, using Q_1^TQ_1 = I, to R_1x = Q_1^Tb. Thus x=R_1^{-1}Q_1^Tb, provided A has full rank n.
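In practice one does not form R_1^{-1}: the triangular system R_1x = Q_1^Tb is solved by back substitution. A minimal sketch, with the thin factors hard-coded for a small illustrative system (the helper name back_substitute is ours):

```python
def back_substitute(r, y):
    """Solve the upper-triangular system r x = y by back substitution."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = y[i] - sum(r[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / r[i][i]
    return x

# Suppose the thin QR of A has already been computed; the least-squares
# solution then satisfies R_1 x = Q_1^T b.
r1 = [[2.0, 1.0],
      [0.0, 3.0]]
qtb = [5.0, 6.0]               # stands for Q_1^T b
x = back_substitute(r1, qtb)   # -> [1.5, 2.0]
```

Back substitution costs O(n^2) operations, versus O(n^3) for forming the inverse, and is also numerically more stable.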

Computing the QR decomposition

There are several methods for computing the QR decomposition, such as the Gram–Schmidt process, Householder transformations, or Givens rotations. Each has a number of advantages and disadvantages. In this section, we mainly consider the Gram–Schmidt process, applied to the columns of the matrix A=(\mathbf{a}_1| \cdots|\mathbf{a}_n). Assume that after finishing the Gram–Schmidt process we obtain the orthonormal vectors \mathbf{e}_1, \ldots, \mathbf{e}_n. We then put

Q = (\mathbf{e}_1|\cdots |\mathbf{e}_n)

and thus,

R=Q^TA.
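The procedure just described can be sketched directly in code. The fragment below (our own pure-Python illustration, not taken from any library) runs classical Gram–Schmidt on the columns of A and records the coefficients, so that the returned r equals Q^TA restricted to its top n rows:

```python
import math

def gram_schmidt_qr(a_cols):
    """Thin QR via classical Gram-Schmidt on a list of columns of A.
    Returns (e, r): e holds the orthonormal vectors e_1, ..., e_n and
    r is the n x n upper triangular matrix with r[i][j] = <e_i, a_j>."""
    n = len(a_cols)
    e = []
    r = [[0.0] * n for _ in range(n)]
    for j, a in enumerate(a_cols):
        v = list(a)
        for i in range(j):
            # subtract the projection of a_j onto the earlier e_i
            r[i][j] = sum(e[i][k] * a[k] for k in range(len(a)))
            v = [v[k] - r[i][j] * e[i][k] for k in range(len(a))]
        r[j][j] = math.sqrt(sum(x * x for x in v))  # ||v||
        e.append([x / r[j][j] for x in v])
    return e, r

# Columns of the 4x2 matrix A from the example below.
e, r = gram_schmidt_qr([[2.0, 1.0, 0.0, 0.0], [4.0, 3.0, 0.0, 0.0]])
```

Classical Gram–Schmidt is the easiest variant to state, but it loses orthogonality in floating point when the columns are nearly dependent; the modified variant, or Householder reflections, are preferred in production code.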

Example

Let us consider the case when

A = \begin{pmatrix} 2 & 4 \\ 1 & 3 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}.

Then

\mathbf a_1 = \begin{pmatrix} 2 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad \mathbf a_2 = \begin{pmatrix} 4 \\ 3 \\ 0 \\ 0 \end{pmatrix}.

From \mathbf a_1 we obtain \mathbf e_1 := \frac{\mathbf a_1}{\|\mathbf a_1\|}, that is,

\mathbf e_1 = \begin{pmatrix} \frac{2}{\sqrt 5} \\ \frac{1}{\sqrt 5} \\ 0 \\ 0 \end{pmatrix}.

To obtain \mathbf e_2, we first project \mathbf a_2 onto \mathbf e_1, then subtract this projection from \mathbf a_2 and normalize. More precisely,

\mathbf e_2 = \frac{\mathbf a_2 - \mathrm{proj}_{\mathbf e_1}\mathbf a_2}{\left\| \mathbf a_2 - \mathrm{proj}_{\mathbf e_1}\mathbf a_2 \right\|},

where

\mathrm{proj}_{\mathbf e_1}\mathbf a_2 = \frac{\left\langle \mathbf e_1, \mathbf a_2 \right\rangle}{\left\langle \mathbf e_1, \mathbf e_1 \right\rangle}\,\mathbf e_1 = \frac{11}{\sqrt 5}\,\mathbf e_1 = \begin{pmatrix} \frac{22}{5} \\ \frac{11}{5} \\ 0 \\ 0 \end{pmatrix},

which yields

\mathbf e_2 = \begin{pmatrix} -\frac{1}{\sqrt 5} \\ \frac{2}{\sqrt 5} \\ 0 \\ 0 \end{pmatrix}.

Thus

Q = \begin{pmatrix} \frac{2}{\sqrt 5} & -\frac{1}{\sqrt 5} & 0 & 0 \\ \frac{1}{\sqrt 5} & \frac{2}{\sqrt 5} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},

where the last two columns complete \{\mathbf e_1, \mathbf e_2\} to an orthonormal basis of \mathbb R^4,

and

R = Q^T A = \begin{pmatrix} \frac{2}{\sqrt 5} & \frac{1}{\sqrt 5} & 0 & 0 \\ -\frac{1}{\sqrt 5} & \frac{2}{\sqrt 5} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 2 & 4 \\ 1 & 3 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} \sqrt 5 & \frac{11}{\sqrt 5} \\ 0 & \frac{2}{\sqrt 5} \\ 0 & 0 \\ 0 & 0 \end{pmatrix}.

Finally,

\underbrace{\begin{pmatrix} 2 & 4 \\ 1 & 3 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}}_{A} = \underbrace{\begin{pmatrix} \frac{2}{\sqrt 5} & -\frac{1}{\sqrt 5} & 0 & 0 \\ \frac{1}{\sqrt 5} & \frac{2}{\sqrt 5} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}}_{Q}\underbrace{\begin{pmatrix} \sqrt 5 & \frac{11}{\sqrt 5} \\ 0 & \frac{2}{\sqrt 5} \\ 0 & 0 \\ 0 & 0 \end{pmatrix}}_{R}.
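As a sanity check, the factors can be verified numerically. This short pure-Python snippet hard-codes Q and A from the example, forms R = Q^TA, and confirms that QR reproduces A (matmul and transpose are our own small helpers):

```python
s5 = 5 ** 0.5

# Q and A as derived in the example above.
Q = [[2 / s5, -1 / s5, 0.0, 0.0],
     [1 / s5,  2 / s5, 0.0, 0.0],
     [0.0,     0.0,    1.0, 0.0],
     [0.0,     0.0,    0.0, 1.0]]
A = [[2.0, 4.0],
     [1.0, 3.0],
     [0.0, 0.0],
     [0.0, 0.0]]

def matmul(x, y):
    """Naive matrix product of nested lists."""
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def transpose(x):
    return [list(row) for row in zip(*x)]

R = matmul(transpose(Q), A)   # upper triangular: the bottom two rows vanish
QR = matmul(Q, R)             # reproduces A up to rounding
```

The computed R matches the matrix displayed above, and the product QR returns the original A, which is exactly the factorization identity the example derives.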

In Maple, you can practice as shown below.

[Figure: Maple worksheet computing the QR decomposition of A]

2 Comments

  1. I like your computation with Maple 😀
    A maplet would be useful for summarizing your work in a single application.

    I'm also writing a paper about numerical methods for eigenvalue problems. QR factorization plays an important role in the QR method.

    Comment by Tuan Minh — June 18, 2009 @ 19:09

    Thanks Minh. Since I am not working in applied mathematics, there may be some mistakes in my posts. Please let me know if you spot any.

    Comment by Ngô Quốc Anh — June 18, 2009 @ 19:22
