$Post :=
If[MatrixQ[#1],
MatrixForm[#1], #1] & (* outputs matrices in MatrixForm *)
Remove[ "Global`*"] // Quiet (* remove all variables *)
We studied in Chapter 1 (see the sections on transformations and rotations) some linear transformations in ℝ². In this section, we show that any 2 × 2 matrix (that is, any linear transformation of the plane) can be decomposed into a product of three kinds of matrices: rotations, scalings, and reflections.
2D Decompositions
We know that any rotation around the origin by angle θ is performed by multiplication from the left by the matrix
\begin{equation} \label{Eq2D.1}
\left[ \mathbf{R}_\theta \right] = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} .
\end{equation}
The main advantage of writing a rotation matrix in this form is that compositions of rotations can be evaluated by a simple shift of the angle (without matrix multiplication): \( \mathbf{R}_\theta \mathbf{R}_\phi = \mathbf{R}_{\theta + \phi} \). A scaling by factors a and b along the coordinate axes is performed by the matrix
\begin{equation} \label{Eq2D.2}
\left[ \mathbf{S} \right] = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} .
\end{equation}
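The angle-shift property of rotation compositions is easy to probe numerically. The following plain-Python sketch (offered only as an independent cross-check, not part of the Mathematica session; the helper names are our own) multiplies two rotation matrices and compares the result with a single rotation by the summed angle:

```python
import math

def rot(t):
    """Rotation matrix for a counterclockwise rotation by angle t."""
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta, phi = 0.7, 1.1
lhs = matmul(rot(theta), rot(phi))
rhs = rot(theta + phi)   # composition reduces to an angle shift
ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok)   # True
```

The check succeeds for any pair of angles, since it is just the angle-addition identities in matrix form.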
Linear maps can reflect objects across the x-axis, T(x, y) = (x, −y), or across the y-axis, T(x, y) = (−x, y), with multiplication (from the left) by the matrices
\[
\mathbf{R}_x = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \qquad \mbox{and} \qquad \mathbf{R}_y = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} ,
\tag{3}
\]
respectively. More generally, reflection across a line through the origin maps a vector v to
\[
T(\mathbf{v}) = \mathbf{v} - 2 \left( \mathbf{v} \cdot \mathbf{n} \right) \mathbf{n} = \left( \mathbf{I} - 2\, \mathbf{n}\, \mathbf{n}^{\mathrm T} \right) \mathbf{v} ,
\tag{4}
\]
where I is the identity matrix, v • n is the dot product of the two vectors, and the reflection line has unit normal vector n = (n₁, n₂). Mathematica has a built-in command for this purpose: ReflectionMatrix[v].
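The Householder formula I − 2 n nᵀ can be exercised with a short Python sketch (an illustration only, outside the Mathematica notebook; the helper names are our own). Reflection across the line y = x has unit normal (1, −1)/√2 and should swap the coordinates of any vector:

```python
import math

def reflection(n):
    """Householder reflection matrix I - 2 n n^T for a unit normal n = (n1, n2)."""
    n1, n2 = n
    return [[1 - 2*n1*n1,    -2*n1*n2],
            [   -2*n1*n2, 1 - 2*n2*n2]]

def apply(M, v):
    """Multiply a 2x2 matrix by a column vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

s = 1 / math.sqrt(2)
R = reflection((s, -s))           # reflect across the line y = x
w = apply(R, [3.0, 5.0])
print([round(x, 6) for x in w])   # [5.0, 3.0]: the coordinates are swapped
```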
We now implement these matrices in Mathematica.
refL[n_] := IdentityMatrix[2] - 2 Outer[Times, n, n](*Eq(4) above*)
refL[{1/Sqrt[2], 1/Sqrt[2]}]
R\[Theta][\[Theta]_] := {{Cos[\[Theta]], -Sin[\[Theta]]}, {Sin[\[Theta]], Cos[\[Theta]]}};
S[a_, b_] := {{a, 0}, {0, b}};
refX = {{1, 0}, {0, -1}};
refY = {{-1, 0}, {0, 1}};
refL[n1_, n2_] :=
IdentityMatrix[2] - 2 {{n1, n2}}\[Transpose] . {{n1, n2}};
theta = Pi/4; (* 45 degrees rotation *)
a = 2; b = 3; (*Note new scaling factors *)
n = {1, 1}/Sqrt[2]; (* normal vector for reflection *)
combinedTransformation =
  refL[n[[1]], n[[2]]] . S[a, b] . R\[Theta][theta]; (* reflection . scaling . rotation; note that order matters *)
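Since the comment warns that order matters, here is a small Python cross-check (independent of the Mathematica session; the matrices mirror the definitions above with a = 2, b = 3, θ = π/4, and normal (1, 1)/√2) showing that applying reflection, scaling, and rotation in the opposite order gives a different matrix:

```python
import math

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

c = math.cos(math.pi / 4)
rot45 = [[c, -c], [c, c]]   # rotation by 45 degrees (sin = cos at this angle)
scale = [[2, 0], [0, 3]]    # scaling factors a = 2, b = 3
refl  = [[0, -1], [-1, 0]]  # I - 2 n n^T with unit normal (1, 1)/sqrt(2)

left  = matmul(refl, matmul(scale, rot45))   # refL . S . R
right = matmul(rot45, matmul(scale, refl))   # R . S . refL
print(left != right)   # True: the factors do not commute
```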
Before we work out the general case of 2 × 2 matrices, we show in the following example that these three kinds of linear transformations (rotations, scalings, and reflections) need not commute.
Example 1:
Using Mathematica, we examine the commutation properties of these three basic transformations, (1) -- (4). We start with rotation matrices:
R = {{Cos[\[Theta]], - Sin[\[Theta]]}, {Sin[\[Theta]], Cos[\[Theta]]}}
Since refX.refY = refY.refX = −I, these two basic reflection matrices commute.
TrueQ[refX . refY == -IdentityMatrix[2]]
True
What about arbitrary reflections? To answer this question, we choose two noncollinear unit vectors n = (n₁, n₂) and m = (m₁, m₂) that are orthogonal to two straight lines through the origin. Assuming that these unit vectors are written in column form ( \( \mathbf{n}, \mathbf{m} \in \mathbb{R}^{2\times 1} \) ), the corresponding reflection matrices with respect to these two lines can be written in the succinct form
\[
\mathbf{R}_n = \mathbf{I} - 2\,\mathbf{n}\cdot \mathbf{n}^{\mathrm T} \qquad \mbox{and} \qquad \mathbf{R}_m = \mathbf{I} - 2\,\mathbf{m}\cdot \mathbf{m}^{\mathrm T} .
\]
Their product is
\[
\mathbf{R}_n \mathbf{R}_m = \mathbf{I} - 2\,\mathbf{n}\cdot \mathbf{n}^{\mathrm T} - 2\,\mathbf{m}\cdot \mathbf{m}^{\mathrm T} + 4\, \mathbf{n}\cdot \mathbf{n}^{\mathrm T} \,\mathbf{m}\cdot \mathbf{m}^{\mathrm T} .
\]
From this formula, it follows that the reflection matrices \( \mathbf{R}_n \) and \( \mathbf{R}_m \) commute if and only if
\[
\left( \mathbf{n}\cdot \mathbf{n}^{\mathrm T} \right) \left( \mathbf{m}\cdot \mathbf{m}^{\mathrm T} \right) = \left( \mathbf{m}\cdot \mathbf{m}^{\mathrm T} \right) \left( \mathbf{n}\cdot \mathbf{n}^{\mathrm T} \right) .
\tag{1.1}
\]
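Condition (1.1) can be probed numerically. The Python sketch below (an illustrative cross-check with helper names of our own choosing, not from the original notebook) forms the outer products n nᵀ and m mᵀ and tests whether they commute, once for orthogonal normals and once for a generic pair:

```python
import math

def outer(n):
    """Outer product n n^T of a 2-vector with itself."""
    return [[n[0]*n[0], n[0]*n[1]],
            [n[1]*n[0], n[1]*n[1]]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commute(n, m, tol=1e-12):
    """True when (n n^T)(m m^T) equals (m m^T)(n n^T) within tolerance."""
    PQ, QP = matmul(outer(n), outer(m)), matmul(outer(m), outer(n))
    return all(abs(PQ[i][j] - QP[i][j]) < tol for i in range(2) for j in range(2))

s = 1 / math.sqrt(2)
print(commute((s, s), (s, -s)))     # True: the normals (hence the lines) are orthogonal
print(commute((s, s), (1.0, 0.0)))  # False: a generic pair does not commute
```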
We rewrite Eq.(1.1) in coordinate form:
\[
\begin{bmatrix} n_1^2 & n_1 n_2 \\ n_2 n_1 & n_2^2 \end{bmatrix} \cdot \begin{bmatrix} m_1^2 & m_1 m_2 \\ m_2 m_1 & m_2^2 \end{bmatrix} = \begin{bmatrix} m_1^2 & m_1 m_2 \\ m_2 m_1 & m_2^2 \end{bmatrix} \cdot \begin{bmatrix} n_1^2 & n_1 n_2 \\ n_2 n_1 & n_2^2 \end{bmatrix} .
\tag{1.2}
\]
Generally speaking, Eq.(1.2) does not hold, and we conclude that reflection matrices do not commute unless the corresponding lines are orthogonal. For instance, let us choose the two lines y = x/√3 and y = −x/√3. These lines have the corresponding unit normal vectors:
\[
\mathbf{n} = \frac{1}{2} \begin{pmatrix} -1 \\ \sqrt{3} \end{pmatrix} , \qquad \mathbf{m} = \frac{1}{2} \begin{pmatrix} 1 \\ \sqrt{3} \end{pmatrix}
\]
Then the outer-product matrices entering their Householder formulas become
\[
\mathbf{A} = \mathbf{n} \,\mathbf{n}^{\mathrm T} = \frac{1}{4} \begin{pmatrix} 1 & -\sqrt{3} \\ -\sqrt{3} & 3 \end{pmatrix} ,
\]
and
\[
\mathbf{B} = \mathbf{m} \,\mathbf{m}^{\mathrm T} = \frac{1}{4} \begin{pmatrix} 1 & \sqrt{3} \\ \sqrt{3} & 3 \end{pmatrix} .
\]
Mathematica evaluates their commutator [A, B] = AB − BA (upon dropping the common factor ¼, which does not affect whether the commutator vanishes) to be
A = {{1, -Sqrt[3]}, {-Sqrt[3], 3}};
B = {{1, Sqrt[3]}, {Sqrt[3], 3}};
A . B - B . A
{{0, -4 Sqrt[3]}, {4 Sqrt[3], 0}}
Since for our matrices
\[
\begin{pmatrix} 1 & -\sqrt{3} \\ -\sqrt{3} & 3 \end{pmatrix} \begin{pmatrix} 1 & \sqrt{3} \\ \sqrt{3} & 3 \end{pmatrix}
- \begin{pmatrix} 1 & \sqrt{3} \\ \sqrt{3} & 3 \end{pmatrix} \begin{pmatrix} 1 & -\sqrt{3} \\ -\sqrt{3} & 3 \end{pmatrix}
= \begin{pmatrix} 0 & - 4\sqrt{3} \\ 4\sqrt{3} & 0 \end{pmatrix} ,
\]
we conclude that the reflection matrices with respect to lines having normal vectors n and m do not commute.
Now we multiply the basic reflection matrices by scaling matrices:
\[
\mathbf{R}_x \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} = \begin{bmatrix} a & 0 \\ 0 & -b \end{bmatrix} , \qquad \mathbf{R}_y \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} = \begin{bmatrix} -a & 0 \\ 0 & b \end{bmatrix} .
\]
Thus, a scaling matrix whose diagonal entries are allowed to be negative (a "general rescaling") absorbs the reflections about the axes. Consider now an arbitrary matrix
\[
\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} ,
\]
whose columns \( \mathbf{v} = (a, c)^{\mathrm T} \) and \( \mathbf{u} = (b, d)^{\mathrm T} \) we write in polar form:
\[
\mathbf{A} = \begin{bmatrix} \| \mathbf{v} \| \cos\theta & \| \mathbf{u} \| \cos\phi \\ \| \mathbf{v} \| \sin\theta & \| \mathbf{u} \| \sin\phi \end{bmatrix} ,
\]
where \( \displaystyle \quad \| \mathbf{v} \| = + \sqrt{a^2 + c^2} , \quad \| \mathbf{u} \| = + \sqrt{b^2 + d^2} \quad \) are the Euclidean norms of the vectors v and u, respectively. As usual, we choose the positive branch of the square root in the definition of the norms.
So each column of A is treated as a point in the xy-plane at distance ∥v∥ or ∥u∥ from the origin, with angle θ or ϕ, respectively, measured counterclockwise from the positive x-axis. Hence, it appears that we have rewritten our original matrix A in a more complex form; however, the point was to decompose the matrix, of which we have now accomplished the first step:
\[
\mathbf{A} = \begin{bmatrix} \cos\theta & \cos\phi \\ \sin\theta & \sin\phi \end{bmatrix} \begin{bmatrix} \| \mathbf{v} \| & 0 \\ 0 & \| \mathbf{u} \| \end{bmatrix} .
\]
The second matrix on the right-hand side of this equation is a rescaling, but the first factor is not a rotation. So we need to rewrite the first matrix as a product of rotations and rescalings. To do this, we use the substitution ψ = ½(θ + ϕ) and χ = ½(θ − ϕ). The reverse transformations are θ = ψ + χ and ϕ = ψ − χ. Hence, using the angle-addition formulas, we can write
\[
\begin{bmatrix} \cos\theta & \cos\phi \\ \sin\theta & \sin\phi \end{bmatrix} = \begin{bmatrix} \cos (\psi + \chi ) & \cos (\psi - \chi ) \\ \sin (\psi + \chi ) & \sin (\psi - \chi ) \end{bmatrix} = \begin{bmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{bmatrix} \begin{bmatrix} \cos\chi & \cos\chi \\ \sin\chi & -\sin\chi \end{bmatrix} .
\]
The first matrix on the last line is just a simple rotation by angle ψ; however, the second is not in the standard form of a rotation. So we now turn our attention to the second matrix above. Note that
\[
\begin{bmatrix} \cos\chi & \cos\chi \\ \sin\chi & -\sin\chi \end{bmatrix} = \begin{bmatrix} \cos\chi & 0 \\ 0 & \sin\chi \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} ,
\]
where the first matrix on the right-hand side of the latter equation is simply a rescaling by the value of cos(χ) in the x-direction and sin(χ) in the y-direction. The right matrix, however, is not of one of the three types of matrices discussed, but we can decompose it as follows:
\[
\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \sqrt{2} \begin{bmatrix} \cos\frac{\pi}{4} & -\sin\frac{\pi}{4} \\ \sin\frac{\pi}{4} & \cos\frac{\pi}{4} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} .
\]
So the matrix A can be written as a product of two rotations and two general rescalings, where general rescalings include the reflections about the x-axis, the y-axis, and the origin, as well as ordinary positive rescalings:
\[
\mathbf{A} = \begin{bmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{bmatrix} \begin{bmatrix} \sqrt{2}\,\cos\chi & 0 \\ 0 & \sqrt{2}\,\sin\chi \end{bmatrix} \begin{bmatrix} \cos\frac{\pi}{4} & -\sin\frac{\pi}{4} \\ \sin\frac{\pi}{4} & \cos\frac{\pi}{4} \end{bmatrix} \begin{bmatrix} \| \mathbf{v} \| & 0 \\ 0 & - \| \mathbf{u} \| \end{bmatrix} ,
\]
where \( \displaystyle \quad \| \mathbf{v}\| = +\sqrt{a^2 + c^2} , \quad \| \mathbf{u}\| = +\sqrt{b^2 + d^2} . \quad \) The angles ψ = ½(θ + ϕ) and χ = ½(θ − ϕ) are expressed through the polar angles of v and u, determined by tan θ = c/a and tan ϕ = d/b.
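The whole derivation condenses into a short algorithm. The Python sketch below (a minimal illustration with function names of our own choosing, assuming both columns of A are nonzero so the polar angles are defined) computes the four factors and verifies that their product reproduces A:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rot(t):
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def decompose(A):
    """Split A into rotation . rescaling . rotation . rescaling (nonzero columns assumed)."""
    (a, b), (c, d) = A
    nv, nu = math.hypot(a, c), math.hypot(b, d)      # column norms ||v||, ||u||
    theta, phi = math.atan2(c, a), math.atan2(d, b)  # polar angles of the columns
    psi, chi = (theta + phi) / 2, (theta - phi) / 2
    A1 = rot(psi)
    A2 = [[math.sqrt(2) * math.cos(chi), 0], [0, math.sqrt(2) * math.sin(chi)]]
    A3 = rot(math.pi / 4)
    A4 = [[nv, 0], [0, -nu]]
    return A1, A2, A3, A4

A = [[4, 1], [2, 3]]                      # an arbitrary test matrix
A1, A2, A3, A4 = decompose(A)
P = matmul(A1, matmul(A2, matmul(A3, A4)))
print(all(abs(P[i][j] - A[i][j]) < 1e-9 for i in range(2) for j in range(2)))  # True
```

Negative diagonal entries may appear in A₂ or A₄; these are exactly the "general rescalings" that absorb reflections.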
Example 2:
Let us perform this decomposition with
\[
{\bf A} = \begin{bmatrix} 5& 2 \\ 3& 1 \end{bmatrix} .
\]
We extract its column vectors v = (5, 3)ᵀ and u = (2, 1)ᵀ, with norms ∥v∥ = √34 and ∥u∥ = √5.
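For reference, the numerical quantities entering the decomposition can be reproduced with a short Python sketch (an independent cross-check, not the original Mathematica output):

```python
import math

A = [[5, 2], [3, 1]]
v = (A[0][0], A[1][0])                     # first column  (5, 3)
u = (A[0][1], A[1][1])                     # second column (2, 1)
nv, nu = math.hypot(*v), math.hypot(*u)    # sqrt(34) and sqrt(5)
theta, phi = math.atan2(v[1], v[0]), math.atan2(u[1], u[0])
psi, chi = (theta + phi) / 2, (theta - phi) / 2

print(round(math.cos(psi), 6))                 # 0.876606  (entry of A1 below)
print(round(math.sqrt(2) * math.cos(chi), 5))  # 1.41317   (entry of A2 below)
```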
Now we see directly that our decomposition of a linear map into a product of rotations and
general rescalings works. In particular, we can write A as A = A₁A₂A₃A₄ with
\begin{align*}
\mathbf{A}_1 &= \begin{bmatrix} \cos\psi & - \sin\psi \\ \sin\psi & \cos\psi \end{bmatrix} \approx \begin{bmatrix} 0.876606 & -0.481209 \\ 0.481209 & 0.876606 \end{bmatrix} ,
\\
\mathbf{A}_2 &= \begin{bmatrix} \sqrt{2}\cos \chi & 0 \\ 0 & \sqrt{2}\sin\chi \end{bmatrix} \approx \begin{bmatrix} 1.41317 & 0 \\ 0 & 0.0542726 \end{bmatrix} ,
\\
\mathbf{A}_3 &= \begin{bmatrix} \cos \left( \frac{\pi}{4} \right) & -\sin \left( \frac{\pi}{4} \right) \\ \sin \left( \frac{\pi}{4} \right) & \cos \left( \frac{\pi}{4} \right) \end{bmatrix} \approx \begin{bmatrix} 0.707107 & - 0.707107 \\ 0.707107 & 0.707107 \end{bmatrix} ,
\\
\mathbf{A}_4 &= \begin{bmatrix} \sqrt{34} & 0 \\ 0 & -\sqrt{5} \end{bmatrix} \approx \begin{bmatrix} 5.83095 & 0 \\ 0 & -2.23607 \end{bmatrix} .
\end{align*}
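As a sanity check, multiplying the four approximate factors should reproduce A to the displayed precision. The Python sketch below (illustrative only; note the minus sign coming from −√5 in A₄) confirms this:

```python
def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A1 = [[0.876606, -0.481209], [0.481209, 0.876606]]
A2 = [[1.41317, 0], [0, 0.0542726]]
A3 = [[0.707107, -0.707107], [0.707107, 0.707107]]
A4 = [[5.83095, 0], [0, -2.23607]]   # the minus sign comes from -sqrt(5)

P = matmul(A1, matmul(A2, matmul(A3, A4)))
target = [[5, 2], [3, 1]]
ok = all(abs(P[i][j] - target[i][j]) < 1e-3 for i in range(2) for j in range(2))
print(ok)   # True
```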
This decomposition cannot be reordered, as you are asked to verify in Exercise 2. We check with Mathematica that the given matrix A decomposes into the product of these four matrices.
In Example 2, it was shown that the given matrix A can be decomposed into a product of four matrices, A = A₁A₂A₃A₄. Determine whether any of the following reorderings agree with the decomposition of matrix A.
where j is the imaginary unit of the complex plane ℂ, so j² = −1. Decompose the real part and the imaginary part of the matrix A according to formula \eqref{Eq2D.5}.