$Post := If[MatrixQ[#1], MatrixForm[#1], #1] & (* output matrices in MatrixForm *)
Remove["Global`*"] // Quiet (* remove all variables *)
This entire web page, including all Wolfram Language code, is available for download at this link.

We studied in Chapter 1 (see the sections on transformations and rotations) some linear transformations in ℝ². In this section, we show that any 2 × 2 matrix (that is, any linear transformation of the plane) can be decomposed into a product of rotation, scaling, and reflection matrices.

2D Decompositions

We know that any rotation around the origin by angle θ is performed by multiplication from the left by the matrix
\begin{equation} \label{Eq2D.1} \left[ \mathbf{R}_{\theta} \right] = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \phantom{-}\cos\theta \end{bmatrix} . \end{equation}
We start by defining the rotation matrix for an angle θ:
Clear[a, b, \[Theta], \[CurlyPhi], n];
R\[Theta][\[Theta]_] := {{Cos[\[Theta]], -Sin[\[Theta]]}, {Sin[\[Theta]], Cos[\[Theta]]}};
R\[Theta][\[Theta]]
The main advantage of writing a rotation matrix in this form is that a composition of rotations can be evaluated by simply adding the angles (without matrix multiplication):
\[ \left[ \mathbf{R}_{\theta} \right] \left[ \mathbf{R}_{\phi} \right] = \left[ \mathbf{R}_{\phi} \right] \left[ \mathbf{R}_{\theta} \right] = \left[ \mathbf{R}_{\theta + \phi} \right] = \begin{bmatrix} \cos (\theta + \phi ) & -\sin (\theta + \phi ) \\ \sin (\theta + \phi ) & \phantom{-}\cos (\theta + \phi ) \end{bmatrix} . \]
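We can ask Mathematica to confirm this angle-addition property; here is a quick check using the function R\[Theta] defined above:
FullSimplify[R\[Theta][\[Theta]] . R\[Theta][\[CurlyPhi]] == R\[Theta][\[Theta] + \[CurlyPhi]]]
True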
Nonuniform scaling is performed by the matrix
\begin{equation} \label{Eq2D.2} \left[ \mathbf{S} \right] = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} . \end{equation}
Linear maps can reflect objects across the x-axis, T(x, y) = (x, −y), or across the y-axis, T(x, y) = (−x, y), with matrix multiplication (from the left) given, respectively, by
\begin{equation} \label{Eq2D.3} \left[ \mathbf{refX} \right] = \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -1 \end{bmatrix} \quad\mbox{and} \quad \left[ \mathbf{refY} \right] = \begin{bmatrix} -1 & 0 \\ \phantom{-}0 & 1 \end{bmatrix} . \end{equation}
In general, reflection with respect to a line L with unit normal vector n is given by the Householder reflection formula:
\begin{equation} \label{Eq2D.4} \mathbf{refL} \left( \mathbf{v} \right) = \mathbf{v} - 2\mathbf{n} \left( \mathbf{v} \bullet \mathbf{n} \right) \quad \Longrightarrow \quad \left[ \mathbf{refL} \right] = \mathbf{I} - 2 \begin{bmatrix} n_1^2 & n_1 n_2 \\ n_1 n_2 & n_2^2 \end{bmatrix} , \end{equation}
where I is the identity matrix, v • n is the dot product of the two vectors, and the reflection line has unit normal vector n = (n₁, n₂). Mathematica has a built-in command: ReflectionMatrix[v].

We present some examples of these matrices.

refL[n_] := IdentityMatrix[2] - 2 Outer[Times, n, n](*Eq(4) above*)
refL[{1/Sqrt[2], 1/Sqrt[2]}]
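The built-in ReflectionMatrix normalizes its argument internally, so it should reproduce the same matrix; a quick cross-check:
ReflectionMatrix[{1, 1}]
{{0, -1}, {-1, 0}}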
R\[Theta][\[Theta]_] := {{Cos[\[Theta]], -Sin[\[Theta]]}, {Sin[\[Theta]], Cos[\[Theta]]}};
S[a_, b_] := {{a, 0}, {0, b}};
refX = {{1, 0}, {0, -1}};
refY = {{-1, 0}, {0, 1}};
refL[n1_, n2_] := IdentityMatrix[2] - 2 {{n1, n2}}\[Transpose] . {{n1, n2}}; (* Eq.(4) above *)
theta = Pi/4; (* 45 degree rotation *)
a = 2; b = 3; (* scaling factors *)
n = {1, 1}/Sqrt[2]; (* unit normal vector for reflection *)
combinedTransformation = refL[n[[1]], n[[2]]] . S[a, b] . R\[Theta][theta]; (* note that order matters *)
rotMat = R\[Theta][theta]; scaleMat = S[a, b];
refMatX = refX; refMatY = refY; householderMat = refL[n];
Grid[{
  {"Rotation Matrix", rotMat},
  {"Scaling Matrix", scaleMat},
  {"Reflection Matrix X", refMatX},
  {"Reflection Matrix Y", refMatY},
  {"Householder Reflection Matrix", householderMat},
  {"Combined Transformation", combinedTransformation}
 }, Frame -> All]
Rotation Matrix \( \displaystyle \quad \left\{ \left\{ \frac{1}{\sqrt{2}} , \ -\frac{1}{\sqrt{2}} \right\} , \left\{ \frac{1}{\sqrt{2}} , \ \frac{1}{\sqrt{2}} \right\} \right\} \)
Scaling Matrix \( \displaystyle \quad \left\{ \left\{ 2, \ 0 \right\} ,\ \left\{ 0,\ 3 \right\} \right\} \)
Reflection Matrix X \( \displaystyle \quad \left\{ \left\{ 1,\ 0 \right\} ,\ \left\{ 0,\ -1 \right\} \right\} \)
Reflection Matrix Y \( \displaystyle \quad \left\{ \left\{ -1,\ 0 \right\} , \ \left\{0, \ 1 \right\} \right\} \)
Householder Reflection Matrix \( \displaystyle \quad \left\{ \left\{ 0, \ -1 \right\} , \ \left\{ -1, \ 0 \right\} \right\} \)
Combined Transformation \( \displaystyle \quad \left\{ \left\{ -\frac{3}{\sqrt{2}} , \ -\frac{3}{\sqrt{2}} \right\} , \ \left\{ -\sqrt{2}, \ \sqrt{2} \right\} \right\} \)

Before we work out the general case of 2 × 2 matrices, we show in the following example that these three basic linear transformations (rotation, scaling, and reflection) do not always commute.

Example 1: Using Mathematica, we examine whether each pair of the three basic transformations (1)–(4) commutes. We start with rotation matrices:

Clear[R, S, a, b, \[Theta], \[Phi]]; (* clear the numeric values assigned above *)
R = {{Cos[\[Theta]], -Sin[\[Theta]]}, {Sin[\[Theta]], Cos[\[Theta]]}}
Multiplying two rotation matrices, we get
TrigFactor[R . (R /. \[Theta] -> \[Phi])]
\( \displaystyle \quad \begin{pmatrix} \cos (\theta + \phi ) & -\sin (\theta + \phi ) \\ \sin (\theta + \phi ) & \phantom{-}\cos (\theta + \phi ) \end{pmatrix} \)
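Both orders of multiplication can be compared directly:
FullSimplify[R . (R /. \[Theta] -> \[Phi]) == (R /. \[Theta] -> \[Phi]) . R]
True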
So we see that rotation matrices commute. Now we define a scaling matrix
S = {{a,0}, {0,b}}
Since S is a diagonal matrix, we conclude that scaling matrices also commute. Now we check products of rotation and scaling matrices:
R.S
\( \displaystyle \quad \begin{pmatrix} a\,\cos \theta & - b\,\sin\theta \\ a\,\sin\theta & b\,\cos\theta \end{pmatrix} \)
and
S.R
\( \displaystyle \quad \begin{pmatrix} a\,\cos \theta & - a \,\sin\theta \\ b\,\sin\theta & b\,\cos\theta \end{pmatrix} \)
From the above two matrix multiplications, we see that rotations and rescalings do not commute because the products S R and R S are not equal.
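The commutator makes the failure explicit and also shows exactly when the two maps do commute: it vanishes only for a uniform scaling (a = b) or a trivial rotation (sin θ = 0).
Simplify[R . S - S . R]
{{0, (a - b) Sin[\[Theta]]}, {(a - b) Sin[\[Theta]], 0}}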

We consider two basic reflection matrices

Clear[refX, refY];
refX = {{1, 0}, {0, -1}};
refY = {{-1, 0}, {0, 1}};
Their products are the same, as Mathematica confirms:
refX . refY
\( \displaystyle \quad \begin{pmatrix} -1&0 \\ 0 & -1 \end{pmatrix} \)
refY . refX
\( \displaystyle \quad \begin{pmatrix} -1&0 \\ 0 & -1 \end{pmatrix} \)
Since refX.refY = refY.refX = −I, these two basic reflection matrices commute.
TrueQ[refX . refY == -IdentityMatrix[2]]
True
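Note that −I is itself the rotation by angle π, so the product of these two basic reflections is a rotation; a quick check using R\[Theta] defined earlier:
TrueQ[refX . refY == R\[Theta][Pi]]
True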
What about arbitrary reflections? To answer this question, we choose two noncollinear unit vectors n = (n₁, n₂) and m = (m₁, m₂) that are normal to two straight lines. Assuming that these unit vectors are written in column form (n, m ∈ ℝ2×1), the corresponding reflection matrices with respect to these two lines can be written in the succinct form \[ \mathbf{R}_n = \mathbf{I} - 2\,\mathbf{n}\cdot \mathbf{n}^{\mathrm T} \qquad \mbox{and} \qquad \mathbf{R}_m = \mathbf{I} - 2\,\mathbf{m}\cdot \mathbf{m}^{\mathrm T} . \] Their product is \[ \mathbf{R}_n \mathbf{R}_m = \mathbf{I} - 2\,\mathbf{n}\cdot \mathbf{n}^{\mathrm T} - 2\,\mathbf{m}\cdot \mathbf{m}^{\mathrm T} + 4\, \mathbf{n}\cdot \mathbf{n}^{\mathrm T} \,\mathbf{m}\cdot \mathbf{m}^{\mathrm T} . \] From this formula, it follows that the reflection matrices Rn and Rm commute if and only if \[ \left( \mathbf{n}\cdot \mathbf{n}^{\mathrm T} \right) \left( \mathbf{m}\cdot \mathbf{m}^{\mathrm T} \right) = \left( \mathbf{m}\cdot \mathbf{m}^{\mathrm T} \right) \left( \mathbf{n}\cdot \mathbf{n}^{\mathrm T} \right) . \tag{1.1} \] We rewrite Eq.(1.1) in coordinate form: \[ \begin{bmatrix} n_1^2 & n_1 n_2 \\ n_2 n_1 & n_2^2 \end{bmatrix} \cdot \begin{bmatrix} m_1^2 & m_1 m_2 \\ m_2 m_1 & m_2^2 \end{bmatrix} = \begin{bmatrix} m_1^2 & m_1 m_2 \\ m_2 m_1 & m_2^2 \end{bmatrix} \cdot \begin{bmatrix} n_1^2 & n_1n_2 \\ n_2 n_1 & n_2^2 \end{bmatrix} . \tag{1.2} \] Generally speaking, Eq.(1.2) does not hold, and we conclude that reflection matrices do not commute unless the corresponding lines are perpendicular or parallel. For instance, let us choose the two lines y = x/√3 and y = −x/√3. These lines have the corresponding unit normal vectors \[ \mathbf{n} = \frac{1}{2} \begin{pmatrix} -1 \\ \sqrt{3} \end{pmatrix} , \qquad \mathbf{m} = \frac{1}{2} \begin{pmatrix} 1 \\ \sqrt{3} \end{pmatrix} . \] Their outer products, which by Eq.(1.1) determine whether the reflections commute, are \[ \mathbf{A} = \mathbf{n} \,\mathbf{n}^{\mathrm T} = \frac{1}{4} \begin{pmatrix} 1 & -\sqrt{3} \\ -\sqrt{3} & 3 \end{pmatrix} \] and \[ \mathbf{B} = \mathbf{m} \,\mathbf{m}^{\mathrm T} = \frac{1}{4} \begin{pmatrix} 1 & \sqrt{3} \\ \sqrt{3} & 3 \end{pmatrix} . \] Mathematica evaluates their commutator [A, B] = AB − BA (dropping the common factor ¼, which does not affect whether the commutator vanishes) to be
A = {{1, -Sqrt[3]}, {-Sqrt[3], 3}}; B = {{1, Sqrt[3]}, {Sqrt[3], 3}};
A . B - B . A
{{0, -4 Sqrt[3]}, {4 Sqrt[3], 0}}
Since for our matrices \[ \begin{pmatrix} 1 & -\sqrt{3} \\ -\sqrt{3} & 3 \end{pmatrix} \begin{pmatrix} 1 & \sqrt{3} \\ \sqrt{3} & 3 \end{pmatrix} - \begin{pmatrix} 1 & \sqrt{3} \\ \sqrt{3} & 3 \end{pmatrix} \begin{pmatrix} 1 & -\sqrt{3} \\ -\sqrt{3} & 3 \end{pmatrix} = \begin{pmatrix} 0 & - 4\sqrt{3} \\ 4\sqrt{3} & 0 \end{pmatrix} , \] we conclude that reflection matrices with respect to lines having normal vectors n and m do not commute.
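By contrast, reflections about two mutually perpendicular lines always commute. A quick symbolic check, using the one-argument refL defined earlier and two orthogonal unit normals depending on an arbitrary angle α:
n = {Cos[\[Alpha]], Sin[\[Alpha]]}; m = {-Sin[\[Alpha]], Cos[\[Alpha]]};
Simplify[refL[n] . refL[m] - refL[m] . refL[n]]
{{0, 0}, {0, 0}}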

Now we multiply basic reflection matrices by scaling matrices

S.refX
\( \displaystyle \quad \begin{pmatrix} a & 0 \\ 0 & -b \end{pmatrix} \)
and
refX.S
\( \displaystyle \quad \begin{pmatrix} a & 0 \\ 0 & -b \end{pmatrix} \)
So scaling and reflection with respect to the x-axis commute. A similar conclusion holds for reflection with respect to the y-axis.

On the other hand, products of a reflection matrix and the rotation matrix depend on the order of multiplication:

refX.R
\( \displaystyle \quad \begin{pmatrix} \cos\theta & -\sin\theta \\ -\sin\theta & -\cos\theta \end{pmatrix} \)
and
R.refX
\( \displaystyle \quad \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix} \)
Similarly,
refY.R
\( \displaystyle \quad \begin{pmatrix} -\cos\theta & \sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \)
and
R.refY
\( \displaystyle \quad \begin{pmatrix} -\cos\theta & -\sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \)
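In fact, the failure to commute is systematic: moving the reflection refX past a rotation reverses the rotation angle, as Mathematica confirms:
FullSimplify[refX . R == (R /. \[Theta] -> -\[Theta]) . refX]
True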
Therefore, we conclude that rotation and reflection matrices do not commute.    ■
End of Example 1
    Using the above basic matrices, we now attempt to decompose an arbitrary linear map given in matrix form by
\[ \mathbf{A} = \begin{bmatrix} a& b \\ c&d \end{bmatrix} . \]
The first step is to recognize that the matrix A can be expressed as a list of column vectors:
\[ \mathbf{A} = \begin{bmatrix} \mathbf{v} & \mathbf{u} \end{bmatrix} , \qquad \mbox{with} \qquad \mathbf{v} = \begin{bmatrix} a \\ c \end{bmatrix} , \quad \mathbf{u} = \begin{bmatrix} b \\ d \end{bmatrix} . \]
Clear[A, B, v, u, a, b, c, d];
v = {{a, c}}; u = {{b, d}};
A = Transpose[{v[[1]], u[[1]]}]
\( \displaystyle \quad \begin{pmatrix} a & b \\ c & d \end{pmatrix} \)
Then writing column vectors v, u in polar coordinates, we get
\[ \mathbf{A} = \begin{bmatrix} a& b \\ c&d \end{bmatrix} = \begin{bmatrix} \| \mathbf{v} \|\,\cos\theta & \| \mathbf{u} \|\,\cos\phi \\ \| \mathbf{v} \|\,\sin\theta & \| \mathbf{u} \|\,\sin\phi \end{bmatrix} , \]
(* A' cannot be used as a variable name: it parses as Derivative[1][A], so we call the polar form Apolar *)
Apolar = {{Sqrt[a^2 + c^2] Cos[\[Theta]], Sqrt[b^2 + d^2] Cos[\[Phi]]}, {Sqrt[a^2 + c^2] Sin[\[Theta]], Sqrt[b^2 + d^2] Sin[\[Phi]]}};
TrueQ[A == Apolar]
False
(The test returns False because θ and ϕ are independent symbols here, not yet tied to a, b, c, d.)
Afactored = {{Cos[\[Theta]], Cos[\[Phi]]}, {Sin[\[Theta]], Sin[\[Phi]]}} . {{Sqrt[a^2 + c^2], 0}, {0, Sqrt[b^2 + d^2]}};
TrueQ[Apolar == Afactored]
True
where \( \displaystyle \quad \| \mathbf{v} \| = + \sqrt{a^2 + c^2} , \quad \| \mathbf{u} \| = + \sqrt{b^2 + d^2} \quad \) are the Euclidean norms of vectors v and u, respectively. As usual, we choose the positive branch of the square root in the definition of the norms. So each column of A is treated as a point in the xy-plane at distance ∥v∥ or ∥u∥ from the origin, with angles θ and ϕ measured counterclockwise from the positive x-axis, respectively. Hence, it appears that we have rewritten our original matrix A in a more complex form; however, the point was to decompose the matrix, of which we have now accomplished the first step:
\[ \mathbf{A} = \begin{bmatrix} \| \mathbf{v} \|\,\cos\theta & \| \mathbf{u} \|\,\cos\phi \\ \| \mathbf{v} \|\,\sin\theta & \| \mathbf{u} \|\,\sin\phi \end{bmatrix} = \begin{bmatrix} \cos\theta & \cos\phi \\ \sin\theta & \sin\phi \end{bmatrix} \cdot \begin{bmatrix} \| \mathbf{v} \| & 0 \\ 0 & \| \mathbf{u} \| \end{bmatrix} . \]
The second matrix on the right-hand side of this equation is a rescaling, but the first factor is not a rotation. So we need to rewrite the first matrix as a product of rotations and rescalings. To do this, we use the substitution     ψ = ½(θ + ϕ)   and   χ = ½(θ − ϕ). The reverse transformations are θ = ψ + χ   and   ϕ = ψ − χ. Hence, we can write
\[ \begin{bmatrix} \cos\theta & \cos\phi \\ \sin\theta & \sin\phi \end{bmatrix} = \begin{bmatrix} \cos\left( \psi + \chi \right) & \cos\left( \psi - \chi \right) \\ \sin\left( \psi + \chi \right) & \sin\left( \psi - \chi \right) \end{bmatrix} . \]
Clear[\[Theta], \[Phi], \[Psi], \[Chi]];
\[Psi] = 1/2 (\[Theta] + \[Phi]); \[Chi] = 1/2 (\[Theta] - \[Phi]);
matrix = {{Cos[\[Theta]], Cos[\[Phi]]}, {Sin[\[Theta]], Sin[\[Phi]]}};
transformedMatrix = {{Cos[\[Psi] + \[Chi]], Cos[\[Psi] - \[Chi]]}, {Sin[\[Psi] + \[Chi]], Sin[\[Psi] - \[Chi]]}};
FullSimplify[transformedMatrix == matrix]
True
With the usual trigonometric identities for sum and difference angles
\[ \begin{split} \cos\left( \psi + \chi \right) &= \cos\psi\,\cos\chi - \sin\psi\,\sin\chi , \\ \sin \left( \psi + \chi \right) &= \sin\psi\,\cos\chi + \cos\psi\,\sin\chi , \end{split} \]
we can rewrite the matrix
\begin{align*} \mathbf{B} &= \begin{bmatrix} \cos\theta & \cos\phi \\ \sin\theta & \sin\phi \end{bmatrix} \\ &= \begin{bmatrix} \cos\psi\,\cos\chi - \sin\psi\,\sin\chi & \cos\psi\,\cos\chi + \sin\psi\,\sin\chi \\ \sin\psi\,\cos\chi + \cos\psi\,\sin\chi & \sin\psi\,\cos\chi - \cos\psi\,\sin\chi \end{bmatrix} \\ &= \begin{bmatrix} \cos\psi & - \sin \psi \\ \sin\psi & \cos\psi \end{bmatrix} \cdot \begin{bmatrix} \cos\chi & \cos\chi \\ \sin\chi & - \sin\chi \end{bmatrix} . \end{align*}
{{Cos[psi], -Sin[psi]}, {Sin[psi], Cos[psi]}}.{{Cos[chi],Cos[chi]}, {Sin[chi], -Sin[chi]}};
FullSimplify[%]
{{Cos[chi + psi], Cos[chi - psi]}, {Sin[chi + psi], -Sin[chi - psi]}}
The first matrix on the last line is just a simple rotation by angle ψ; however, the second is not in the standard form of a rotation. So we now turn our attention to the second matrix above. Note that
\[ \begin{bmatrix} \cos\chi & \cos\chi \\ \sin\chi & - \sin\chi \end{bmatrix} = \begin{bmatrix} \cos\chi & 0 \\ 0 & \sin\chi \end{bmatrix} \cdot \begin{bmatrix} 1 & \phantom{-}1 \\ 1 & -1 \end{bmatrix} \]
{{Cos[chi], 0}, {0, Sin[chi]}}.{{1, 1}, {1, -1}}
{{Cos[chi], Cos[chi]}, {Sin[chi], -Sin[chi]}}
where the first matrix on the right-hand side of the latter equation is simply a rescaling by cos(χ) in the x-direction and sin(χ) in the y-direction. The second matrix, however, is not one of the three types of matrices discussed, but we can decompose it as follows:
\begin{align*} \begin{bmatrix} 1 & \phantom{-}1 \\ 1 & -1 \end{bmatrix} &= \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \\ &= \begin{bmatrix} \sqrt{2} & 0 \\ 0 & \sqrt{2} \end{bmatrix} \cdot \begin{bmatrix} \frac{1}{\sqrt{2}} & - \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} \cdot \begin{bmatrix} 1& \phantom{-}0 \\ 0 & -1 \end{bmatrix} , \end{align*}
{{1, -1}, {1, 1}}.{{1, 0}, {0, -1}}
{{1, 1}, {1, -1}}
{{Sqrt[2], 0}, {0, Sqrt[2]}}.{{1/Sqrt[2], -1/Sqrt[2]}, {1/Sqrt[2], 1/Sqrt[2]}}.{{1, 0}, {0, -1}}
{{1, 1}, {1, -1}}
where
\[ \begin{bmatrix} \frac{1}{\sqrt{2}} & - \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} = \begin{bmatrix} \cos\left( \frac{\pi}{4} \right) & - \sin \left( \frac{\pi}{4} \right) \\ \sin \left( \frac{\pi}{4} \right) & \cos \left( \frac{\pi}{4} \right) \end{bmatrix} \]
is the rotation matrix by angle π/4. We can now represent our original matrix A in terms of rotations and rescalings:
\begin{align*} \mathbf{A} &= \begin{bmatrix} a & b \\ c & d \end{bmatrix} \\ &= \begin{bmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi\end{bmatrix} \cdot \begin{bmatrix} \sqrt{2}\,\cos\chi & 0 \\ 0 & \sqrt{2}\,\sin\chi \end{bmatrix} \\ &\quad \times \begin{bmatrix} \cos \left( \frac{\pi}{4} \right) & -\sin \left( \frac{\pi}{4} \right) \\ \sin \left( \frac{\pi}{4} \right) & \cos \left( \frac{\pi}{4} \right) \end{bmatrix} \cdot \begin{bmatrix} \| \mathbf{v}\| & 0 \\ 0 & -\| \mathbf{u}\| \end{bmatrix} \end{align*}
because
\[ \begin{bmatrix} \| \mathbf{v}\| & 0 \\ 0& -\| \mathbf{u}\| \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} \cdot \begin{bmatrix} \| \mathbf{v}\| &0 \\ 0 & \| \mathbf{u}\| \end{bmatrix} . \]
So the matrix A can be written as a product of two rotations and two general rescalings, where general rescalings include the reflections about the x-axis, y-axis, and the origin, as well as normal positive rescalings.
\begin{equation} \label{Eq2D.5} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{bmatrix} \cdot \begin{bmatrix} \sqrt{2}\,\cos\chi & 0 \\ 0 & \sqrt{2}\,\sin\chi \end{bmatrix} \cdot \begin{bmatrix} \frac{1}{\sqrt{2}} & - \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} \cdot \begin{bmatrix} \| \mathbf{v}\| &0 \\ 0&-\| \mathbf{u}\| \end{bmatrix} , \end{equation}
where \( \displaystyle \quad \| \mathbf{v}\| = +\sqrt{a^2 + c^2} , \quad \| \mathbf{u}\| = +\sqrt{b^2 + d^2} . \quad \) The angles ψ = ½(θ + ϕ) and χ = ½(θ − ϕ) are expressed through the polar angles of v and u, which satisfy tanθ = c/a and tanϕ = d/b.
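The whole procedure is easy to automate. Here is a minimal sketch of a helper (the name decompose2D is ours, not a built-in) that returns the four factors of the product \eqref{Eq2D.5} for a given 2 × 2 matrix, assuming both columns are nonzero so that the polar angles are well defined (compare Exercise 3):
decompose2D[mat_] := Module[{v, u, nv, nu, th, ph, ps, ch},
  {v, u} = Transpose[mat]; (* columns of the matrix *)
  nv = Norm[v]; nu = Norm[u]; (* Euclidean norms ||v||, ||u|| *)
  th = ArcTan[v[[1]], v[[2]]]; (* polar angle of the first column *)
  ph = ArcTan[u[[1]], u[[2]]]; (* polar angle of the second column *)
  ps = (th + ph)/2; ch = (th - ph)/2;
  {{{Cos[ps], -Sin[ps]}, {Sin[ps], Cos[ps]}}, (* rotation by psi *)
   {{Sqrt[2] Cos[ch], 0}, {0, Sqrt[2] Sin[ch]}}, (* rescaling *)
   {{1, -1}, {1, 1}}/Sqrt[2], (* rotation by Pi/4 *)
   {{nv, 0}, {0, -nu}}}] (* rescaling with reflection *)
Applying it to the matrix of Example 2 below and multiplying the four factors back together recovers the original matrix:
Chop[N[Dot @@ decompose2D[{{5, 2}, {3, 1}}]]]
{{5., 2.}, {3., 1.}}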

   
Example 2: Let us perform this decomposition with \[ {\bf A} = \begin{bmatrix} 5& 2 \\ 3& 1 \end{bmatrix} . \] We extract its column vectors
col1 = {{5}, {3}}; col2 = {{2}, {1}};
Their (Euclidean) norms are
P = Norm[col1]
\( \displaystyle \quad \sqrt{34} \)
Q = Norm[col2]
\( \displaystyle \quad \sqrt{5} \)
Angles are approximately
theta = N[ArcTan[3/5]]
0.54042
and
phi = N[ArcTan[1/2]]
0.463648
We check whether our calculations are correct.
Sqrt[34]*{Cos[theta], Sin[theta]}
{5., 3.}
Sqrt[5]*{Cos[phi], Sin[phi]}
{2., 1.}
Now we define auxiliary angles
psi = (phi + theta)/2
0.502034
and
chi = (theta - phi)/2
0.0383859
We check our calculations
{{Cos[psi], -Sin[psi]}, {Sin[psi], Cos[psi]}} . {{Sqrt[2] Cos[chi], 0}, {0, Sqrt[2] Sin[chi]}} . {{Cos[Pi/4], -Sin[Pi/4]}, {Sin[Pi/4], Cos[Pi/4]}} . {{P, 0}, {0, -Q}}
\( \displaystyle \quad \begin{pmatrix} 5. & 2. \\ 3. & 1. \end{pmatrix} \)
Now we see directly that our decomposition of a linear map into a product of rotations and general rescalings works. In particular, we can write A as A = A₁A₂A₃A₄ with \begin{align*} \mathbf{A}_1 &= \begin{bmatrix} \cos\psi & - \sin\psi \\ \sin\psi & \cos\psi \end{bmatrix} \approx \begin{bmatrix} 0.876606 & -0.481209 \\ 0.481209 & 0.876606 \end{bmatrix} , \\ \mathbf{A}_2 &= \begin{bmatrix} \sqrt{2}\cos \chi & 0 \\ 0 & \sqrt{2}\sin\chi \end{bmatrix} \approx \begin{bmatrix} 1.41317 & 0 \\ 0 & 0.0542726 \end{bmatrix} , \\ \mathbf{A}_3 &= \begin{bmatrix} \cos \left( \frac{\pi}{4} \right) & -\sin \left( \frac{\pi}{4} \right) \\ \sin \left( \frac{\pi}{4} \right) & \cos \left( \frac{\pi}{4} \right) \end{bmatrix} \approx \begin{bmatrix} 0.707107 & - 0.707107 \\ 0.707107 & 0.707107 \end{bmatrix} , \\ \mathbf{A}_4 &= \begin{bmatrix} \sqrt{34} & 0 \\ 0 & -\sqrt{5} \end{bmatrix} \approx \begin{bmatrix} 5.83095 & 0 \\ 0 & -2.23607 \end{bmatrix} . \end{align*} This decomposition cannot be reordered, as you are asked to verify in Exercise 2. We check with Mathematica the decomposition of the given matrix A into a product of four matrices.
A1 = {{Cos[psi], -Sin[psi]}, {Sin[psi], Cos[psi]}};
A2 = {{Sqrt[2]*Cos[chi], 0}, {0, Sqrt[2]*Sin[chi]}};
A3 = {{1/Sqrt[2], -1/Sqrt[2]}, {1/Sqrt[2], 1/Sqrt[2]}};
A4 = {{Sqrt[34], 0}, {0, -Sqrt[5]}};
A1 . A2 . A3 . A4
{{5., 2.}, {3., 1.}}
   ■
End of Example 2
   
  1. Express each of the following matrices as the product given in Eq.\eqref{Eq2D.5}.
    \[ {\bf (a) \ \ } \ \begin{bmatrix} -\sqrt{5} & 0 \\ 2 & 5 \end{bmatrix} , \qquad \quad {\bf (b) \ \ } \ \begin{bmatrix} - \frac{1}{\sqrt{2}} & -\frac{3}{\sqrt{2}} \\ \frac{3}{\sqrt{2}} & - \frac{3}{\sqrt{2}} \end{bmatrix} , \]
    \[ {\bf (c) \ \ } \ \begin{bmatrix} 5\sqrt{2} & - \frac{13}{\sqrt{2}} \\ 5\sqrt{2} & \frac{13}{\sqrt{2}} \end{bmatrix} , \qquad {\bf (d) \ \ } \ \begin{bmatrix} 0 & -\frac{\sqrt{3}}{6} \\ \frac{1}{2} & - \frac{1}{6} \end{bmatrix} , \]
  2. In Example 2, it was shown that the given matrix A can be decomposed into a product of four matrices, A = A₁A₂A₃A₄. Determine whether any of the following reorderings agree with the decomposition of matrix A.
    \[ {\bf (a) \ \ } \ \mathbf{A}_4 \mathbf{A}_3 \mathbf{A}_2 \mathbf{A}_1 , \qquad {\bf (b) \ \ } \ \mathbf{A}_2 \mathbf{A}_1 \mathbf{A}_2 \mathbf{A}_1 , \qquad {\bf (c) \ \ } \ \mathbf{A}_3 \mathbf{A}_2 \mathbf{A}_1 \mathbf{A}_4 . \]
  3. If A ∈ ℝ2×2 is a singular matrix (so detA = 0), can it still be decomposed into the product \eqref{Eq2D.5}?
  4. Decompose the 2 × 2 off-diagonal matrix \( \displaystyle \quad \begin{bmatrix} 0&b \\ c&0 \end{bmatrix} . \)
  5. Consider a linear transformation that maps a point (x, y) into (2x − y, 0). Express the corresponding matrix as the product \eqref{Eq2D.5}.
  6. Let
    \[ \mathbf{A} = \begin{bmatrix} -5 + 8\mathbf{j} & 2 - \mathbf{j} \\ 3 - 2 \mathbf{j} & -1 + 4 \mathbf{j} \end{bmatrix} = \begin{bmatrix} -5 & 2 \\ 3 & -1\end{bmatrix} + {\bf j} \begin{bmatrix} 8 & -1 \\ 3 & 4 \end{bmatrix} , \]
    where j is the imaginary unit of the complex plane ℂ, so j ² = −1. Decompose the real part and the imaginary part of matrix A according to formula \eqref{Eq2D.5}.