Return to computing page for the first course APMA0330
Return to computing page for the second course APMA0340
Return to Mathematica tutorial for the first course APMA0330
Return to Mathematica tutorial for the second course APMA0340
Return to Mathematica tutorial for the fourth course APMA0360
Return to the main page for the first course APMA0330
Return to the main page for the second course APMA0340
Return to Part IV of the course APMA0340
Introduction to Linear Algebra with Mathematica
Glossary
Preface
In mathematics and physics, multiple-scale analysis (also called the method of multiple scales) comprises techniques used to construct uniformly valid approximations to the solutions of perturbation problems, for both small and large values of the independent variables. This is done by introducing fast-scale and slow-scale variables for an independent variable, and subsequently treating these variables, fast and slow, as if they were independent. In the subsequent solution of the perturbation problem, the additional freedom introduced by the new independent variables is used to remove unwanted secular terms. Removing them puts constraints on the approximate solution, which are called solvability conditions.
Multiple Scale Perturbation
Multi-scale perturbation theory, also known as the method of multiple scales, is a technique for constructing uniformly valid approximate solutions to perturbation problems by introducing multiple independent time or spatial scales. It extends regular perturbation methods by defining new, slower-scale variables (e.g., 𝜏 = εt, where ε is a small parameter) that are treated as independent from the original fast-scale variables. This approach allows the solution to be expanded on these different scales, with corrections to the base solution becoming functions of both fast and slow variables. By using solvability conditions, it prevents the growth of unwanted secular terms that plague standard perturbation methods, enabling the analysis of slow variations in the amplitude and frequency of solutions over time.
We outline the main steps in implementing the multiple scale perturbation method.
- Introduce New Scales: The core idea is to recognize that a system's behavior might involve distinct processes occurring at different rates (fast and slow). Instead of a single independent variable, the method introduces new, scaled versions of the variable. For example, if you have a fast time t and a small parameter ε, you might define a slow time τ = εt.
- Expand the Solution: The solution is then expanded in terms of both the original (fast) and new (slow) independent variables. For instance, a solution might be written as u(t, ε) = u₀(t₀, t₁, …) + εu₁(t₀, t₁, …) + ⋯, where t₀ is the fast scale and t₁ is the slow scale.
- Construct a Hierarchy of Equations: Substituting this expansion into the original differential equation results in a hierarchy of equations for each order of the small parameter ε.
- Apply Solvability Conditions: The key to this method is the use of solvability conditions. These conditions ensure that the solutions at each order do not contain secular terms (terms that grow unboundedly with time), which would render the approximation invalid.
- Determine Evolution of Constants: By applying solvability conditions, constraints are placed on the approximate solution, often leading to the discovery that the constants in the leading-order solution can vary with the slow time scale. The evolution of these constants is then determined by the next-order solvability conditions. 
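The steps above can be sketched on a standard textbook example (not taken from the text of this page): the weakly damped oscillator y'' + εy' + y = 0 with y(0) = 1, y'(0) = 0. Regular perturbation produces the secular term (ε/2)t cos t, while the leading-order multiple-scales solution carries the damping in the slow amplitude A(τ) = e^(−τ/2), τ = εt. The following Python/NumPy sketch compares both approximations against the known exact solution over a long time interval t = O(1/ε); it is an illustration under these assumptions, not part of the tutorial's own code.

```python
import numpy as np

# Weakly damped oscillator y'' + eps*y' + y = 0, y(0)=1, y'(0)=0
# (a standard illustration of multiple scales, assumed here for concreteness).
eps = 0.05
t = np.linspace(0.0, 2.0 / eps, 4001)      # long times, t ~ O(1/eps)

# Exact solution, for reference.
w = np.sqrt(1.0 - eps**2 / 4.0)
y_exact = np.exp(-eps * t / 2) * (np.cos(w * t) + (eps / (2 * w)) * np.sin(w * t))

# Regular perturbation to O(eps): y ~ cos t + (eps/2)(sin t - t cos t).
# The secular term t*cos(t) grows without bound and ruins the approximation.
y_naive = np.cos(t) + (eps / 2) * (np.sin(t) - t * np.cos(t))

# Multiple scales, leading order: slow time tau = eps*t gives the
# amplitude A(tau) = e^{-tau/2}, hence y ~ e^{-eps*t/2} cos t.
y_mms = np.exp(-eps * t / 2) * np.cos(t)

err_naive = np.max(np.abs(y_naive - y_exact))
err_mms = np.max(np.abs(y_mms - y_exact))
print(err_naive, err_mms)   # the multiple-scales error stays uniformly small
```

The comparison shows why the solvability condition matters: suppressing the secular term keeps the error uniformly small on the whole interval t ≲ 1/ε, whereas the naive expansion degrades as εt approaches 1.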
The Poincaré–Lindstedt method provides a way to construct asymptotic approximations of periodic solutions, but it cannot be used to obtain solutions that evolve aperiodically on a slow time-scale. The method of multiple scales (MMS) is a more general approach in which we introduce one or more new ‘slow’ time variables for each time scale of interest in the problem. It does not require that the solution depend periodically on the ‘slow’ time variables.
We suppose that ε ≪ 1 and \[ \delta = \varepsilon \delta_1 , \] where δ₁ = O(1) as ε → 0. We consider the case k = 2, which corresponds to the strongest instability, when y(t, ε) satisfies \[ y'' + \left( 1 + \varepsilon \delta_1 + \varepsilon\,\cos 2t \right) y = 0 . \] The idea of the multiple scale perturbation method is to describe the evolution of the solution over long timescales of the order 1/ε by introducing an additional ‘slow’ time variable \[ \tau = \varepsilon t . \] We then look for a solution of the form \[ y(t, \varepsilon ) = \hat{y} (t, \varepsilon t , \varepsilon ) , \] where ŷ(t, τ, ε) is a function of two time variables (t, τ) that gives y when τ is evaluated at εt.
Applying the chain rule, we find that \begin{align*} y' &= \hat{y}_t + \varepsilon \hat{y}_{\tau} , \\ y'' &= \hat{y}_{tt} + 2 \varepsilon \hat{y}_{t\tau} + \varepsilon^2 \hat{y}_{\tau\tau} , \end{align*} where the subscripts denote partial derivatives. Using this result in the original equation, we find that ŷ(t, τ, ε) satisfies \[ \hat{y}_{tt} + 2 \varepsilon\,\hat{y}_{t\tau} + \varepsilon^2 \hat{y}_{\tau\tau} + \left( 1 + \varepsilon \delta_1 + \varepsilon\, \cos 2t \right) \hat{y} = 0 . \] In fact, ŷ(t, τ, ε) only has to satisfy this equation when τ = εt, but we will require that it satisfies it for all (t, τ). This requirement implies that y satisfies the original ODE. We have therefore replaced an ODE for y by a PDE for ŷ. At first sight, this may not appear to be an improvement, but as we shall see we can use the extra flexibility provided by the dependence of ŷ on two variables to obtain an asymptotic solution for y that is valid for long times of the order 1/ε. Specifically, we will require that ŷ(t, τ, ε) is a periodic function of the ‘fast’ variable t. Moreover, we only need to solve ODEs in t to construct this asymptotic solution.
We expand \[ \hat{y} (t, \tau , \varepsilon ) = y_0 (t, \tau ) + \varepsilon y_1 (t, \tau ) + O\left( \varepsilon^2 \right) . \] We use this expansion in the equation for ŷ, and equate coefficients of ε⁰ and ε to zero. We find that \begin{align*} y_{0tt} + y_0 &= 0 , \\ y_{1tt} + y_1 + 2 y_{0t\tau} + \left( \delta_1 + \cos 2t \right) y_0 &= 0 . \end{align*} The solution of the first equation is \[ y_0 (t, \tau ) = A(\tau )\,e^{{\bf j}t} + c.c. \] Here, it is convenient to use complex notation. The amplitude A(τ) is an arbitrary complex valued function of the ‘slow’ time, and c.c. denotes the complex conjugate of the preceding terms.
Using this solution in the second equation, and writing the cosine in terms of exponentials, we find that y₁ satisfies \begin{align*} y_{1tt} + y_1 &= -2\mathbf{j} A_{\tau} e^{\mathbf{j}t} - A\left( \delta_1 + \cos 2t \right) e^{\mathbf{j}t} + c.c. \\ &= -\frac{1}{2}\,A\,e^{3\mathbf{j}t} - \left( 2\mathbf{j} A_{\tau} + \delta_1 A + \frac{1}{2}\,A^{\ast} \right) e^{\mathbf{j}t} + c.c. \end{align*} Here, the asterisk denotes a complex conjugate. The solution for y₁ is periodic in t, and does not contain secular terms in t, if and only if the coefficient of the resonant term \( e^{\mathbf{j}t} \) is zero, which implies that A(τ) satisfies the ODE \[ 2\mathbf{j} A_{\tau} + \delta_1 A + \frac{1}{2}\,A^{\ast} = 0 . \] Writing A = u + jv in terms of its real and imaginary parts, we find that \[ \begin{pmatrix} u \\ v \end{pmatrix}_{\tau} = \begin{bmatrix} 0 & 1/4 - \delta_1 /2 \\ 1/4 + \delta_1 /2 & 0 \end{bmatrix} \begin{pmatrix} u \\ v \end{pmatrix} . \] The solutions of this equation are proportional to \( e^{\pm\lambda\tau} \), where \[ \lambda = \frac{1}{2} \sqrt{\frac{1}{4} - \delta_1^2} . \] Thus, in the limit ε → 0 the equilibrium y = 0 is unstable when |δ₁| < ½, or \[ |\delta | < \frac{1}{2}\,|\varepsilon | . \] ■
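The instability condition |δ₁| < ½ can be checked independently of the asymptotics by Floquet theory: for a Hill equation with π-periodic coefficient, the trivial solution is unstable exactly when the trace of the monodromy matrix exceeds 2 in absolute value. The following plain-Python sketch (an illustration with assumed parameter values, not code from this tutorial) integrates two independent solutions of y'' + (1 + εδ₁ + ε cos 2t) y = 0 over one period with classical RK4 and compares a point inside the instability tongue (δ₁ = 0) with one outside it (δ₁ = 1).

```python
import math

def monodromy_trace(eps, delta1, steps=4000):
    """Trace of the monodromy matrix of y'' + (1 + eps*delta1 + eps*cos 2t) y = 0
    over one period pi of the coefficient, computed with classical RK4."""
    def f(t, y, v):
        return v, -(1.0 + eps * delta1 + eps * math.cos(2.0 * t)) * y
    h = math.pi / steps
    cols = []
    for y, v in ((1.0, 0.0), (0.0, 1.0)):    # two independent solutions
        t = 0.0
        for _ in range(steps):
            k1y, k1v = f(t, y, v)
            k2y, k2v = f(t + h/2, y + h/2*k1y, v + h/2*k1v)
            k3y, k3v = f(t + h/2, y + h/2*k2y, v + h/2*k2v)
            k4y, k4v = f(t + h, y + h*k3y, v + h*k3v)
            y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
            v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
            t += h
        cols.append((y, v))
    return cols[0][0] + cols[1][1]           # tr M = y1(pi) + y2'(pi)

eps = 0.4                                    # assumed illustrative value
tr_unstable = monodromy_trace(eps, delta1=0.0)  # inside |delta1| < 1/2
tr_stable = monodromy_trace(eps, delta1=1.0)    # outside the tongue
print(abs(tr_unstable) > 2, abs(tr_stable) < 2)
```

With these parameters, the trace test agrees with the multiple-scales prediction: |tr M| > 2 at the center of the tongue and |tr M| < 2 outside it.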
We have to determine both the period T(ε) and the amplitude a(ε) of the limit cycle. Since the ODE is autonomous, we can make a time shift so that y′(0) = 0. Thus, we want to solve the ODE subject to the conditions that \begin{align*} y(t+T, \varepsilon ) &= y(t, \varepsilon ), \\ y(0, \varepsilon )&= a(\varepsilon ) , \\ y'(0, \varepsilon )&= 0 . \end{align*} ■
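The passage above does not name the ODE, so as a hypothetical concrete case take the van der Pol oscillator y'' − ε(1 − y²)y' + y = 0, whose limit cycle is known to have amplitude a(ε) = 2 + O(ε²) and period T(ε) = 2π + O(ε²). The Python sketch below (an assumed example, not the tutorial's own code) integrates the equation from an initial point off the cycle and measures the settled amplitude.

```python
import math

# Van der Pol oscillator y'' - eps*(1 - y^2)*y' + y = 0 (assumed example):
# its limit cycle has amplitude a(eps) = 2 + O(eps^2).
def van_der_pol_amplitude(eps, t_end=300.0, steps=200000):
    def f(t, y, v):
        return v, eps * (1.0 - y * y) * v - y
    h = t_end / steps
    y, v, t = 0.5, 0.0, 0.0               # start well off the limit cycle
    amp = 0.0
    for _ in range(steps):                # classical RK4 time stepping
        k1y, k1v = f(t, y, v)
        k2y, k2v = f(t + h/2, y + h/2*k1y, v + h/2*k1v)
        k3y, k3v = f(t + h/2, y + h/2*k2y, v + h/2*k2v)
        k4y, k4v = f(t + h, y + h*k3y, v + h*k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
        if t > t_end - 2 * math.pi:       # record peaks over one final period
            amp = max(amp, abs(y))
    return amp

amp = van_der_pol_amplitude(0.1)
print(amp)   # close to the predicted amplitude 2
```

Matching the measured amplitude and period against a(ε) and T(ε) from the expansion is exactly the check that the periodicity and initial conditions above set up.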
Return to Mathematica page
Return to the main page (APMA0340)
Return to the Part 1 Matrix Algebra
Return to the Part 2 Linear Systems of Ordinary Differential Equations
Return to the Part 3 Non-linear Systems of Ordinary Differential Equations
Return to the Part 4 Numerical Methods
Return to the Part 5 Fourier Series
Return to the Part 6 Partial Differential Equations
Return to the Part 7 Special Functions