17.1 Second-Order Differential Equations

17.1.1 Introduction

The easiest second-order differential equations to solve are those which we can integrate directly, for example

\frac{d^{2}y}{dx^{2}}=\cos(x) (17.1)

When we integrate this once, we get that

\int\frac{d^{2}y}{dx^{2}}\,dx=\int\cos(x)\,dx (17.2)
\frac{dy}{dx}=\sin(x)+c (17.3)

and then integrating again, we get that

\int\frac{dy}{dx}\,dx=\int\left(\sin(x)+c\right)dx (17.4)
y(x)=-\cos(x)+cx+d (17.5)
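As a quick sanity check, we can verify Equation 17.5 symbolically - a small sketch using sympy (the library choice is illustrative):

```python
import sympy as sp

x, c, d = sp.symbols("x c d")
y = -sp.cos(x) + c*x + d  # the general solution from Equation 17.5

# Differentiating twice should give back cos(x), the right-hand side of Equation 17.1
residual = sp.simplify(sp.diff(y, x, 2) - sp.cos(x))
# residual == 0, whatever the values of c and d
```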

This is the general solution to this particular second-order differential equation. With first-order differential equations we have only one constant, and we can determine its value given a single point on the curve. [1: In more formal notation, for the differential equation \frac{dy}{dx}=f(x) with solution y(x)=F(x)+c, we can determine the value of c given a value of x, x_{0}, and the corresponding value of y, y_{0}, at that point.]

For a second-order differential equation, however, we have two constants, so we certainly can't solve the equation given only one point. We need either two points on the curve, or one point on the original curve and one point on its first derivative.
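For instance, given the hypothetical initial conditions y(0)=2 and y'(0)=1 (numbers chosen purely for illustration), we can pin down both constants:

```python
import sympy as sp

x, c, d = sp.symbols("x c d")
y = -sp.cos(x) + c*x + d  # general solution of y'' = cos(x)

# Hypothetical initial data: y(0) = 2 and y'(0) = 1
constants = sp.solve(
    [sp.Eq(y.subs(x, 0), 2), sp.Eq(sp.diff(y, x).subs(x, 0), 1)],
    [c, d],
)
# constants == {c: 1, d: 3}, so y(x) = -cos(x) + x + 3
```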

17.1.2 Homogeneous second-order ODEs

These are a bit icky, because second-order ODEs are not solvable in general. Fortunately, a lot of them are.

Homogeneous (i.e. every term involves y or one of its derivatives, so there's no term in x alone on the right-hand side), linear, second-order differential equations with constant coefficients can be solved without too much difficulty. We can reduce an equation of the form

a\frac{d^{2}y}{dx^{2}}+b\frac{dy}{dx}+cy=0 (17.6)

to a quadratic by setting y=e^{\lambda x}\implies\frac{dy}{dx}=\lambda e^{\lambda x}\implies\frac{d^{2}y}{dx^{2}}=\lambda^{2}e^{\lambda x}.

From here,

a\lambda^{2}e^{\lambda x}+b\lambda e^{\lambda x}+ce^{\lambda x}=0
a\lambda^{2}+b\lambda+c=0\text{, which is fine as }e^{\lambda x}>0
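To see the substitution in action, here's a sketch (using sympy, with the illustrative equation y'' - 3y' + 2y = 0) that solves the auxiliary quadratic and checks that each root gives a solution:

```python
import sympy as sp

x, lam = sp.symbols("x lambda")

# Illustrative equation: y'' - 3y' + 2y = 0, i.e. a = 1, b = -3, c = 2
roots = sp.solve(lam**2 - 3*lam + 2, lam)  # the auxiliary quadratic

# The roots are 1 and 2, so e^x and e^(2x) should both solve the ODE
for r in roots:
    y = sp.exp(r*x)
    assert sp.simplify(sp.diff(y, x, 2) - 3*sp.diff(y, x) + 2*y) == 0
```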

Here there are a number of possibilities for the value of the discriminant. [2: If you've no idea what this is, review the earlier section on quadratics.]

  • b^{2}-4ac>0, which is the "straightforward" case

  • b^{2}-4ac=0, in which case there's only one value of \lambda

  • b^{2}-4ac<0, in which case we can use complex numbers and trigonometry.
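The three cases can be told apart mechanically - a tiny helper (the function name is made up for illustration):

```python
def classify(a, b, c):
    """Classify the auxiliary quadratic a*lam**2 + b*lam + c by its discriminant."""
    disc = b*b - 4*a*c
    if disc > 0:
        return "two distinct real roots"
    if disc == 0:
        return "one repeated root"
    return "complex conjugate roots"

# For example, lam^2 - 3*lam + 2 has discriminant 9 - 8 = 1 > 0
print(classify(1, -3, 2))  # two distinct real roots
```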

In the case where \Delta>0 [3: note that \Delta means "the discriminant"] we have two possible solutions to the differential equation,

e^{\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}x} (17.7)

In the case where \Delta=0 we have only

e^{-\frac{b}{2a}x}
And in the case where \Delta<0 we have the same case as in Equation 17.7, except that there's a complex part to the root. [4: And, as Euler might point out, e^{i\theta}=\cos(\theta)+i\sin(\theta).]
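A sketch of the complex case, using the illustrative equation y'' + y = 0 (whose auxiliary quadratic is \lambda^{2}+1=0) and sympy to apply Euler's formula:

```python
import sympy as sp

x = sp.symbols("x", real=True)
t = sp.symbols("t")

# Auxiliary quadratic for y'' + y = 0: the roots are purely imaginary
roots = sp.solve(t**2 + 1, t)  # ±i

# Euler's formula rewrites the complex exponential in terms of cos and sin
expanded = sp.exp(sp.I*x).rewrite(sp.cos)
# expanded == cos(x) + I*sin(x)
```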

17.1.3 Why two linearly independent solutions?

[5: If two vectors (which we can call v_{1} and v_{2}) are linearly independent, then the only values of \alpha_{1} and \alpha_{2} which solve the equation \alpha_{1}v_{1}+\alpha_{2}v_{2}=0 are \alpha_{1}=\alpha_{2}=0. If there were a different combination, then we could write v_{1}=-\frac{\alpha_{2}}{\alpha_{1}}v_{2}, and thus they wouldn't be linearly independent, as one of the vectors is a multiple of the other. This can be generalised to n vectors.]

Let's suppose we have a differential equation involving y(x) and its first and second derivatives. We may want to solve this subject to the initial conditions

y(x_{0})=y_{0}\text{ and }\left.\frac{dy}{dx}\right|_{x=x_{0}}=\dot{y}_{0} (17.8)

In order to be able to solve for every possible initial condition, we need a linear combination of two "linearly independent" solutions, with which we can satisfy any possible initial condition. This means that if we have two solutions y_{1}(x) and y_{2}(x), and neither can be written as a multiple of the other, then the "general solution" (i.e. the one with the constants in it, like the +c we get when integrating first-order differential equations) is of the form

y(x)=\alpha y_{1}(x)+\beta y_{2}(x) (17.9)

The good news is that for homogeneous second-order differential equations we do have two linearly independent solutions! [6: Except in the case where \Delta=0, but there's a way to get around that (more on that later).] For example, if the roots of the auxiliary equation are \alpha\pm\beta, then the two solutions are

[7: Which are linearly independent, as one of them cannot be written as a constant multiple of the other (provided \beta\neq 0).]
y(x)=e^{(\alpha+\beta)x}\text{ and }y(x)=e^{(\alpha-\beta)x} (17.10)

Proof that we need two linearly independent solutions, and that two linearly independent solutions are sufficient to solve any initial condition (note: no A Level exam board examines this). The reason we need two linearly independent solutions is that if we have two sets of linearly independent initial conditions, we certainly can't write both of them as linear combinations of a single solution. [8: This is a lot like how we can write any vector in 2D space in terms of two vectors, so long as those vectors are not parallel (the same holds for 3D space, except with three vectors).] For example, suppose we have two possible initial conditions, one where

y(x_{0})=1\text{ and }\left.\frac{dy}{dx}\right|_{x=x_{0}}=0 (17.11)

and another where

y(x_{0})=0\text{ and }\left.\frac{dy}{dx}\right|_{x=x_{0}}=1 (17.12)

These initial conditions are linearly independent, and we can't satisfy both of them using multiples of a single solution. Thus, there is at least one case where we need at least two linearly independent solutions.

The next thing to prove is that if we have two solutions, y_{1}(x) and y_{2}(x), which are linearly independent, then for suitable values of \alpha and \beta we can satisfy any initial condition using a solution of the form

y(x)=\alpha y_{1}(x)+\beta y_{2}(x) (17.13)

Does this even solve the differential equation, though? Yes! We can prove this with a bunch of algebra (there's an easier way to do this by introducing some new notation, but that's for later [9: Note: I have yet to write about this new notation]).

y=\alpha y_{1}(x)+\beta y_{2}(x)\implies\frac{dy}{dx}=\alpha\frac{dy_{1}}{dx}+\beta\frac{dy_{2}}{dx} (17.14)
\implies\frac{d^{2}y}{dx^{2}}=\alpha\frac{d^{2}y_{1}}{dx^{2}}+\beta\frac{d^{2}y_{2}}{dx^{2}} (17.15)

and thus that

[10: Note that both \alpha\left[a\frac{d^{2}y_{1}}{dx^{2}}+b\frac{dy_{1}}{dx}+cy_{1}\right] and \beta\left[a\frac{d^{2}y_{2}}{dx^{2}}+b\frac{dy_{2}}{dx}+cy_{2}\right] are zero, because we know that y_{1}(x) and y_{2}(x) solve the equation a\frac{d^{2}y}{dx^{2}}+b\frac{dy}{dx}+cy=0.]


a\left[\alpha\frac{d^{2}y_{1}}{dx^{2}}+\beta\frac{d^{2}y_{2}}{dx^{2}}\right] (17.16)
\quad+b\left[\alpha\frac{dy_{1}}{dx}+\beta\frac{dy_{2}}{dx}\right] (17.17)
\quad+c\left[\alpha y_{1}(x)+\beta y_{2}(x)\right]=\alpha\left[a\frac{d^{2}y_{1}}{dx^{2}}+b\frac{dy_{1}}{dx}+cy_{1}\right]+\beta\left[a\frac{d^{2}y_{2}}{dx^{2}}+b\frac{dy_{2}}{dx}+cy_{2}\right] (17.18)
=0 (17.19)

And thus y=\alpha y_{1}(x)+\beta y_{2}(x) is a solution to Equation 17.6.
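We can also check this linearity argument symbolically for a concrete case - a sketch with sympy, using the illustrative equation y'' - 3y' + 2y = 0 and its solutions e^x and e^(2x):

```python
import sympy as sp

x, alpha, beta = sp.symbols("x alpha beta")

# Illustrative independent solutions of y'' - 3y' + 2y = 0
y1, y2 = sp.exp(x), sp.exp(2*x)
y = alpha*y1 + beta*y2

residual = sp.simplify(sp.diff(y, x, 2) - 3*sp.diff(y, x) + 2*y)
# residual == 0 regardless of alpha and beta, so every linear combination is a solution
```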

If that was messy, it gets worse. [11: I've had nightmares about drowning in a differential equation algebra soup. Literal soup made of algebra - it was a very strange dream.]

From Equation 17.13 we know that the derivative of our solution will be

\frac{dy}{dx}=\alpha\frac{dy_{1}}{dx}+\beta\frac{dy_{2}}{dx} (17.20)

As we want to show that this can satisfy any set of initial conditions, where y(x_{0})=y_{0} and \dot{y}(x_{0})=\dot{y}_{0}, we can start by writing a set of simultaneous equations:

\begin{cases}\alpha y_{1}(x_{0})+\beta y_{2}(x_{0})=y_{0}\\ \alpha\dot{y}_{1}(x_{0})+\beta\dot{y}_{2}(x_{0})=\dot{y}_{0}\end{cases} (17.21)

We can write these in matrix form to obtain

\begin{pmatrix}y_{1}(x_{0})&y_{2}(x_{0})\\ \dot{y}_{1}(x_{0})&\dot{y}_{2}(x_{0})\end{pmatrix}\begin{pmatrix}\alpha\\ \beta\end{pmatrix}=\begin{pmatrix}y_{0}\\ \dot{y}_{0}\end{pmatrix} (17.22)

These equations can be solved whenever the determinant of the 2x2 matrix above is not equal to zero, i.e. whenever

y_{1}(x_{0})\dot{y}_{2}(x_{0})-y_{2}(x_{0})\dot{y}_{1}(x_{0})\neq 0 (17.23)
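Numerically, checking the determinant condition and solving Equation 17.22 is a one-liner each - a sketch with numpy, using the illustrative solutions y1 = e^x, y2 = e^(2x) and the initial conditions y(0) = 1, y'(0) = 0:

```python
import numpy as np

x0 = 0.0
# The matrix from Equation 17.22: rows are [y1(x0), y2(x0)] and [y1'(x0), y2'(x0)]
M = np.array([[np.exp(x0),       np.exp(2*x0)],
              [np.exp(x0), 2.0 * np.exp(2*x0)]])
rhs = np.array([1.0, 0.0])  # y(0) = 1, y'(0) = 0

# Equation 17.23: a nonzero determinant guarantees a unique (alpha, beta)
assert abs(np.linalg.det(M)) > 1e-12

alpha, beta = np.linalg.solve(M, rhs)
# alpha = 2, beta = -1, so y(x) = 2e^x - e^(2x)
```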

When this holds, the two equations have a unique solution. The easiest way to go from here is to prove this by contradiction (as we have lots of techniques for dealing with equalities, =, and not very many for dealing with inequalities involving \neq). We can proceed by assuming that two linearly independent solutions are not sufficient to determine the general solution of some second-order differential equation, and write that

y_{1}(x_{0})\dot{y}_{2}(x_{0})-y_{2}(x_{0})\dot{y}_{1}(x_{0})=0 (17.24)

We can now manipulate this a little

y_{1}(x_{0})\dot{y}_{2}(x_{0})=y_{2}(x_{0})\dot{y}_{1}(x_{0}) (17.25)

Dividing through by y_{2}(x_{0})\dot{y}_{2}(x_{0}) (assuming neither factor is zero) leads to the formula

\frac{y_{1}(x_{0})}{y_{2}(x_{0})}=\frac{\dot{y}_{1}(x_{0})}{\dot{y}_{2}(x_{0})} (17.26)

Here, though, it looks like we have a contradiction. Why? Let's set

c=\frac{y_{1}(x_{0})}{y_{2}(x_{0})} (17.27)

and

d=\frac{\dot{y}_{1}(x_{0})}{\dot{y}_{2}(x_{0})} (17.28)

Then we can write that

y_{1}(x_{0})=cy_{2}(x_{0}) (17.29)

and that

\dot{y}_{1}(x_{0})=d\dot{y}_{2}(x_{0}) (17.30)

But we specified earlier that y_{1}(x) and y_{2}(x) are linearly independent! By Equation 17.26 we have c=d, so Equations 17.29 and 17.30 say that y_{1} and its derivative are the same multiple of y_{2} and its derivative at x_{0} - that is, they're not linearly independent. Thus, assuming that two linearly independent solutions can't satisfy every set of initial conditions for a second-order differential equation leads to a contradiction. Hence, two linearly independent solutions are sufficient.