# 17.1 Second-Order Differential Equations

## 17.1.1 Introduction

The easiest second-order differential equations to solve are those which we can integrate directly, for example

$\frac{d^{2}y}{dx^{2}}=\cos(x)$ (17.1)

When we integrate this once, we get that

$\displaystyle\int\frac{d^{2}y}{dx^{2}}\,dx=\int\cos(x)\,dx$ (17.2)

$\displaystyle\frac{dy}{dx}=\sin(x)+c$ (17.3)

and then integrating again, we get that

$\displaystyle\int\frac{dy}{dx}\,dx=\int\left(\sin(x)+c\right)dx$ (17.4)

$\displaystyle y(x)=-\cos(x)+cx+d$ (17.5)

which is the general solution to this particular second-order differential equation. With first-order differential equations we have only one constant, and we can determine its value given a single point on the curve. (In more formal notation: for the differential equation $\frac{dy}{dx}=f(x)$ with solution $y(x)=F(x)+c$, we can determine the value of $c$ given a value of $x$, $x_{0}$, and the corresponding value of $y$, $y_{0}$, at that point.)

For a second-order differential equation, however, we have two constants, so we definitely can't solve the equation given only one point. We need either two points on the curve, or one point on the original curve and one point on its first derivative.
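As a quick numeric sanity check of the two-constant idea (the initial-condition values below are made up for illustration), we can fit $c$ and $d$ in the general solution $y(x)=-\cos(x)+cx+d$ from one point on the curve and one point on its derivative, then verify $\frac{d^{2}y}{dx^{2}}=\cos(x)$ with a finite difference:

```python
import math

# General solution to y'' = cos(x): y(x) = -cos(x) + c*x + d.
# Given initial conditions y(x0) = y0 and y'(x0) = v0 (values below
# are illustrative assumptions), solve for the two constants:
x0, y0, v0 = 0.0, 2.0, 1.0

# y'(x) = sin(x) + c  =>  c = v0 - sin(x0)
c = v0 - math.sin(x0)
# y(x0) = -cos(x0) + c*x0 + d  =>  d = y0 + cos(x0) - c*x0
d = y0 + math.cos(x0) - c * x0

def y(x):
    return -math.cos(x) + c * x + d

# Check the initial condition and, via a central difference,
# that y''(x) really equals cos(x) at a sample point.
h = 1e-5
x = 0.7
second_deriv = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
print(abs(second_deriv - math.cos(x)) < 1e-4)  # True
print(abs(y(x0) - y0) < 1e-12)                 # True
```

With $x_{0}=0$, $y_{0}=2$, $v_{0}=1$ this gives $c=1$ and $d=3$, and the finite-difference check confirms the curve really satisfies the equation.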

## 17.1.2 Homogeneous second-order ODEs

These are a bit icky, because they're not solvable in general. Fortunately, a lot of them are solvable.

Homogeneous (i.e. every term involves $y$ or one of its derivatives, so the right-hand side is zero) second-order, linear differential equations with constant coefficients can be solved without too much difficulty. We can reduce an equation of the form

$a\frac{d^{2}y}{dx^{2}}+b\frac{dy}{dx}+cy=0$ (17.6)

to a quadratic by setting $y=e^{\lambda x}\implies\frac{dy}{dx}=\lambda e^{\lambda x}\implies\frac{d^{2}y}{dx^{2}}=\lambda^{2}e^{\lambda x}$.

From here,

$\displaystyle a\lambda^{2}e^{\lambda x}+b\lambda e^{\lambda x}+ce^{\lambda x}=0$

$\displaystyle a\lambda^{2}+b\lambda+c=0\quad\text{(dividing by }e^{\lambda x}\text{, which is fine as }e^{\lambda x}>0\text{)}$

$\displaystyle\lambda=\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}$

Here there are a number of possibilities for the value of the discriminant (if you've no idea what this is, review the earlier section on quadratics).

• $b^{2}-4ac>0$, which is the "straightforward" case

• $b^{2}-4ac=0$, in which case there’s only one value of $\lambda$

• $b^{2}-4ac<0$, in which case we can use complex numbers and trigonometry.
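The three discriminant cases can be seen in code; a small sketch (the example equations are assumptions for illustration) using `cmath`, which handles all three cases uniformly:

```python
import cmath

def auxiliary_roots(a, b, c):
    """Roots of a*l^2 + b*l + c = 0; cmath.sqrt handles all three
    discriminant cases (positive, zero, negative) uniformly."""
    disc = b * b - 4 * a * c
    sqrt_disc = cmath.sqrt(disc)
    return (-b + sqrt_disc) / (2 * a), (-b - sqrt_disc) / (2 * a)

# Delta > 0: y'' - 3y' + 2y = 0 has roots 2 and 1
print(auxiliary_roots(1, -3, 2))   # ((2+0j), (1+0j))
# Delta = 0: y'' + 2y' + y = 0 has the repeated root -1
print(auxiliary_roots(1, 2, 1))    # ((-1+0j), (-1+0j))
# Delta < 0: y'' + y = 0 has roots +/- i
print(auxiliary_roots(1, 0, 1))    # (1j, -1j)
```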

In the case where $\Delta>0$ (note that $\Delta$ means "the discriminant") we have two possible solutions to the differential equation,

$y(x)=e^{\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}x}$ (17.7)

In the case where $\Delta=0$ we have only

$y(x)=e^{-\frac{b}{2a}x}$

And in the case where $\Delta<0$ we have the same form as in Equation 17.7, except that there's a complex part to the root (and, as Euler might point out, $e^{i\theta}=\cos(\theta)+i\sin(\theta)$).
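Euler's formula is exactly what lets us turn a complex root into a real solution; a numeric sketch (the example equation $y''+2y'+5y=0$, with roots $-1\pm 2i$, is an assumption for illustration):

```python
import cmath
import math

# For y'' + 2y' + 5y = 0 the auxiliary equation l^2 + 2l + 5 = 0
# has complex roots l = -1 +/- 2i.  By Euler's formula,
# e^{(-1+2i)x} = e^{-x}(cos(2x) + i sin(2x)), so the real part
# e^{-x}cos(2x) is itself a (real) solution.
lam = complex(-1, 2)

def y(x):
    return cmath.exp(lam * x).real   # = e^{-x} * cos(2x)

x, h = 0.4, 1e-5
# Euler's formula check: real part matches e^{-x}cos(2x)
assert abs(y(x) - math.exp(-x) * math.cos(2 * x)) < 1e-12
# Finite-difference check that y'' + 2y' + 5y = 0
y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
y1 = (y(x + h) - y(x - h)) / (2 * h)
print(abs(y2 + 2 * y1 + 5 * y(x)) < 1e-4)  # True
```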

## 17.1.3 Why two linearly independent solutions?

As a reminder: if two vectors (which we can call $v_{1}$ and $v_{2}$) are linearly independent, then the only values of $\alpha_{1}$ and $\alpha_{2}$ which solve the equation $\alpha_{1}v_{1}+\alpha_{2}v_{2}=0$ are $\alpha_{1}=\alpha_{2}=0$. If there were a different combination, then we could write $v_{1}=-\frac{\alpha_{2}}{\alpha_{1}}v_{2}$, and thus they wouldn't be linearly independent, as one vector would be a multiple of the other. This can be generalised to $n$ vectors.

Let’s suppose we have a differential equation involving $y(x)$ and its first and second derivative. We may want to solve this subject to the initial conditions

$y(x_{0})=y_{0}\text{ and }\left.\frac{dy}{dx}\right|_{x=x_{0}}=\dot{y}_{0}$ (17.8)

In order to be able to solve for every possible initial condition, we need a linear combination of two "linearly independent" solutions, with which we can satisfy any possible initial condition. This means that if we have two solutions $y_{1}(x)$ and $y_{2}(x)$, and neither can be written as a multiple of the other, then the "general solution" (i.e. the one with the constants in it, like the $+c$ we get when integrating first-order differential equations) is of the form

$y(x)=\alpha y_{1}(x)+\beta y_{2}(x)$ (17.9)

The good news is that for homogeneous second-order differential equations we do have two linearly independent solutions! (Except in the case where $\Delta=0$, but there's a way to get around that; more on that later.) For example, if the roots of the auxiliary equation are $\alpha\pm\beta$, then the two solutions are

$y_{1}(x)=e^{(\alpha+\beta)x}\text{ and }y_{2}(x)=e^{(\alpha-\beta)x}$ (17.10)

which are linearly independent, as neither can be written as a constant multiple of the other.
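We can check numerically that any linear combination of two such solutions still solves the equation; a sketch (the example equation $y''-3y'+2y=0$, with roots $1$ and $2$, and the values of the constants are assumptions for illustration):

```python
import math

# For y'' - 3y' + 2y = 0 the auxiliary roots are 1 and 2, so the
# general solution is y = alpha*e^{x} + beta*e^{2x}.  Any choice of
# alpha and beta (illustrative values below) should still solve it.
alpha, beta = 1.5, -0.25

def y(x):
    return alpha * math.exp(x) + beta * math.exp(2 * x)

# Finite-difference check that y'' - 3y' + 2y = 0 at a sample point.
x, h = 0.3, 1e-5
y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
y1 = (y(x + h) - y(x - h)) / (2 * h)
print(abs(y2 - 3 * y1 + 2 * y(x)) < 1e-4)  # True
```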

Proof that we need two linearly independent solutions, and that two linearly independent solutions are sufficient to satisfy any initial condition (note: no A Level exam board examines this). The reason we need two linearly independent solutions is that if we have two sets of linearly independent initial conditions, we certainly can't write both of them as linear combinations of a single solution. (This is a lot like how we can write any vector in a 2D space in terms of two vectors, so long as those vectors are not parallel; the same holds for 3D space, except with three vectors.) For example, suppose we have two possible initial conditions, one where

$y(x_{0})=1\text{ and }\left.\frac{dy}{dx}\right|_{x=x_{0}}=0$ (17.11)

and another where

$y(x_{0})=0\text{ and }\left.\frac{dy}{dx}\right|_{x=x_{0}}=1$ (17.12)

These initial conditions are linearly independent, and we can't satisfy them both using multiples of a single solution. Thus, there is at least one case where we need at least two linearly independent solutions.

The next thing to prove is that if we have two solutions ($y_{1}(x)$ and $y_{2}(x)$) which are linearly independent, then for suitable values of $\alpha$ and $\beta$ we can satisfy any initial condition using a solution of the form

$y(x)=\alpha y_{1}(x)+\beta y_{2}(x)$ (17.13)

Does this even solve the differential equation, though? Yes! We can prove this with a bunch of algebra (there's an easier way to do it by introducing some new notation, but that's for later):

$\displaystyle y=\alpha y_{1}(x)+\beta y_{2}(x)$

$\displaystyle\implies\frac{dy}{dx}=\alpha\frac{dy_{1}}{dx}+\beta\frac{dy_{2}}{dx}$ (17.14)

$\displaystyle\implies\frac{d^{2}y}{dx^{2}}=\alpha\frac{d^{2}y_{1}}{dx^{2}}+\beta\frac{d^{2}y_{2}}{dx^{2}}$ (17.15)

and thus that

$a\frac{d^{2}y}{dx^{2}}+b\frac{dy}{dx}+cy=\alpha\left[a\frac{d^{2}y_{1}}{dx^{2}}+b\frac{dy_{1}}{dx}+cy_{1}\right]+\beta\left[a\frac{d^{2}y_{2}}{dx^{2}}+b\frac{dy_{2}}{dx}+cy_{2}\right]=0$

Note that both bracketed terms are zero, because we know that $y_{1}(x)$ and $y_{2}(x)$ each solve the equation $a\frac{d^{2}y}{dx^{2}}+b\frac{dy}{dx}+cy=0$. And thus $y=\alpha y_{1}(x)+\beta y_{2}(x)$ is a solution to Equation 17.6.

If that was messy, it gets worse. (I've had nightmares about drowning in a differential-equation algebra soup. Literal soup made of algebra; it was a very strange dream.)

From Equation 17.13 we know that the derivative of our solution will be

$\frac{dy}{dx}=\alpha\frac{dy_{1}}{dx}+\beta\frac{dy_{2}}{dx}$ (17.20)

As we want to show that this can satisfy any set of initial conditions, where $y(x_{0})=y_{0}$ and $\dot{y}(x_{0})=\dot{y}_{0}$, we can start by writing a pair of simultaneous equations

$\begin{cases}\alpha y_{1}(x_{0})+\beta y_{2}(x_{0})=y_{0}\\ \alpha\dot{y}_{1}(x_{0})+\beta\dot{y}_{2}(x_{0})=\dot{y}_{0}\end{cases}$ (17.21)

we can write these in matrix form and obtain that

$\begin{pmatrix}y_{1}(x_{0})&y_{2}(x_{0})\\ \dot{y}_{1}(x_{0})&\dot{y}_{2}(x_{0})\end{pmatrix}\begin{pmatrix}\alpha\\ \beta\end{pmatrix}=\begin{pmatrix}y_{0}\\ \dot{y}_{0}\end{pmatrix}$ (17.22)

These equations can be solved whenever the determinant of the $2\times 2$ matrix above is not equal to zero, i.e. whenever

$y_{1}(x_{0})\dot{y}_{2}(x_{0})-y_{2}(x_{0})\dot{y}_{1}(x_{0})\neq 0$ (17.23)

When this holds, the two equations have a unique solution. The easiest way to go from here is proof by contradiction (as we have lots of techniques for dealing with equalities ($=$) and not very many for dealing with inequalities involving $\neq$). We can proceed by assuming that two linearly independent solutions are not sufficient to determine the general solution of any second-order differential equation, and write that

$y_{1}(x_{0})\dot{y}_{2}(x_{0})-y_{2}(x_{0})\dot{y}_{1}(x_{0})=0$ (17.24)

We can now manipulate this a little

$y_{1}(x_{0})\dot{y}_{2}(x_{0})=y_{2}(x_{0})\dot{y}_{1}(x_{0})$ (17.25)

Dividing both sides by $y_{2}(x_{0})\dot{y}_{2}(x_{0})$ (assuming these are nonzero) leads to the formula

$\frac{y_{1}(x_{0})}{y_{2}(x_{0})}=\frac{\dot{y}_{1}(x_{0})}{\dot{y}_{2}(x_{0})}$ (17.26)

Here, though, it looks like we have a contradiction. Why? Let's set

$c=\frac{y_{1}(x_{0})}{y_{2}(x_{0})}$ (17.27)

and

$d=\frac{\dot{y}_{1}(x_{0})}{\dot{y}_{2}(x_{0})}$ (17.28)

Then we can write that

$y_{1}(x_{0})=cy_{2}(x_{0})$ (17.29)

and that

$\dot{y}_{1}(x_{0})=d\dot{y}_{2}(x_{0})$ (17.30)

But we specified earlier that $y_{1}(x)$ and $y_{2}(x)$ are linearly independent! From Equation 17.26 we have $c=d$, so Equations 17.29 and 17.30 say that $y_{1}$ and its derivative are both the same multiple of $y_{2}$ and its derivative at $x_{0}$; in other words, $y_{1}$ and $y_{2}$ are not linearly independent after all. Assuming that two linearly independent solutions cannot satisfy every set of initial conditions has led to a contradiction. Hence, two linearly independent solutions are sufficient.
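To see the machinery of Equations 17.21 to 17.23 in action, a short sketch (the solutions $y_{1}=e^{x}$, $y_{2}=e^{2x}$, the point $x_{0}=0$, and the target initial conditions are assumptions for illustration) solving for $\alpha$ and $\beta$ with Cramer's rule:

```python
# Solve the 2x2 system from Equation 17.21 with Cramer's rule.
# Assumed example: y1 = e^x, y2 = e^{2x}, x0 = 0, with target
# initial conditions y(0) = y0 and y'(0) = v0.
y0, v0 = 3.0, 5.0
a11, a12 = 1.0, 1.0   # y1(0),  y2(0)
a21, a22 = 1.0, 2.0   # y1'(0), y2'(0)

det = a11 * a22 - a12 * a21   # the quantity in Equation 17.23
assert det != 0               # solvable iff the determinant is nonzero
alpha = (y0 * a22 - a12 * v0) / det
beta = (a11 * v0 - y0 * a21) / det
print(alpha, beta)  # 1.0 2.0

# Sanity check: y(x) = alpha*e^x + beta*e^{2x} meets both conditions.
assert abs(alpha + beta - y0) < 1e-12       # y(0)  = y0
assert abs(alpha + 2 * beta - v0) < 1e-12   # y'(0) = v0
```

Because $e^{x}$ and $e^{2x}$ are linearly independent, the determinant here is nonzero, so any $(y_{0},\dot{y}_{0})$ pair can be matched, exactly as the proof above claims.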