17.1 Second-Order Differential Equations
17.1.1 Introduction
The easiest second-order differential equations to solve are those which we can integrate directly, for example
$\displaystyle\frac{d^{2}y}{dx^{2}}=\cos(x)$  (17.1)
When we integrate this once, we get that
$\displaystyle\int\frac{d^{2}y}{dx^{2}}dx=\int\cos(x)dx$  (17.2)  
$\displaystyle\frac{dy}{dx}=\sin(x)+c$  (17.3) 
and then integrating again, we get that
$\displaystyle\int\frac{dy}{dx}dx=\int\left(\sin(x)+c\right)dx$  (17.4)  
$\displaystyle y(x)=-\cos(x)+cx+d$  (17.5) 
which is the general solution to this particular second-order differential equation. With first-order differential equations we have only one constant, and we can determine the value of the constant given a single point on the curve. ^{1} 1 In more formal notation, for the differential equation $\frac{dy}{dx}=f(x)$ with solution $y(x)=F(x)+c$, we can determine the value of $c$ given a value of $x$, $x_{0}$, and the corresponding value of $y$, $y_{0}$, at that point.
For a second-order differential equation, however, we have two constants, so we definitely can’t solve the equation given only one point. We need either two points on the curve, or one point on the original curve and one point on its first derivative.
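As a quick sanity check, here’s a short Python snippet which recovers both constants for the example above, where integrating $\frac{d^{2}y}{dx^{2}}=\cos(x)$ twice gives $y(x)=-\cos(x)+cx+d$. The initial conditions $y(0)=2$ and $y'(0)=3$ are made up purely for illustration.

```python
import math

# Example ODE: y'' = cos(x), so y'(x) = sin(x) + c and
# y(x) = -cos(x) + c*x + d.
# Hypothetical initial conditions (made up for illustration):
x0, y0, v0 = 0.0, 2.0, 3.0   # y(0) = 2, y'(0) = 3

# y'(x0) = sin(x0) + c  =>  c = y'(x0) - sin(x0)
c = v0 - math.sin(x0)
# y(x0) = -cos(x0) + c*x0 + d  =>  d = y(x0) + cos(x0) - c*x0
d = y0 + math.cos(x0) - c * x0

def y(x):
    return -math.cos(x) + c * x + d

def dy(x):
    return math.sin(x) + c

print(c, d)            # 3.0 3.0
print(y(x0), dy(x0))   # 2.0 3.0 -- the initial conditions, recovered
```

Note that one condition on $y$ and one on $y'$ pin down both constants, exactly as the text says.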
17.1.2 Homogeneous second-order ODEs
These are a bit icky, because they’re not solvable in general. Fortunately a lot of them are solvable.
Homogeneous (i.e. the right-hand side is zero, so every term involves $y$ or one of its derivatives) second-order, linear differential equations can be solved without too much difficulty. We can reduce an equation of the form
$\displaystyle a\frac{d^{2}y}{dx^{2}}+b\frac{dy}{dx}+cy=0$  (17.6)
to a quadratic by setting $y=e^{\lambda x}\implies\frac{dy}{dx}=\lambda e^{\lambda x}\implies\frac{d^{2}y}{dx^{2}}=\lambda^{2}e^{\lambda x}$.
From here,
$\displaystyle a\lambda^{2}e^{\lambda x}+b\lambda e^{\lambda x}+ce^{\lambda x}=0$  
$\displaystyle a\lambda^{2}+b\lambda+c=0\text{, which is fine as $e^{\lambda x}>0$}$  
$\displaystyle\lambda=\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}$ 
Here there are a number of possibilities for the value of the discriminant. ^{2} 2 If you’ve no idea what this is, review the earlier section on quadratics.

•
$b^{2}-4ac>0$, which is the "straightforward" case

•
$b^{2}-4ac=0$, in which case there’s only one value of $\lambda$

•
$b^{2}-4ac<0$, in which case we can use complex numbers and trigonometry.
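The substitution above turns the differential equation into the auxiliary quadratic, so a few lines of Python can find and classify $\lambda$ (using `cmath` so the $\Delta<0$ case works too). The three example equations here are our own choices, one for each sign of the discriminant.

```python
import cmath

def auxiliary_roots(a, b, c):
    """Roots of the auxiliary quadratic a*lambda^2 + b*lambda + c = 0."""
    sqrt_disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + sqrt_disc) / (2 * a), (-b - sqrt_disc) / (2 * a)

# Delta > 0: y'' - 3y' + 2y = 0 has roots lambda = 2 and lambda = 1.
print(auxiliary_roots(1, -3, 2))   # ((2+0j), (1+0j))
# Delta = 0: y'' - 2y' + y = 0 has the single repeated root lambda = 1.
print(auxiliary_roots(1, -2, 1))   # ((1+0j), (1+0j))
# Delta < 0: y'' + y = 0 has the complex roots lambda = +-i.
print(auxiliary_roots(1, 0, 1))    # (1j, -1j)
```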
In the case where $\Delta>0$ ^{3} 3 Note that $\Delta$ means "the discriminant". we have two possible solutions to the differential equation,
$\displaystyle y(x)=Ae^{\lambda_{1}x}+Be^{\lambda_{2}x}$  (17.7)
In the case where $\Delta=0$ we have only
$\displaystyle y(x)=Ae^{\lambda x}$  (17.8)
And in the case where $\Delta<0$ we have the same case as in Equation 17.7, except that there’s a complex part to the root. ^{4} 4 And, as Euler might point out, $e^{i\theta}=\cos(\theta)+i\sin(\theta)$.
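To see the $\Delta<0$ case in action, take an example of our own: $y''+2y'+5y=0$. The auxiliary equation $\lambda^{2}+2\lambda+5=0$ has roots $-1\pm 2i$, and Euler’s formula turns $e^{(-1+2i)x}$ into the real solution $e^{-x}\cos(2x)$. A finite-difference check in Python confirms it really does solve the equation, up to numerical error:

```python
import math

# Candidate real solution for y'' + 2y' + 5y = 0 (auxiliary roots -1 +- 2i):
def y(x):
    return math.exp(-x) * math.cos(2 * x)

def residual(x, h=1e-4):
    # Central finite differences for y' and y''.
    dy = (y(x + h) - y(x - h)) / (2 * h)
    d2y = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return d2y + 2 * dy + 5 * y(x)

# Should be ~0 at every x, up to finite-difference error.
print(max(abs(residual(0.1 * k)) for k in range(1, 20)))
```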
17.1.3 Why two linearly independent solutions?
^{5} 5 If two vectors (which we can call $v_{1}$ and $v_{2}$) are linearly independent, then the only values of $\alpha_{1}$ and $\alpha_{2}$ which solve the equation $\alpha_{1}v_{1}+\alpha_{2}v_{2}=0$ are $\alpha_{1}=\alpha_{2}=0$. If there were a different combination, then we could write $v_{1}=-\frac{\alpha_{2}}{\alpha_{1}}v_{2}$, and thus they wouldn’t be linearly independent, as one of the vectors is a multiple of the other. This can be generalised to $n$ vectors.

Let’s suppose we have a differential equation involving $y(x)$ and its first and second derivatives. We may want to solve this subject to the initial conditions
$\displaystyle y(x_{0})=y_{0},\quad\dot{y}(x_{0})=\dot{y}_{0}$
In order to be able to solve for every possible initial condition, we need a linear combination of two "linearly independent" solutions, using which we can satisfy any possible initial condition. This means that if we have two solutions $y_{1}(x)$ and $y_{2}(x)$ and neither can be written as a multiple of the other, then the "general solution" (i.e. the one with the constants in it, like $+c$ for first-order differential equations when we integrate them) is of the form
$\displaystyle y(x)=\alpha y_{1}(x)+\beta y_{2}(x)$
The good news is that for homogeneous second-order differential equations we do have two linearly independent solutions! ^{6} 6 Except in the case where $\Delta=0$, but there’s a way to get around that (more on that later). For example, if the roots of the auxiliary equation are $\alpha\pm\beta$, then the two solutions are
$\displaystyle y_{1}(x)=e^{(\alpha+\beta)x}\quad\text{and}\quad y_{2}(x)=e^{(\alpha-\beta)x}$
Proof that we need two linearly independent solutions, and that two linearly independent solutions are sufficient to solve any initial condition (note: no A Level exam board examines this). The reason we need two linearly independent solutions is that if we have two sets of linearly independent initial conditions, we certainly can’t write both of them as linear combinations of a single solution. ^{8} 8 This is a lot like how we can write any vector in a 2D space in terms of two vectors, so long as those vectors are not parallel (same for 3D space, except with three vectors). For example, suppose we have two possible initial conditions, one where
$\displaystyle y(x_{0})=1,\quad\dot{y}(x_{0})=0$
and another where
$\displaystyle y(x_{0})=0,\quad\dot{y}(x_{0})=1$
These are linearly independent initial conditions, and we can’t satisfy them both using multiples of a single solution. Thus, there is at least one case where we need at least two linearly independent solutions.
The next thing to prove is that if we have two solutions ($y_{1}(x)$ and $y_{2}(x)$) which are linearly independent, then for suitable values of $\alpha$ and $\beta$ we can satisfy any initial condition using a solution of the form
$\displaystyle y(x)=\alpha y_{1}(x)+\beta y_{2}(x)$  (17.13)
Does this even solve the differential equation though? Yes! We can prove this with a bunch of algebra (there’s an easier way to do this by introducing some new notation, but that’s for later ^{9} 9 Note: I have yet to write about this new notation)
$\displaystyle y=\alpha y_{1}(x)+\beta y_{2}(x)$  $\displaystyle\implies\frac{dy}{dx}=\alpha\frac{dy_{1}}{dx}+\beta\frac{dy_{2}}{dx}$  (17.14)  
$\displaystyle\implies\frac{d^{2}y}{dx^{2}}=\alpha\frac{d^{2}y_{1}}{dx^{2}}+\beta\frac{d^{2}y_{2}}{dx^{2}}$  (17.15) 
and thus that
$\displaystyle a\left[\alpha\frac{d^{2}y_{1}}{dx^{2}}+\beta\frac{d^{2}y_{2}}{dx^{2}}\right]$  (17.16)  
$\displaystyle\quad+b\left[\alpha\frac{dy_{1}}{dx}+\beta\frac{dy_{2}}{dx}\right]$  (17.17)  
$\displaystyle\quad+c\left[\alpha y_{1}(x)+\beta y_{2}(x)\right]$  $\displaystyle=\alpha\left[a\frac{d^{2}y_{1}}{dx^{2}}+b\frac{dy_{1}}{dx}+cy_{1}\right]$  (17.18)  
$\displaystyle\quad+\beta\left[a\frac{d^{2}y_{2}}{dx^{2}}+b\frac{dy_{2}}{dx}+cy_{2}\right]$  
$\displaystyle=0$  (17.19) 
And thus $y=\alpha y_{1}(x)+\beta y_{2}(x)$ is a solution to Equation 17.6.
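To make the algebra above concrete, here’s a small Python check on an example of our own choosing, $y''-3y'+2y=0$ (so $y_{1}=e^{x}$ and $y_{2}=e^{2x}$): plugging $y=\alpha y_{1}+\beta y_{2}$ into the equation gives zero for arbitrary $\alpha$ and $\beta$, exactly as the proof says.

```python
import math

a, b, c = 1.0, -3.0, 2.0      # y'' - 3y' + 2y = 0, auxiliary roots 1 and 2
alpha, beta = 1.7, -0.4       # arbitrary constants

def residual(x):
    y1, y2 = math.exp(x), math.exp(2 * x)
    # Exact derivatives: (e^x)' = e^x, (e^(2x))' = 2e^(2x), and so on.
    y = alpha * y1 + beta * y2
    dy = alpha * y1 + 2 * beta * y2
    d2y = alpha * y1 + 4 * beta * y2
    return a * d2y + b * dy + c * y

# Zero (up to floating-point rounding) at every point we try.
print(max(abs(residual(0.3 * k)) for k in range(10)))
```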
If that was messy, it gets worse. ^{11} 11 I’ve had nightmares about drowning in a differential equation algebra soup. Literal soup made of algebra; it was a very strange dream.
From Equation 17.13 we know that the derivative of our solution will be
$\displaystyle\frac{dy}{dx}=\alpha\frac{dy_{1}}{dx}+\beta\frac{dy_{2}}{dx}$
As we want to show that this can satisfy any set of initial conditions, where $y(x_{0})=y_{0}$ and $\dot{y}(x_{0})=\dot{y}_{0}$, we can start by writing a set of simultaneous equations
$\displaystyle\alpha y_{1}(x_{0})+\beta y_{2}(x_{0})=y_{0}$  
$\displaystyle\alpha\dot{y}_{1}(x_{0})+\beta\dot{y}_{2}(x_{0})=\dot{y}_{0}$
We can write these in matrix form and obtain
$\displaystyle\begin{pmatrix}y_{1}(x_{0})&y_{2}(x_{0})\\\dot{y}_{1}(x_{0})&\dot{y}_{2}(x_{0})\end{pmatrix}\begin{pmatrix}\alpha\\\beta\end{pmatrix}=\begin{pmatrix}y_{0}\\\dot{y}_{0}\end{pmatrix}$
These equations can be solved whenever the determinant of the $2\times 2$ matrix above is not equal to zero, i.e. whenever
$\displaystyle y_{1}(x_{0})\dot{y}_{2}(x_{0})-y_{2}(x_{0})\dot{y}_{1}(x_{0})\neq 0$
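In Python, solving that $2\times 2$ system is a couple of lines of Cramer’s rule. The example below is our own: $y''+y=0$ with $y_{1}=\cos(x)$, $y_{2}=\sin(x)$, and made-up initial conditions $y(0)=2$, $\dot{y}(0)=-1$.

```python
import math

x0, y0, v0 = 0.0, 2.0, -1.0        # made-up initial conditions

# Matrix entries: y1, y2 and their derivatives at x0.
m11, m12 = math.cos(x0), math.sin(x0)      # y1(x0), y2(x0)
m21, m22 = -math.sin(x0), math.cos(x0)     # y1'(x0), y2'(x0)

det = m11 * m22 - m12 * m21  # the determinant from the text; nonzero here

# Cramer's rule for alpha and beta.
alpha = (y0 * m22 - m12 * v0) / det
beta = (m11 * v0 - y0 * m21) / det
print(alpha, beta)   # 2.0 -1.0, i.e. y(x) = 2cos(x) - sin(x)
```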
Then the two equations have a unique solution. The easiest way to proceed from here is to prove this by contradiction (as we have lots of techniques for dealing with equalities ($=$) and not very many for dealing with inequalities involving $\neq$). We can proceed by assuming that two linearly independent solutions are not sufficient to determine the general solution of any second-order differential equation, and write that
$\displaystyle y_{1}(x_{0})\dot{y}_{2}(x_{0})-y_{2}(x_{0})\dot{y}_{1}(x_{0})=0$
We can now manipulate this a little:
$\displaystyle y_{1}(x_{0})\dot{y}_{2}(x_{0})=y_{2}(x_{0})\dot{y}_{1}(x_{0})$
Dividing through leads to the formula
$\displaystyle\frac{\dot{y}_{1}(x_{0})}{y_{1}(x_{0})}=\frac{\dot{y}_{2}(x_{0})}{y_{2}(x_{0})}$
Here, though, it looks like we have a contradiction. Why? Let’s set
$\displaystyle\mu=\frac{y_{1}(x_{0})}{y_{2}(x_{0})}$
and
$\displaystyle\nu=\frac{\dot{y}_{1}(x_{0})}{\dot{y}_{2}(x_{0})}$
(rearranging the formula above tells us that $\mu=\nu$).
Then we can write that
$\displaystyle y_{1}(x_{0})=\mu y_{2}(x_{0})$
and that
$\displaystyle\dot{y}_{1}(x_{0})=\mu\dot{y}_{2}(x_{0})$  (17.29)
But we specified earlier that $y_{1}(x)$ and $y_{2}(x)$ are linearly independent! And now we’ve found, in Equation 17.29, that they’re not linearly independent (the pair of values $(y_{1},\dot{y}_{1})$ at $x_{0}$ is just a multiple of $(y_{2},\dot{y}_{2})$, so one solution is a multiple of the other). Thus, assuming that two linearly independent solutions don’t solve every set of initial conditions for a second-order differential equation leads to a contradiction. Hence, two linearly independent solutions are sufficient.