To this point you have found solutions to equations almost exclusively by algebraic manipulation. This is possible only for the artificially simple equations of problem sets and tests. In the “real world” it is very common to encounter equations that cannot be solved by algebraic manipulation. For example, you found, by completing a square, that the solutions to the quadratic equation \(ax^2+bx+c=0\) are \(x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}\text{.}\) But it is known that there simply does not exist a corresponding formula for the roots of a general polynomial of degree five or more. Fortunately, encountering such an equation is not the end of the world, because usually one does not need to know the solutions exactly. One only needs to know them to within some specified degree of accuracy. For example, one rarely needs to know \(\pi\) to more than a few decimal places. There is a whole subject, called numerical analysis, that concerns using algorithms to solve equations (and perform other tasks) approximately, to any desired degree of accuracy.
We have already had, in Examples 1.6.14 and 1.6.15, and the lead up to them, a really quick introduction to the bisection method, which is a crude, but effective, algorithm for finding approximate solutions to equations of the form \(f(x)=0\text{.}\) We shall shortly use a little calculus to derive a very efficient algorithm for finding approximate solutions to such equations. But first here is a simple example which provides a review of some of the basic ideas of root finding and the bisection method.
Example C.0.1. Bisection method.
Suppose that we are given some function \(f(x)\) and we have to find solutions to the equation \(f(x)=0\text{.}\) To be concrete, suppose that \(f(x) = 8x^3+12x^2+6x-15\text{.}\) How do we go about solving \(f(x)=0\text{?}\) To get a rough idea of the lay of the land, sketch the graph of \(f(x)\text{.}\) First observe that
when \(x\) is very large and negative, \(f(x)\) is very large and negative
when \(x\) is very large and positive, \(f(x)\) is very large and positive
when \(x=0\text{,}\)\(f(x) =f(0) = -15\lt 0\)
when \(x=1\text{,}\)\(f(x) =f(1) = 11\gt 0\)
\(f'(x) = 24x^2+24x+6 = 24\big(x^2+x+\frac{1}{4}\big) = 24\big(x+\frac{1}{2}\big)^2\ge 0\) for all \(x\text{.}\) So \(f(x)\) increases monotonically with \(x\text{.}\) The graph has a tangent of slope \(0\) at \(x=-\frac{1}{2}\) and tangents of strictly positive slope everywhere else.
This tells us the rough shape of the graph of \(f(x)\text{:}\) it rises steadily from lower left to upper right, flattening momentarily at \(x=-\frac{1}{2}\text{.}\)
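For a quick numerical sanity check of these observations, one can evaluate \(f\) and \(f'\) at a few sample points. Here is a minimal Python sketch (the helper names `f` and `fprime` are our own, not from the text):

```python
# Evaluate f(x) = 8x^3 + 12x^2 + 6x - 15 and its derivative at sample
# points to confirm the sign pattern and monotonicity observed above.

def f(x):
    return 8 * x**3 + 12 * x**2 + 6 * x - 15

def fprime(x):
    return 24 * x**2 + 24 * x + 6  # = 24*(x + 1/2)**2 >= 0

for x in (-10.0, -0.5, 0.0, 1.0, 10.0):
    print(f"f({x}) = {f(x)},  f'({x}) = {fprime(x)}")
```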
Since \(f(x)\) strictly increases as \(x\) increases, \(f(x)\) can take the value zero for at most one value of \(x\text{.}\)
(By “\(f(x)\) is strictly increasing” we mean that \(f(a)\lt f(b)\) whenever \(a\lt b\text{.}\) As \(f'(x)\gt 0\) for all \(x\ne-\frac{1}{2}\text{,}\) \(f(x)\) is strictly increasing even as \(x\) passes through \(-\frac{1}{2}\text{.}\) For example, for any \(x\gt -\frac{1}{2}\text{,}\) the mean value theorem (Theorem 2.13.5) tells us that there is a \(c\) strictly between \(-\frac{1}{2}\) and \(x\) such that \(f(x)-f\big(-\frac{1}{2}\big) = f'(c)\big(x+\frac{1}{2}\big)\gt 0\text{.}\))
Since \(f(0)\lt 0\) and \(f(1)\gt 0\) and \(f\) is continuous, \(f(x)\) must pass through \(0\) as \(x\) travels from \(x=0\) to \(x=1\text{,}\) by Theorem 1.6.12 (the intermediate value theorem). So \(f(x)\) takes the value zero for some \(x\) between \(0\) and \(1\text{.}\) We will often write this as “the root is \(x=0.5\pm 0.5\)” to indicate the uncertainty.
To get closer to the root, we evaluate \(f(x)\) halfway between \(0\) and \(1\text{.}\)
Since \(f\big(\frac{1}{2}\big)=-8\lt 0\) and \(f(1)\gt 0\) and \(f\) is continuous, \(f(x)\) must take the value zero for some \(x\) between \(\frac{1}{2}\) and \(1\text{.}\) The root is \(0.75\pm 0.25\text{.}\)
To get still closer to the root, we evaluate \(f(x)\) halfway between \(\frac{1}{2}\) and \(1\text{.}\)
Since \(f\big(\frac{3}{4}\big)=-\frac{3}{8}\lt 0\) and \(f(1)\gt 0\) and \(f\) is continuous, \(f(x)\) must take the value zero for some \(x\) between \(\frac{3}{4}\) and \(1\text{.}\) The root is \(0.875\pm 0.125\text{.}\)
And so on.
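These steps are mechanical enough to hand to a computer. The following minimal Python sketch (the variable names are our own) reproduces them; each pass halves the interval known to contain the root:

```python
def f(x):
    return 8 * x**3 + 12 * x**2 + 6 * x - 15

a, b = 0.0, 1.0              # f(a) < 0 < f(b), so a root lies in [a, b]
for _ in range(3):
    c = (a + b) / 2          # midpoint of the current interval
    if f(c) < 0:             # same sign as f(a): root lies in [c, b]
        a = c
    else:                    # opposite sign: root lies in [a, c]
        b = c
    print(f"root = {(a + b) / 2} +/- {(b - a) / 2}")
```

The first two passes print “root = 0.75 +/- 0.25” and “root = 0.875 +/- 0.125”, matching the steps worked out above.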
The root finding strategy used in Example C.0.1 is called the bisection method. The bisection method will home in on a root of the function \(f(x)\) whenever
\(f(x)\) is continuous (\(f(x)\) need not have a derivative) and
you can find two numbers \(a_1\lt b_1\) with \(f(a_1)\) and \(f(b_1)\) being of opposite sign.
Denote by \(I_1\) the interval \([a_1,b_1]=\big\{x\ \big|\ a_1\le x\le b_1\big\}\text{.}\) Once you have found the interval \(I_1\text{,}\) the bisection method generates a sequence \(I_1\text{,}\)\(I_2\text{,}\)\(I_3\text{,}\)\(\cdots\) of intervals by the following rule.
Equation C.0.2. (bisection method).
Denote by \(c_n=\frac{a_n+b_n}{2}\) the midpoint of the interval \(I_n=[a_n,b_n]\text{.}\) If \(f(c_n)\) has the same sign as \(f(a_n)\text{,}\) then the next interval is \(I_{n+1}=[a_{n+1},b_{n+1}]\) with \(a_{n+1}=c_n\) and \(b_{n+1}=b_n\text{;}\) otherwise \(a_{n+1}=a_n\) and \(b_{n+1}=c_n\text{.}\)
This rule was chosen so that \(f(a_n)\) and \(f(b_n)\) have opposite sign for every \(n\text{.}\) Since \(f(x)\) is continuous, \(f(x)\) has a zero in each interval \(I_n\text{.}\) Since each interval is half the length of its predecessor, each step reduces the error bars by a factor of \(2\text{.}\) That isn’t too bad, but we can come up with something that is much more efficient. We just need a little calculus.
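For comparison with what follows, here is a minimal Python sketch of the full procedure (the function name `bisect` and its arguments are our own choices, not from the text); running it shows what the factor-of-\(2\) convergence costs in steps:

```python
def bisect(f, a, b, n):
    """Apply n steps of the bisection rule of Equation C.0.2.

    Assumes f is continuous on [a, b] and f(a), f(b) have opposite signs.
    Returns the final interval (a_n, b_n) bracketing a root.
    """
    assert f(a) * f(b) < 0, "f(a) and f(b) must have opposite signs"
    for _ in range(n):
        c = (a + b) / 2
        if f(c) == 0:            # landed exactly on a root; done
            return c, c
        if f(a) * f(c) > 0:      # f(c_n) has the same sign as f(a_n)
            a = c                # so the root lies in [c_n, b_n]
        else:
            b = c                # otherwise it lies in [a_n, c_n]
    return a, b

# Pinning the root of Example C.0.1 down to about 1e-6 takes 20 steps.
a, b = bisect(lambda x: 8 * x**3 + 12 * x**2 + 6 * x - 15, 0.0, 1.0, 20)
print(f"root = {(a + b) / 2} +/- {(b - a) / 2}")
```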