Calculators, Power Series and Chebyshev Polynomials

Originating author:  Graeme Cohen
 
  
 
Of all the familiar functions, such as trigonometric, exponential and logarithmic functions, surely the simplest to evaluate are polynomial functions. The purpose of this article is to introduce the concept of a power series, which can be thought of in the first place as a polynomial function of infinite degree. In particular, we will deduce a series for <math>\,\sin\,x</math> and will see how to improve on the most straightforward way of approximating its values, namely using the polynomials obtained by truncating the power series. The improvement will involve [http://en.wikipedia.org/wiki/Chebyshev_polynomials Chebyshev polynomials], which are used in many ways for a similar purpose and in many other applications as well. When a calculator gives values of trigonometric, exponential or logarithmic functions, it does so by evaluating polynomial functions that are sufficiently good approximations. (For trigonometric functions, the [http://en.wikipedia.org/wiki/Cordic Cordic] algorithm is in fact often the preferred method of evaluation---the subject of another article here, perhaps.)
 
 
In the spirit of Felix Klein, there will be some reliance on a graphical approach. Other than that, we need only some basic trigonometry and calculus.
 
 
== Manipulations with geometric series ==
 
The ''geometric series'' <math>1+x+x^2+x^3+\cdots</math> is the simplest power series. The sum of the series exists when <math>|x|<1\,</math>. In fact,
 
 
<math>
 
1+x+x^2+x^3+\cdots=\frac 1{1-x}, \quad \mathrm{when}\ |x|<1. \qquad\qquad (1)
 
</math>
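The identity (1) is easy to check numerically. The following short Python sketch (an illustration, not part of the original article) compares a partial sum of the geometric series with <math>1/(1-x)</math>:

```python
# Sketch: partial sums of the geometric series approach 1/(1 - x)
# when |x| < 1, as stated in (1).

def geometric_partial_sum(x, n):
    """Sum of the first n terms: 1 + x + x^2 + ... + x^(n-1)."""
    return sum(x**k for k in range(n))

x = 0.5
print(geometric_partial_sum(x, 30), 1 / (1 - x))
```

With 30 terms at <math>x=0.5\,</math>, the partial sum already agrees with the exact value 2 to better than eight decimal places.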
 
 
The general form of a ''power series'' is
 
 
<math>
 
a_0+a_1x+a_2x^2+a_3x^3+\cdots,
 
</math>
 
 
so the geometric series above is a power series in which all the coefficients <math>a_0,\,a_1,\,a_2,\,\ldots</math> are equal to 1. In
 
this case, since the series converges to <math>1/(1-x)\,</math> when <math>|x|<1\,</math>, we say that the function <math>f\,</math>, where
 
 
<math>
 
f(x)=\frac1{1-x}, \quad |x|<1,
 
</math>
 
 
has the ''series expansion'' <math>1+x+x^2+x^3+\cdots</math>, or that <math>f\,</math> is represented by this series. We are interested initially in showing that some other functions can also be represented by power series.
 
 
Many such functions may be obtained directly from the result in (1). For example, by replacing <math>x\,</math> by <math>-x^2\,</math>, we immediately have a
 
series representation for the function <math>1/(1+x^2)\,</math>:
 
 
<math>
 
1-x^2+x^4-x^6+\cdots=\frac 1{1+x^2}, \quad \mathrm{when}\ |x|<1. \qquad\qquad (2)
 
</math>
 
 
We can differentiate both sides of (1) to give a series representation of the function <math>1/(1-x)^2\,</math>:
 
 
<math>
 
1+2x+3x^2+4x^3+\cdots=\frac 1{(1-x)^2}, \quad \mathrm{when}\ |x|<1.
 
</math>
 
 
We can also integrate both sides of (1). Multiply through by <math>-1</math> (for convenience), then write <math>t\,</math> for <math>x\,</math> and integrate with respect
 
to <math>t\,</math> from 0 to <math>x\,</math>, where <math>|x|<1\,</math>:
 
 
<math>
 
-\int_0^x (1+t+t^2+t^3+\cdots)\,dt=-\int_0^x\frac {dt}{1-t},
 
</math>
 
 
so
 
 
<math>
 
-x-\frac{x^2}2-\frac{x^3}3-\frac{x^4}4-\cdots=\log\,(1-x), \quad \mathrm{when}\ |x|<1.
 
</math>
 
 
So this gives a series representation of the function <math>\log\,(1-x)</math> for <math>|x|<1\,</math>. In the same way, from (2),
 
 
<math>
 
x-\frac{x^3}3+\frac{x^5}5-\frac{x^7}7+\cdots=\tan^{-1}x, \quad \mathrm{when}\ |x|<1. \qquad\qquad (3)
 
</math>
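Both integrated series can be checked numerically. Here is a short Python sketch (an illustration, not part of the original article) comparing partial sums of the series for <math>\log\,(1-x)</math> and <math>\tan^{-1}x</math> with the library values:

```python
import math

# Sketch: partial sums of the two series obtained by integrating
# (1) and (2), checked against math.log and math.atan.

def log_series(x, terms):
    """Partial sum of -x - x^2/2 - x^3/3 - ... ."""
    return -sum(x**k / k for k in range(1, terms + 1))

def arctan_series(x, terms):
    """Partial sum of x - x^3/3 + x^5/5 - ... from (3)."""
    return sum((-1)**n * x**(2*n + 1) / (2*n + 1) for n in range(terms))

x = 0.5
print(log_series(x, 40), math.log(1 - x))
print(arctan_series(x, 40), math.atan(x))
```

Note that both series converge quickly for small <math>|x|\,</math> but ever more slowly as <math>|x|\,</math> approaches 1, a point that will matter at the end of the article.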
 
 
Much of what we have done here (and will do later) requires justification, but we can leave that to the textbooks.
 
 
==The power series for the sine function==
 
 
We will show next how to find a power series representation for <math>\sin\,x</math>. In general terms, we can write
 
 
<math>
 
\sin x=a_0+a_1x+a_2x^2+a_3x^3+\cdots. \qquad\qquad (4)
 
</math>
 
 
Put <math>x=0\,</math>, and immediately we have <math>a_0=0\,</math>. Differentiate both sides of (4):
 
 
<math>
 
\cos x=a_1+2a_2x+3a_3x^2+4a_4x^3+\cdots.
 
</math>
 
 
Again put <math>x=0\,</math>, giving <math>a_1=1\,</math>. Keep differentiating and putting <math>x=0\,</math>:
 
 
<math>
 
-\sin\,x = 2a_2+3\cdot2a_3x+4\cdot3a_4x^2+5\cdot4a_5x^3+\cdots, \qquad\mathrm{so}\ a_2=0,
 
</math>
 
 
<math>
 
-\cos\,x = 3\cdot2a_3+4\cdot3\cdot2a_4x+5\cdot4\cdot3a_5x^2+6\cdot5\cdot4a_6x^3+\cdots, \qquad\mathrm{so}\ a_3=\frac{-1}{3\cdot2}=\frac{-1}{3!},
 
</math>
 
 
<math>
 
\sin\,x = 4\cdot3\cdot2a_4+5\cdot4\cdot3\cdot2a_5x+6\cdot5\cdot4\cdot3a_6x^2+\cdots, \qquad\mathrm{so}\ a_4=0,
 
</math>
 
 
<math>
 
\cos\,x = 5\cdot4\cdot3\cdot2a_5+6\cdot5\cdot4\cdot3\cdot2a_6x+7\cdot6\cdot5\cdot4\cdot3a_7x^2+\cdots, \qquad\mathrm{so}\ a_5=\frac1{5\cdot4\cdot3\cdot2}=\frac1{5!}.
 
</math>
 
 
In this way, we can find a formula for all the coefficients <math>a_0,\,a_1,\,a_2,\,\ldots</math>, namely,
 
 
<math>
 
a_{2n}=0, \qquad\qquad a_{2n+1}=\frac{(-1)^n}{(2n+1)!},
 
</math>
 
 
for <math>n=0,\,1,\,2,\,\ldots</math>. (The coefficients of even index and those of odd index are specified separately.) Thus
 
 
<math>
 
\sin\,x=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots.
 
</math>
 
 
This is the power series representation that we were after. From the way we developed it, it is reasonable that the series will represent
<math>\,\sin\,x</math> for values of <math>x\,</math> at and near 0 (say for <math>|x|<1\,</math>, as for all the earlier examples), so it is surprising that the series can in fact be shown to represent <math>\sin\,x</math> for ''all'' values of <math>x\,</math>. Partial sums of the series, obtained by stopping after some finite number of terms, then give polynomial functions that can be used to find approximate values of the sine function, such as you find in tables of trigonometric functions or as output on a calculator.
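How quickly do these partial sums converge? A short Python sketch (an illustration, not part of the original article) compares them with the library sine:

```python
import math

# Sketch: partial sums of x - x^3/3! + x^5/5! - ... versus math.sin.

def sin_series(x, terms):
    """Partial sum of the sine series with the given number of terms."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

for x in (0.5, 1.0, 2.0):
    print(x, abs(sin_series(x, 10) - math.sin(x)))
```

With 10 terms the error is negligible throughout <math>[-2,2]\,</math>; the interesting question below is how well we can do with only a few terms.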
 
 
==Approximation by Chebyshev Polynomials==
 
 
For example, write
 
 
<math>
 
S_3(x)=x-\frac{x^3}{3!} \quad \mathrm{and} \quad S_5(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}.
 
</math>
 
 
The cubic polynomial <math>S_3(x)\,</math> and the quintic polynomial <math>S_5(x)\,</math> are plotted below, along with <math>\,\sin\,x</math>, all for <math>-2\le x\le 2</math>. It can be seen that these are both very good approximations for <math>-1\le x\le1</math>, say, but not so good near <math>x=\pm2</math>. The quintic
 
<math>S_5\,</math> is much better than the cubic <math>S_3\,</math> in these outer regions, as might be expected, but can we do better than <math>S_3\,</math> with some other cubic polynomial function?
 
 
When <math>x=1\,</math>, for example, the error in using the cubic is <math>\sin1-(1-\frac16)\approx0.0081</math>. We will construct a cubic polynomial function <math>F\,</math> whose values differ from those of <math>\,\sin\,x</math> by less than 0.001 for <math>|x|\le1</math>. The curve <math>y=F(x)\,</math> has been included in the graph below for <math>|x|\le2</math>, and it is clear from the graph that this curve is closer to that of <math>y=\sin\,x</math> than <math>y=S_3(x)\,</math> is, even near <math>x=\pm2</math>.
 
 
[[Bild:Chebysev fig.jpg | 800px]]
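The error figures quoted above are easy to reproduce. The following Python sketch (an illustration, not part of the original article) evaluates the truncation errors of the cubic and the quintic at <math>x=1\,</math>:

```python
import math

# Sketch: truncation errors of the cubic and quintic partial sums at x = 1.

def S3(x):
    return x - x**3 / 6

def S5(x):
    return x - x**3 / 6 + x**5 / 120

print(math.sin(1) - S3(1))        # about 0.0081, as stated in the text
print(abs(math.sin(1) - S5(1)))   # below 0.0002
```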
 
 
We will use ''Chebyshev polynomials'' to construct <math>F\,</math>. These are used extensively in approximation problems, as we are doing here. They are the functions <math>T_k\,</math> given by
 
 
<math>
 
T_k(x)=\cos\,k\theta, \quad\mathrm{where}\ x=\cos\,\theta,
 
</math>
 
 
for integer <math>k\ge0</math> (or you can write <math>T_k(x)=\cos\,(k\cos^{-1}\,x)</math>). By properties of the cosine, they all have domain <math>[-1,1]\,</math> and their range is also in <math>[-1,1]\,</math>. Putting <math>k=0\,</math> and <math>k=1\,</math> gives
 
 
<math>
 
T_0(x)=1, \qquad T_1(x)=x,
 
</math>
 
 
but it is not immediately apparent that the <math>T_k\,</math> are indeed polynomials for <math>k\ge2</math>. To see that this is the case, recall that
 
 
<math>
 
\cos\,(k+1)\theta=\cos\,(k\theta+\theta)=\cos\,k\theta\,\cos\,\theta-\sin\,k\theta\,\sin\,\theta,
 
</math>
 
 
<math>
 
\cos\,(k-1)\theta=\cos\,(k\theta-\theta)=\cos\,k\theta\,\cos\,\theta+\sin\,k\theta\,\sin\,\theta,
 
</math>
 
 
from which, after adding these,
 
 
<math>
 
\cos\,(k+1)\theta=2\cos\,\theta\,\cos\,k\theta-\cos\,(k-1)\theta.
 
</math>
 
 
Therefore,
 
 
<math>
 
T_{k+1}(x)=2xT_k(x)-T_{k-1}(x), \quad k\ge1.
 
</math>
 
 
Now put <math>k=1,\,2,\,3,\,\ldots</math> and obtain
 
 
<math>
 
T_2(x) = 2x^2-1, \quad
 
</math>
 
 
<math>
 
T_3(x) = 4x^3-3x, \qquad\qquad\qquad\quad\ (5)
 
</math>
 
 
<math>
 
T_4(x) = 8x^4-8x^2+1, \quad
 
</math>
 
 
<math>
 
T_5(x) = 16x^5-20x^3+5x, \qquad\qquad (6)
 
</math>
 
 
and so on, clearly obtaining a polynomial each time. As polynomials, we no longer need to think of their domains as restricted to <math>[-1,1]\,</math>.
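The recurrence also gives a convenient way to evaluate <math>T_k(x)\,</math> numerically. The following Python sketch (an illustration, not part of the original article) computes <math>T_k(x)\,</math> by the recurrence and checks it against the defining property <math>T_k(\cos\,\theta)=\cos\,k\theta</math> and against the explicit formula (6):

```python
import math

# Sketch: evaluate T_k(x) via T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x),
# starting from T_0(x) = 1 and T_1(x) = x.

def T(k, x):
    if k == 0:
        return 1.0
    prev, cur = 1.0, x
    for _ in range(k - 1):
        prev, cur = cur, 2 * x * cur - prev
    return cur

t = 0.7
print(T(5, math.cos(t)), math.cos(5 * t))            # defining property
print(T(5, 0.3), 16*0.3**5 - 20*0.3**3 + 5*0.3)      # explicit formula (6)
```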
 
 
Returning to our problem of approximating <math>\,\sin\,x</math> for <math>|x|\le1</math> with error less than 0.001, we notice first that the quintic
 
<math>S_5\,</math> has that property. In fact,
 
 
<math>
 
\left|\sin1-S_5(1)\right|=\left|\,\sin1-\left(1-\frac16+\frac1{120}\right)\right|
 
  <0.0002, \qquad\qquad (7)
 
</math>
 
 
and the theory of alternating infinite series shows that <math>|\sin\,x-S_5(x)|<0.0002</math> throughout our interval, as certainly seems
 
reasonable from the figure. We next express <math>S_5(x)\,</math> in terms of Chebyshev polynomials. Using (5) and (6), we have
 
 
<math>
 
x=T_1(x), \quad
 
</math>
 
 
<math>
 
x^3=\frac14(T_3(x)+3x)=\frac14(T_3(x)+3T_1(x)),
 
</math>
 
 
<math>
 
x^5=\frac1{16}(T_5(x)+20x^3-5x)=\frac1{16}(T_5(x)+5T_3(x)+10T_1(x)),
 
</math>
 
 
so
 
 
<math>
 
S_5(x)=x-\frac{x^3}{6}+\frac{x^5}{120} =T_1(x)-\frac16\cdot\frac14(3T_1(x)+T_3(x))+\frac1{120}\cdot\frac1{16}(10T_1(x)+5T_3(x)+T_5(x)) =\frac{169}{192}T_1(x)-\frac5{128}T_3(x)+\frac1{1920}T_5(x).
 
</math>
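This change of basis can be confirmed numerically; the following Python sketch (an illustration, not part of the original article) checks the Chebyshev form of <math>S_5\,</math> against the power form at an arbitrary point:

```python
# Sketch: the Chebyshev-basis expansion of S5 agrees with its power form.

def S5(x): return x - x**3/6 + x**5/120
def T1(x): return x
def T3(x): return 4*x**3 - 3*x
def T5(x): return 16*x**5 - 20*x**3 + 5*x

x = 0.73
cheb = 169/192*T1(x) - 5/128*T3(x) + 1/1920*T5(x)
print(abs(cheb - S5(x)))   # essentially zero, up to rounding
```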
 
 
Since <math>|T_5(x)|\le1</math> when <math>|x|\le1</math>, omitting the term <math>\frac1{1920}T_5(x)</math> will admit a further error of at most
 
<math>\frac1{1920}<0.0006</math> which, using (7), gives a total error less than 0.0008, still within our bound of 0.001. Now,
 
 
<math>
 
\frac{169}{192}T_1(x)-\frac5{128}T_3(x)
 
  =\frac{169}{192}x-\frac5{128}(4x^3-3x)=\frac{383}{384}x-\frac5{32}x^3,
 
</math>
 
 
and the cubic function we end with is the function we called <math>F\,</math>.
 
 
We have thus shown, partly through a graphical justification, that values of the cubic function <math>F\,</math>, where
 
 
<math>
 
F(x)=\frac{383}{384}x-\frac5{32}x^3,
 
</math>
 
 
are closer to values of <math>\,\sin\,x</math>, for <math>|x|\le2</math>, than are values of the cubic function <math>x-\frac16x^3</math>, which is obtained from the power
 
series representation of <math>\ \sin\,x</math>.
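The comparison can be made quantitative with a short Python sketch (an illustration, not part of the original article) that measures the maximum errors of both cubics on a grid over <math>[-1,1]\,</math>:

```python
import math

# Sketch: maximum errors on [-1, 1] of the economized cubic F
# and of the truncated-series cubic S3.

def F(x):  return 383/384*x - 5/32*x**3
def S3(x): return x - x**3/6

xs = [i / 1000 for i in range(-1000, 1001)]
err_F  = max(abs(F(x)  - math.sin(x)) for x in xs)
err_S3 = max(abs(S3(x) - math.sin(x)) for x in xs)
print(err_F, err_S3)   # err_F is below 0.001; err_S3 is about 0.008
```

So at no extra evaluation cost (both are cubics), <math>F\,</math> is roughly ten times more accurate than <math>S_3\,</math> on the interval.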
 
 
==Conclusion: The Point of the Story==
 
 
The effectiveness of Chebyshev polynomials for our purpose here arises in part from a property of the cosine function. It implies that for all integers <math>k\ge0\,</math> and for <math>|x|\le1\,</math>, we have <math>|T_k(x)|\le 1</math>. Pafnuty Lvovich Chebyshev, whose surname is often given alternatively as Tchebycheff, was Russian; he introduced these polynomials in a paper in 1854.  They are denoted by <math>T_k\,</math> because of ''Tchebycheff''.
 
 
The procedure outlined above is known as ''economization'' of power series and is studied in the branch of mathematics known as [http://en.wikipedia.org/wiki/numerical_analysis numerical analysis].
 
Economization is not always necessary for evaluating the sine function. Because <math>\sin\,x</math> is approximately <math>x\,</math> when <math>|x|\,</math> is small, this is often good enough! See Chuck Allison, [http://uvu.freshsources.com/page1/page8/page16/files/sine.pdf Machine Computation of the Sine Function], for more on this. We mentioned at the beginning that the Cordic algorithm is often better still, but for evaluating other functions on a calculator, particularly <math>\tan^{-1}\,x</math>, which has the ''slowly converging'' power series expansion found in (3), economization is considered to be essential.
 
 
Perhaps the point of the story is that the obvious can very often be improved upon.
 

Version of 26 June 2010, 02:51