A brief introduction to Taylor series

buraian
8 min read · Aug 13, 2020

The Taylor series is a method of turning ‘nice’ non-polynomial functions into polynomial functions. By nice I mean: trigonometric, exponential, logarithmic. Now, this all probably sounds too good to be true, and you may scratch your head in disbelief and ask yourself, “How could it be possible to do such a thing?” And I say to that: sometimes dreams can come true.

0. A deep relation between derivatives and coefficients of a polynomial

Consider a general polynomial p(x) of degree m; then it must be of the form
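```latex
p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_m x^m
```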

Now, the question I propose is: how do the coefficients relate to the polynomial’s derivatives evaluated at a point? To start, we evaluate our polynomial at x = 0. Then, all terms having any non-zero powers of ‘x’ die off.
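Only the constant term survives:

```latex
p(0) = a_0
```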

Suppose we took a derivative of both sides of our original polynomial,
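```latex
p'(x) = a_1 + 2a_2 x + 3a_3 x^2 + \cdots + m\,a_m x^{m-1}
```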

If we again put x = 0, we get a result similar to what happened when we put x = 0 in the original polynomial,
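```latex
p'(0) = a_1
```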

And we do that again to find the third coefficient, a_2:
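```latex
p''(x) = 2a_2 + \text{H.O.T.}
```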

H.O.T means higher-order terms

Be careful, it is slightly different now because we have a two downstairs,
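```latex
p''(0) = 2a_2 \quad \Longrightarrow \quad a_2 = \frac{p''(0)}{2}
```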

Right, but how would we get the coefficient of the ‘kth’ term? To motivate it, I introduce another induction-based result.

Let us consider,
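```latex
g(x) = x^k
```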

Now, the first derivative of this is,
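```latex
g'(x) = k\,x^{k-1} = \frac{k!}{(k-1)!}\,x^{k-1}
```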

I have used the factorial notation

And the second derivative
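```latex
g''(x) = k(k-1)\,x^{k-2} = \frac{k!}{(k-2)!}\,x^{k-2}
```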

With some inductive thinking, we can say that the ‘jth’ derivative would be given as:
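```latex
g^{(j)}(x) = \frac{k!}{(k-j)!}\,x^{k-j}
```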

(note: The exponent on the left side means jth derivative not g to the jth power)

For the kth coefficient, with our newly found result, we put j=k
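```latex
g^{(k)}(x) = \frac{k!}{(k-k)!}\,x^{0} = k!
```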

Right, so let us bring back our original polynomial p(x),

The kth derivative of both sides,
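```latex
p^{(k)}(x) = k!\,a_k + (\text{terms that still contain } x)
```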

The thing to notice here is that the terms which had an exponent greater than ‘k’ still have an ‘x’ on them, so when we evaluate the kth derivative at x = 0, all of those terms vanish. Hence we get a formula relating the kth coefficient to the kth derivative of the polynomial evaluated at zero.
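```latex
p^{(k)}(0) = k!\,a_k \quad \Longrightarrow \quad a_k = \frac{p^{(k)}(0)}{k!}
```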

Fantastic. So here we can see that, in the simplified form of a polynomial, the coefficient of each monomial term is the kth derivative of the polynomial evaluated at zero, divided by k factorial. Now let’s rewrite the polynomial using the general formula we got for its coefficients,
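```latex
p(x) = p(0) + p'(0)\,x + \frac{p''(0)}{2!}\,x^2 + \cdots + \frac{p^{(m)}(0)}{m!}\,x^m
```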

Our last term is x raised to m because our premise was that the polynomial is one of degree ‘m’.

1. Motivating the Maclaurin theorem

Consider the function,
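```latex
f(x) = \sin x
```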

and suppose I wanted to break this function into a sum of polynomials. To start, we consider the graph of sin x as shown below

Sine Curve... A true classic

Now, I propose that we can write this as a weighted sum of monomials,

A graphical representation of combining monomials to generate sin x curve

The question is how to figure out the weights, and how many terms would be needed in this polynomial mime artist of our sin x curve.

We start by equating sin x to an ‘nth’ degree polynomial.
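```latex
\sin x = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n
```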

The problem for us is finding the “weights”, i.e. the coefficients of the polynomial… well, why don’t we fall back on the previous result and say that the polynomial coefficients are determined by sin x and its derivatives? That is, if sine really had a polynomial form, then we should be able to set up the same relation between its derivatives evaluated at zero and the coefficients of the polynomial.

The smart way to calculate the coefficients is to evaluate the derivatives systematically in a table, as I have shown below,
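For sin x, the derivatives cycle with period four, so the table is:

| k | kth derivative | value at x = 0 |
|---|----------------|----------------|
| 0 | sin x          | 0              |
| 1 | cos x          | 1              |
| 2 | −sin x         | 0              |
| 3 | −cos x         | −1             |
| 4 | sin x          | 0              |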

Please understand, this is not conceptually important, simply a way to be systematic in practical calculations.

Right, so we found the coefficients up to the fourth term. Plugging them back into the polynomial expression we developed initially,
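```latex
\sin x \approx 0 + 1 \cdot x + \frac{0}{2!}\,x^2 - \frac{1}{3!}\,x^3 + \frac{0}{4!}\,x^4 = x - \frac{x^3}{3!}
```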

And with induction,
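```latex
\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}\,x^{2n+1}
```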

One may see this and worry that the series goes on forever. No one wants to calculate an infinite sum, and, it turns out, no one really has to, because the series converges to the value in just a few terms! Let’s take a finite number of terms on the left side and see how the ‘fit’ of the polynomial curve varies as we take more and more terms. Below I have shown a gif of how the polynomial fits the sine curve around x = 0 better and better as we take more terms.

The width of the fit increases with more and more terms!

Now, I wish to address a common misconception: that the value of the series should be infinite because we take infinitely many terms. But one may notice that, as we take higher and higher terms, the factorial downstairs starts growing faster and faster than the power of x on top.

So what happens is that after some kth term, the sum of the series isn’t much affected whether you take more terms or not. For example, suppose our x = 2: already at k = 4 the inequality k! > x^k becomes true, since 4! = 24 > 2^4 = 16, so after the fourth term in our polynomial we can start neglecting terms, because they become very small. Hence, we can truncate the series after a certain point.
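To see this truncation numerically, here is a small sketch in Python (the helper name `maclaurin_sin` is mine, not from the article):

```python
import math

def maclaurin_sin(x, terms):
    # Sum the first `terms` terms of the Maclaurin series for sin x:
    #   sin x = x - x^3/3! + x^5/5! - ...
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

x = 2.0
for terms in range(1, 6):
    approx = maclaurin_sin(x, terms)
    print(f"{terms} terms: {approx:.6f} (error {abs(approx - math.sin(x)):.2e})")
```

Even at x = 2, the error drops well below one part in a thousand by the fifth term, matching the factorial-versus-power argument above.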

What we have done here is something called the Maclaurin series, but what we wanted was the Taylor series. For the Taylor series, instead of having the higher-order polynomial terms go to zero at x=0, we instead make them go to zero at a point x=a, so that our coefficients are modelled by the derivatives of our function around the point x=a.

2. Taylor series

That is, if we shift the polynomial such that all of the non-constant terms go to zero at a point x = a,
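```latex
p(x) = c_0 + c_1 (x-a) + c_2 (x-a)^2 + \cdots + c_n (x-a)^n
```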

And, in a motivated spirit, put x = a,
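so that every shifted term vanishes and:

```latex
p(a) = c_0
```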

Skipping the formalities of algebra similar to what we have done before, we can show that the coefficients of this polynomial would be as follows:
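```latex
c_k = \frac{f^{(k)}(a)}{k!}
```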

Now, using this formula, we can write the Taylor series of sin(x) as
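```latex
\sin x = \sin a + \cos a \,(x-a) - \frac{\sin a}{2!}(x-a)^2 - \frac{\cos a}{3!}(x-a)^3 + \cdots
```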

Now, how does our polynomial change as we change ‘a’ relative to sin x? Well, in the original polynomial we put a = 0 (the Maclaurin case), and hence the polynomial fit the sine curve best around x = 0. But suppose we choose some other ‘a’, say a = 1; then the polynomial fits the sin x curve best around x = 1. Below I have shown a gif of how that plays out.

The purple line is x = a; as the purple line moves, the point where we create the series changes. And the red curve is the sine curve, over which the polynomial curve looks like it’s having a wobbly ride.

And, finally, in full generality, I present the formula for turning a general ‘nice’ function f(x) into a polynomial which approximates it well for points near x = a:
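```latex
f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}\,(x-a)^k
```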

3. Applications

The most profound application of this is that we can easily derive the famous Euler’s formula using it. Doing some work, we find the Maclaurin series of some famous functions (all of these are nice functions, which give the correct output value for a given input if we take enough terms of the series):
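```latex
\begin{aligned}
e^x &= 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \\
\sin x &= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots \\
\cos x &= 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots
\end{aligned}
```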

Now suppose we plug x=it, where i is the imaginary unit, into the series of the exponential and then do some sneaky ‘rearrangement’…
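```latex
\begin{aligned}
e^{it} &= 1 + it + \frac{(it)^2}{2!} + \frac{(it)^3}{3!} + \frac{(it)^4}{4!} + \cdots \\
       &= \left(1 - \frac{t^2}{2!} + \frac{t^4}{4!} - \cdots\right) + i\left(t - \frac{t^3}{3!} + \frac{t^5}{5!} - \cdots\right) \\
       &= \cos t + i \sin t
\end{aligned}
```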

Ta-da! Oh, wait HOLY *@#!! DID WE JUST PROVE EULER'S?!?!

Now, to get the identity they often reference in pop math, replace every ‘t’ you see with a π, which gives e^{iπ} + 1 = 0!

End.

Hope you enjoyed my article and now share the same love for the Taylor series as I do.
