Let's look at the definition of a Taylor Series:

T(x) = the summation from n = 0 to infinity of f^(n)(x_0) * (x - x_0)^n / n!

Basically, the series reconstructs the change in a function by adding up contributions from all of its derivatives at a point. In practice, going out infinitely many terms is impossible, so Taylor Series are used as approximations, truncated after only a few terms.

In the definition above, there is also a term x_0. This is the x-coordinate around which the Taylor Series is centered, while the x parameter is the point at which to approximate the function. A Taylor Series centered at x_0 = 0 is also called a Maclaurin Series. This is no different from the tangent line approximations of functions; in fact, those tangent line approximations are first-degree Taylor Series. Let's compare the tangent line vs. the third-degree Taylor Series approximation of f(x) = 3x^4 - 2x^2 centered at x = 4, evaluated at x = 3.

The first-degree Taylor Series can easily be determined using point-slope form, which is: y - f(4) = slope(x - 4). To get the slope, simply evaluate f'(4) = 12(4^3) - 4(4) = 768 - 16 = 752. So plugging in the slope, the approximation comes out to: y - 736 = 752(x - 4). Expanding this gives: y = 752x - 2272.
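The point-slope computation above can be sketched in Python (a minimal sketch; the function names are my own):

```python
# Tangent-line (first-degree Taylor) approximation of f(x) = 3x^4 - 2x^2
# centered at x0 = 4.
def f(x):
    return 3 * x**4 - 2 * x**2

def f_prime(x):
    return 12 * x**3 - 4 * x  # derivative of f

x0 = 4
slope = f_prime(x0)  # 12(64) - 16 = 752

def tangent(x):
    # Point-slope form: y = f(x0) + slope * (x - x0)
    return f(x0) + slope * (x - x0)

print(slope)       # 752
print(tangent(3))  # -16
```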

To get the third-degree Taylor Series approximation, it is necessary to go out to the second and third derivatives. So:

f''(4) = 36(4^2) - 4 = 572

f'''(4) = 72(4) = 288
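As a quick sanity check, these derivative values can be verified numerically with central finite differences (a sketch; the step size h is my own choice):

```python
def f(x):
    return 3 * x**4 - 2 * x**2

x0 = 4
h = 1e-4

# Central-difference estimates of the first and second derivatives at x0.
d1 = (f(x0 + h) - f(x0 - h)) / (2 * h)
d2 = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2

print(round(d1))  # 752
print(round(d2))  # 572
```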

So the third-degree Taylor Series approximation is:

T_3(x) = 736 + 752(x - 4) + 572(x - 4)^2/2 + 288(x - 4)^3/6.

Simplified, this becomes:

T_3(x) = 736 + 752(x - 4) + 286(x - 4)^2 + 48(x - 4)^3.
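Assembling the polynomial directly from the derivative values makes the simplification easy to check (a sketch; names are my own):

```python
from math import factorial

def f(x):
    return 3 * x**4 - 2 * x**2

x0 = 4
# f(4), f'(4), f''(4), f'''(4) from the work above.
derivs = [736, 752, 572, 288]

def T3(x):
    # Taylor polynomial: sum of f^(k)(x0) * (x - x0)^k / k!
    # Note 572/2 = 286 and 288/6 = 48, matching the simplified coefficients.
    return sum(d * (x - x0)**k / factorial(k) for k, d in enumerate(derivs))

print(T3(3))  # 222.0
print(f(3))   # 225
```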

Let's compare both approximations at x = 3 to the original function:

y(3) = -16

T_3(3) = 222

f(3) = 225

Obviously the tangent line approximation is nowhere even close to the actual value of f(3), while the third-degree Taylor Series is only 3 less than f(3). The tangent line misses so badly because it accounts for only a constant slope, ignoring the other changes in the function. Each additional term in the Taylor Series corrects for a higher-order change, causing the approximation to eventually converge on the desired value.

While the primary purpose of Taylor Series is for approximating functions, they also make it easier to deal with function manipulations including vertical and horizontal shifts, differentiation, and integration. For functions that have a significant number of terms, or aren't easily integrable with one of the basic integration techniques, Taylor Series are a good way to approximate the integral.

A good example of this is Integral(sin(x^2) dx). Neither trig substitution nor integration by parts yields an elementary antiderivative here, but a Taylor Series handles it well. The general form for the Taylor Series of sin(x) is the summation from i = 0 to infinity of (-1)^i * x^(2i+1) / (2i+1)!. Since we can now treat sin(x) as a polynomial function, substituting x^2 gives the summation from i = 0 to infinity of (-1)^i * x^(4i+2) / (2i+1)!. Each of those terms is significantly easier to integrate, leaving Integral(a, b, sin(x^2) dx) = the summation from i = 0 to infinity of (-1)^i * x^(4i+3) / ((4i+3) * (2i+1)!), evaluated at the limits. Now simply go out as many terms as desired in the new Taylor Series, and evaluate it from the limits specified in the integral.
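The term-by-term integration can be sketched in Python, checked against a crude midpoint-rule estimate of the same integral (both helper names are my own):

```python
from math import factorial, sin

def series_integral(b, terms=10):
    # Integrated series: sum of (-1)^i * b^(4i+3) / ((4i+3) * (2i+1)!),
    # i.e. the integral of sin(x^2) from 0 to b, truncated after `terms` terms.
    return sum((-1)**i * b**(4 * i + 3) / ((4 * i + 3) * factorial(2 * i + 1))
               for i in range(terms))

def midpoint_integral(b, n=100_000):
    # Midpoint-rule estimate of the integral of sin(x^2) from 0 to b.
    h = b / n
    return sum(sin(((k + 0.5) * h)**2) for k in range(n)) * h

print(series_integral(1))    # ~0.31027
print(midpoint_integral(1))  # agrees to several decimal places
```

Because the series alternates and its terms shrink factorially, even a handful of terms lands very close to the numerically integrated value.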

**Conclusion**

Taylor Series are useful in both approximating and manipulating functions. This makes it easier to apply Calculus to functions that are otherwise seemingly difficult to work with.