An Infinite Quantity Of Math
- Start and Map pages
- Main Sequence 1: Cloud chambers and cosmic rays
- Main Sequence 2: The stardust hypothesis
- Main Sequence 3: Little toy universes
- Main Sequence 4: The photoelectric effect
- Main Sequence 5: Casting dice
- Main Sequence 6: A physical calculation
- Main Sequence 7: Photon deflection
- Main Sequence 8: Electromagnetic prisms
- Main Sequence 9: Some polar science
- Main Sequence 10: Watershed hydrology
- Main Sequence 11: Lunar shenanigans
- Main Sequence 12: An infinite quantity of mathematics
- Main Sequence 13: Remote sensing of the environment
- Main Sequence 14: The Rock, some math for a younger person
A lot of hay can be made with polynomials, even if they happen to have an infinite number of terms (infinite degree). In fact I'm a fan of any math that makes use of infinity.
On this page I'll write down a couple of examples. The underlying idea is that if you are accustomed to certain functions being magical (like sine and cosine and logarithm) it is interesting to see that they have infinite-degree polynomial representations that get you some of their properties through polynomial-type manipulation.
Proving the trigonometric identity sin²x + cos²x = 1
Let's start with definitions for sine and cosine as infinite-degree polynomials:
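In series form:

```latex
\cos x = \sum_{i=0}^{\infty} \frac{(-1)^i \, x^{2i}}{(2i)!}
       = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots

\sin x = \sum_{i=0}^{\infty} \frac{(-1)^i \, x^{2i+1}}{(2i+1)!}
       = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots
```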
What does this mean? First if it looks unfamiliar that's not too surprising. In traditional US mathematics curricula these functions are introduced in terms of right triangles and the unit circle. One might also see the approximation of the first couple terms for small values of x:
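That is, keeping just the leading terms for small x:

```latex
\sin x \approx x, \qquad \cos x \approx 1 - \frac{x^2}{2}
```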
But the polynomials we learn about in these same courses are "separate creatures" (and rarely get above second degree). When plotting polynomials we see that they get very large for large (absolute) values of x, and so we get in the habit of thinking that polynomials sort of run off to infinity out at the edges of any given graph. What is very counter-intuitive is the notion that the way to make a polynomial not run off to infinity is to include higher and higher powers of x, which themselves run off to infinity. Ah, it makes my head spin.
The interesting thing here, with these sine and cosine definitions, is that the larger powers of x are divided by even larger denominators that will eventually scrunch the entire sum over an infinite number of terms into a finite end result. In fact, in this case, a value on [-1, 1]. One way to show that this is true for all values of x is to prove the identity we're after here.
Incidentally, taking these infinite series at their word for the moment, the series expressions for sine and cosine can be pulled out of the series expression for e^x (applied to e^(iθ)). Sine and cosine can also be generated through calculus arguments, noting the resemblance of the definitions above to Taylor series... but I'm not aware of more elementary generator ideas; so for now let's just carry on. If we wanted to we could also just say these are two functions s(x) and c(x) and then show down the road that they meet all the descriptions for sine and cosine. I don't want to be so comprehensive here, but it would be a fun way to teach the subject: You start off with the definitions and after some work you say "aha! there is no difference between these functions and the familiar sine and cosine; they are the same, voila!"
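For reference, that decomposition looks like this (grouping the even and odd powers of iθ, and taking the exponential series on faith for the moment):

```latex
e^{i\theta} = \sum_{k=0}^{\infty} \frac{(i\theta)^k}{k!}
            = \underbrace{\left(1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots\right)}_{\cos\theta}
            \; + \; i\,\underbrace{\left(\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots\right)}_{\sin\theta}
```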
As alluded to above: The denominators for each term in the series definition have a factorial in them; so larger powers of x (large i) are "drowned out" by huge denominators and make smaller and smaller contributions to the total sum. The farther we get away from the origin (the larger the value of x) the more terms it will take in the infinite series to reach this point of diminishing returns; so it is also a good thing that these terms have alternating signs: We can suspect the large values in the sum might tend to cancel one another out in order to arrive at a final sum in that [-1, 1] range. It is a very interesting exercise to create a spreadsheet of values of these polynomials, say up to i = 20 or so, and plot f(x) versus x.
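That spreadsheet experiment can be sketched in a few lines of Python (the function names here are mine, not from the page; truncation at i = 20 as suggested above):

```python
import math

def cos_series(x, terms=21):
    # Partial sum of cos x = sum over i of (-1)^i x^(2i) / (2i)!
    return sum((-1) ** i * x ** (2 * i) / math.factorial(2 * i) for i in range(terms))

def sin_series(x, terms=21):
    # Partial sum of sin x = sum over i of (-1)^i x^(2i+1) / (2i+1)!
    return sum((-1) ** i * x ** (2 * i + 1) / math.factorial(2 * i + 1) for i in range(terms))

# Far from the origin the individual terms grow well beyond [-1, 1]
# (for x = 6 the x^6/6! term alone is about 65) before the factorial
# denominators and alternating signs pull everything back into range.
for x in (0.5, 2.0, 6.0):
    print(f"x={x}: series cos={cos_series(x):+.6f}  math.cos={math.cos(x):+.6f}")
```

Plotting these partial sums against math.cos and math.sin makes the "point of diminishing returns" described above visible: the larger x is, the more terms it takes before the two curves agree.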
But moving on to our proof: What... given the above definitions... is going to be the sum of the squares of cosine and sine of x? According to trigonometry or Pythagoras (take your pick) the sum is 1; does that hold for this alternative definition? And is it easy or difficult to see?
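Squaring each series means multiplying two copies of it together, one copy indexed by i and the other by j:

```latex
\cos^2 x = \left(\sum_{i=0}^{\infty} \frac{(-1)^i \, x^{2i}}{(2i)!}\right)
           \left(\sum_{j=0}^{\infty} \frac{(-1)^j \, x^{2j}}{(2j)!}\right), \qquad
\sin^2 x = \left(\sum_{i=0}^{\infty} \frac{(-1)^i \, x^{2i+1}}{(2i+1)!}\right)
           \left(\sum_{j=0}^{\infty} \frac{(-1)^j \, x^{2j+1}}{(2j+1)!}\right)
```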
I will refer to these "multiplied sums" as two sum-products, the first for cosine squared and the second for sine squared. The end result will also be an infinite-degree polynomial; but notice that on multiplying out the two sum-products, both on the left and the right, the results will feature only even powers of x. Furthermore, choosing a particular power of x, say degree n (where n is an even number greater than or equal to zero), we have two cases to think about for the coefficient of x^n. When n = 0 only the cosine sum-product contributes, and it contributes exactly 1. The sine sum-product only starts contributing for n = 2, 4, 6, 8, ..., i.e. n an even integer greater than zero. Of course the cosine sum-product contributes for these values of n as well.
Next, for even n > 0 we can write out what the cosine sum-product and the sine sum-product will contribute to the coefficient of x^n. Specifically, when 2i + 2j = n the cosine-squared sum-product contributes (left side of the above), and when (2i + 1) + (2j + 1) = n the sine-squared sum-product contributes (right side). In what follows, then, we can reduce those four sums to a single sum and (by expressing j in terms of i and n) collapse the complicated products into something manageable. We now consider the resulting coefficient for some particular value of n, that is, the coefficient of x^n in the resulting infinite-degree polynomial.
For some n, and for cosine squared we are concerned only with i = 0, 1, 2, ..., n/2 which in turn determines what j must be: j = n/2 - i. For sine squared we are concerned with i = 0, 1, 2, ..., (n/2)-1, and correspondingly j = n/2 - i - 1. So concentrating on a particular power n in the polynomial we can desist from writing j since we have it in terms of i and n.
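With j eliminated, the coefficient of x^n (n even, n > 0) picks up these two contributions:

```latex
\text{from } \cos^2 x: \quad
\sum_{i=0}^{n/2} \frac{(-1)^i \,(-1)^{\,n/2 - i}}{(2i)!\,(n-2i)!}
  \;=\; (-1)^{n/2} \sum_{i=0}^{n/2} \frac{1}{(2i)!\,(n-2i)!}

\text{from } \sin^2 x: \quad
\sum_{i=0}^{n/2 - 1} \frac{(-1)^i \,(-1)^{\,n/2 - 1 - i}}{(2i+1)!\,(n-2i-1)!}
  \;=\; -(-1)^{n/2} \sum_{i=0}^{n/2 - 1} \frac{1}{(2i+1)!\,(n-2i-1)!}
```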
Also, in comparing these two sums over i values we see that the cosine-squared part has an extra i-value, namely i = n/2 that is not present in the sine-squared sum over values of i; so we will carry that extra calculation out separately after everything else.
Now we are on the rails for the rest of the proof because we can collapse all the summing to a single sum over an i.
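One way to write the collapsed sum (with the lone i = n/2 cosine term set off at the far right, and written so that the factoring in the next step is visible):

```latex
\big[x^n\big]\left(\cos^2 x + \sin^2 x\right)
  = (-1)^{n/2} \left[\, \sum_{i=0}^{n/2 - 1}
    \left( \frac{1}{(2i)!\,(n-2i)!} - \frac{1}{(2i+1)!\,(n-2i-1)!} \right)
    \;+\; \frac{1}{n!}\,\frac{n!}{n!\,0!} \,\right]
```

Here [x^n] f denotes the coefficient of x^n in f.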
Notice that the funny expression at the far right is that lone cosine-squared contribution for i = n/2. From here on the expression above gets simpler and simpler, and incidentally for that n!/n! the denominator gets factored out in the next step and the numerator gets pushed into the sum over i so we can start using binomial coefficient notation.
The final steps follow by doing a bit more factoring and recognizing that the interior sum can be re-indexed (I changed from i to k) to go from 0 to n. I'm going to assert without proof that any row of Pascal's triangle, when added up with alternating signs on successive numbers, equals zero. This is easily shown for the even-number-of-elements rows because they are symmetrical. For the odd-number-of-elements rows the sum to zero can be shown by using the even row above it as a generator, taking care with the signs. So for the ninth row we have, for example, 1 - 8 + 28 - 56 + 70 - 56 + 28 - 8 + 1 = 0.
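This Pascal's-triangle fact is easy to spot-check with the standard library's math.comb (a quick sketch, not from the original page):

```python
import math

def alternating_row_sum(n):
    # Row n of Pascal's triangle with alternating signs:
    # C(n,0) - C(n,1) + C(n,2) - ... which is (1 - 1)^n by the binomial theorem.
    return sum((-1) ** k * math.comb(n, k) for k in range(n + 1))

# Every row past the top one sums to zero under alternating signs.
for n in range(1, 16):
    assert alternating_row_sum(n) == 0

print(alternating_row_sum(8))  # the nine-element row 1, 8, 28, ...: prints 0
```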
Ok here we go:
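Writing [x^n] f for the coefficient of x^n in f, and factoring 1/n! out front so the binomial coefficients appear:

```latex
\big[x^n\big]\left(\cos^2 x + \sin^2 x\right)
  = \frac{(-1)^{n/2}}{n!} \left[\, \sum_{i=0}^{n/2 - 1}
      \left( \binom{n}{2i} - \binom{n}{2i+1} \right) + \binom{n}{n} \,\right]
  = \frac{(-1)^{n/2}}{n!} \sum_{k=0}^{n} (-1)^k \binom{n}{k}
  = \frac{(-1)^{n/2}}{n!} \,(1 - 1)^n = 0
```

So every even power n > 0 has coefficient zero, leaving only the constant term 1 from the cosine sum-product: cos²x + sin²x = 1.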
So that's a big relief. At least: I think it's nice that the rather peculiar definition of the trigonometric functions gives this familiar result.
I mentioned above the magical nature of sine and cosine; but perhaps I'm just overcome with emotion by the fact that they are functions expressible as infinite-degree polynomials. The alternating signs + - + - between successive terms are curious but evidently necessary to keep the function values in check, on [-1, 1]. By setting them all to + we get the series definitions for hyperbolic sine and hyperbolic cosine:
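With all plus signs:

```latex
\sinh x = \sum_{i=0}^{\infty} \frac{x^{2i+1}}{(2i+1)!} = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \cdots, \qquad
\cosh x = \sum_{i=0}^{\infty} \frac{x^{2i}}{(2i)!} = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \cdots
```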
Kilroy: At this point it would be pleasant to see the plots and singular values of x for these functions.
The exponential function as a series, also provided without a shred of motivation:
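In the same notation as the trigonometric series above:

```latex
e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots
```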
I started building this website under the Royal Society's slogan "Don't take anybody's word for it" and I think this would be a good time to reiterate that sentiment. Although I am writing down these lovely series expressions I don't expect anyone to accept them as true without some sort of justification. It's also the case that I don't have any such good justifications at the moment, alas.
Now, to move on: With the exponential series in hand one can check the earlier remark that the sine and cosine series constitute a decomposition of an exponential with complex exponent, and one can check the hyperbolic sine and cosine series definitions, since these are more properly defined as:
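That is, as the even and odd parts of the exponential:

```latex
\cosh x = \frac{e^{x} + e^{-x}}{2}, \qquad \sinh x = \frac{e^{x} - e^{-x}}{2}
```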
In what follows there is often a finesse: The series definitions change from stating f(x) to stating f(1+x); and one has to take care to stay within the allowed bounds for x here as well. But bounding x is an undesirable restriction -- we'd like to have series definitions for all values of x where the function is defined. So I'll try to go to some pains to arrive at generalizations.
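The classic example of this f(1+x) finesse is the logarithm, which comes with exactly this kind of restriction on x:

```latex
\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots, \qquad -1 < x \le 1
```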
Kilroy: Possible to get logarithm from exponential???
The following material is grist dumped here for future work; so it can safely be skipped over by the interested reader; at this point we would tend to put up a little under-construction icon.
Kilroy: Incidentally how might this connect to:
Finite geometric series:
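The standard closed form:

```latex
\sum_{k=0}^{n} r^k = 1 + r + r^2 + \cdots + r^n = \frac{1 - r^{n+1}}{1 - r}, \qquad r \neq 1
```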
Variants of the infinite geometric series:
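Which variants were intended here I can only guess; two standard ones (both valid for |r| < 1) are:

```latex
\sum_{k=0}^{\infty} r^k = \frac{1}{1-r}, \qquad
\sum_{k=1}^{\infty} k\,r^k = \frac{r}{(1-r)^2}
```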
The Binomial series, with generalized binomial coefficients:
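For a real exponent α the series reads:

```latex
(1+x)^{\alpha} = \sum_{k=0}^{\infty} \binom{\alpha}{k} x^k, \qquad
\binom{\alpha}{k} = \frac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!}, \qquad |x| < 1
```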
Square root (A first special case of the Binomial series):
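Taking α = 1/2:

```latex
\sqrt{1+x} = (1+x)^{1/2} = 1 + \frac{x}{2} - \frac{x^2}{8} + \frac{x^3}{16} - \frac{5x^4}{128} + \cdots
```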
Infinite geometric series (special case 2 of the Binomial series):
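Taking α = -1 and substituting x = -r recovers the geometric series:

```latex
(1+x)^{-1} = 1 - x + x^2 - x^3 + \cdots
\quad\Longrightarrow\quad
\frac{1}{1-r} = 1 + r + r^2 + r^3 + \cdots, \qquad |r| < 1
```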
Kilroy: Go ahead and connect this to telescoping series used in the Casting Dice page.
Motivating series definition for Cosine from calculus
This is a sort of non-starter but I'll keep these notes here as a reminder of a try.
Imagine a unit circle with an angle (arc length) a and an x-coordinate c where now c = cos(a).
The question is: Can you write down the integral for arc length and set that equal to a? This should be feasible, and then the limits of integration are two points on the x-axis: cosine of a, and 1. The problem of course is that the integral will probably evaluate to an infinite series of cosines raised to powers; but we want just one cosine, raised to the first power, equal to an infinite series in a. I am not conversant with machinery to put the infinite-series shoe on the other variable, i.e. to go from cosine of a to a itself.
Anyway the initial sentence is
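A reconstruction of that starting point (assuming the upper-half-circle parametrization y = √(1 - x²), which matches the setup above): the arc from x = c to x = 1 has length

```latex
a = \int_{c}^{1} \sqrt{1 + \left(\frac{dy}{dx}\right)^{2}}\; dx
  = \int_{c}^{1} \sqrt{1 + u}\; dx
  = \int_{c}^{1} \left(1 + \frac{u}{2} - \frac{u^2}{8} + \frac{u^3}{16} - \cdots\right) dx,
\qquad u = \left(\frac{dy}{dx}\right)^{2} = \frac{x^2}{1 - x^2}
```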
This asserts the expansion of sqrt (1 + u) as another infinite series so now we would need to motivate that!
And by the way the right-most sum is from noting that
Anyway the remaining notes are in photos in Science and Education sub-folder Mathematics.