Calculating sin(x) / x
Bear Giles | April 20, 2014

I've been busy with my new job so I haven't been able to spend time on the major themes – time to catch up on some one-off topics.
Every so often I see a blog discussion of calculating sin(x) / x as an example of the horrors of numerical methods. It’s actually a good example of why most if not all computer science programs require a year of calculus. 🙂
(I may be biased since my undergraduate degrees are in math and physics, not computer science, so this is all very basic material for me.)
L’Hospital’s Rule
There's an obvious place where sin(x) / x (also known as sinc(x)) is problematic: x = 0. It appears to be a division by zero, and in fact sinc(x) is usually explicitly defined to be 1 at that point.
Can we do better (for varying definitions of “better”)?
We can start with L’Hospital’s Rule:
lim_{x→0} f(x)/g(x) = lim_{x→0} f'(x)/g'(x)
In this case
lim_{x→0} sin(x)/x = lim_{x→0} cos(x)/1 = cos(0) = 1
This does not mean that sinc(x) is equal to 1 at x = 0. It just means that the closer you get to zero, from either side, the closer the value gets to 1.
The existence of the limit also does not mean that sinc(x) doesn’t do funny stuff as you approach zero. In this case we know it’s well-behaved but in general you’ll want to double-check that.
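As a quick sanity check, here's a minimal Python sketch (my illustration, not part of the original argument) that watches sin(x)/x approach 1 from both sides:

```python
import math

# Watch sin(x)/x approach 1 as x approaches 0 from either side.
for exponent in range(1, 8):
    x = 10.0 ** -exponent
    print(f"x = {x:+.0e}  sin(x)/x = {math.sin(x) / x:.17f}")
    print(f"x = {-x:+.0e}  sin(x)/x = {math.sin(-x) / -x:.17f}")
```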
Taylor Series Expansion
We have an additional tool available: the Taylor series around x = 0. Specifically we know that the Taylor series of sin(x) around x = 0 is:
sin(x) = x – x^3/3! + x^5/5! – x^7/7! + …
We can immediately use term-wise division by x to get
sinc(x) = 1 – x^2/3! + x^4/5! – x^6/7! + …
With sufficiently small x we can use the first-order approximation:
sinc(x) = 1 – x^2/6 (first-order approximation)
Near the origin, sinc(x) is closely approximated by a simple parabola.
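To see how close, here's a small Python sketch comparing sinc(x) with the parabola 1 – x^2/6 (the exact digits will vary slightly with the platform's sin implementation):

```python
import math

# Compare sin(x)/x with the truncated series 1 - x^2/6 near the origin.
# The difference is dominated by the next term, x^4/120.
for x in (0.5, 0.1, 0.01, 0.001):
    sinc = math.sin(x) / x
    parabola = 1.0 - x * x / 6.0
    print(f"x = {x:<6}  sinc(x) = {sinc:.15f}  1 - x^2/6 = {parabola:.15f}")
```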
Computing sinc(x)
Finally we have the question of what happens when we try to compute sin(x)/x using a computer.
Floating point numbers are almost always represented using the IEEE 754-2008 format. For a double that's 52(+1) bits of significand. (The +1 bit is implicit.)
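If you want to check these numbers on your own platform, Python exposes them directly:

```python
import sys

# Double-precision parameters on this platform.
print(sys.float_info.mant_dig)  # 53: the 52 stored bits plus the implicit bit
print(sys.float_info.epsilon)   # 2^-52: gap between 1.0 and the next double
print(sys.float_info.min)       # 2^-1022: smallest normal double
```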
I don't know if sin(x) is computed in hardware, using the Taylor series, using CORDIC, or by consulting the Oracle at Delphi. It doesn't matter, since we know from the Taylor series that the limited precision of floating point numbers means that below some value |x| < ε the higher terms of the series drop out and
sin(x) = x(1 – x^2/3!) (computed)
That second term will drop out when x^2/3! < 2^-53, or roughly |x| < 2^-25. Below that point the computed sin(x) is equal to x, and sinc(x) will be equal to 1 until you hit x = 0.
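We can see that threshold directly with a quick Python check (the exact crossover depends on how the platform rounds sin):

```python
import math

# Near the ~2^-25 threshold the computed sin(x) becomes bit-for-bit equal to x.
for exponent in (-20, -24, -26, -30, -40):
    x = 2.0 ** exponent
    print(f"x = 2^{exponent}:  sin(x) == x?  {math.sin(x) == x}")
```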
We can avoid special handling of x = 0 by noting that we’ve already factored the calculation above:
sinc(x) = sin(x) / x = 1 – x^2/6 (computed)
If we cut over to this calculation when the next term in the expansion drops out (x^4/5! < 2^-53, or roughly |x| < 2^-10) then we don't have to handle x = 0 separately.
It's debatable whether it's cleaner to treat x = 0 or |x| < 2^-10 as the special case, but either way it shows how we can use basic analysis to turn an apparent problem into "no big deal".
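Here's one way the cutover might look as a Python sketch (the name sinc() and the 2^-10 cutover are just the choices discussed above, not a canonical implementation):

```python
import math

CUTOVER = 2.0 ** -10  # rough threshold from the analysis above

def sinc(x):
    """Compute sin(x)/x with no special handling of x == 0."""
    if abs(x) < CUTOVER:
        # Truncated Taylor series. The neglected x^4/120 term is on the
        # order of 1e-14 or smaller here, so the error stays tiny.
        return 1.0 - x * x / 6.0
    return math.sin(x) / x

if __name__ == "__main__":
    for x in (1.0, 0.1, 2.0 ** -11, 2.0 ** -30, 0.0):
        print(f"sinc({x!r}) = {sinc(x)!r}")
```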
It's important to note that the fact that we're dividing by a small number is irrelevant. What's critical is the precision of the numbers involved. A small number is not a problem a priori; a small number due to subtraction of two nearly identical values is.
Calculating (x – sin(x)) / x
This brings up a variant of the earlier problem. x and sin(x) are nearly identical for sufficiently small x. This can cause problems due to loss of precision.
Our first step is to verify that there is a well-defined limit at 0. We could use L’Hospital’s Rule or simply observe:
(x – sin(x)) / x = (x/x) – (sin(x)/x) = 1 – sinc(x)
We can now use the Taylor series to see
(x – sin(x)) / x = 1 – sinc(x) = 1 – (1 – x^2/3! + …) = x^2/3! – …
The series form will only return 0 when |x| < 2^-511. It is 2^-511, not 2^-25, because even though the precision of an IEEE double is only 52(+1) bits, the minimum exponent is -1022, so x^2 does not underflow until roughly |x| < 2^-511. The direct subtraction, on the other hand, already returns zero once the computed sin(x) equals x, at roughly |x| < 2^-25. That means there's a broad region (in terms of orders of magnitude) where the direct subtraction will return zero but the Taylor series will return a small but nonzero value.
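A quick Python comparison (my sketch, for illustration) makes the gap visible: the direct form goes to zero around 2^-25 while the series form keeps returning meaningful values far below that:

```python
import math

# Compare the direct evaluation of (x - sin(x))/x with the leading
# Taylor term x^2/6 as x shrinks toward zero.
for exponent in (-10, -20, -26, -100, -300, -500):
    x = 2.0 ** exponent
    direct = (x - math.sin(x)) / x
    series = x * x / 6.0
    print(f"x = 2^{exponent:4d}  direct = {direct:.6e}  series = {series:.6e}")
```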
Conclusion
This is a simple example but it illustrates a common problem in numeric methods. Some equations look like they’ll be a problem, e.g., division by zero or subtracting nearly identical values, but rephrasing the problem can produce much better results.
A fair question is whether this matters. Why not just use the most direct approach, since the numbers involved are so small? We need to ask two questions:
- Is this value used in subsequent calculations or an iteration? If so, the loss of a few bits of precision can result in significant errors downstream.
- Is the most precise value required? Are there contractual or legal requirements?
The bottom line is that this is not our call to make. We can and should ask for clarification before investing a significant amount of time into an analysis but these examples should only take a few hours for someone comfortable writing numerical code.
Footnote: Quadratic Equations
I should also mention the quadratic equation. We all know the formula taught in high school, but there are two problematic scenarios:
First, when b^2 ≅ 4ac the formula involves subtraction of two nearly equal values and there will be a significant loss of precision.
Second, when b^2 ≫ 4ac the contribution of the 4ac term will be diminished or lost due to the limited precision of a floating point number. In this case there will be full precision in the representation, but the smaller root is mostly quantization noise.
In both cases there are alternative formulations that can give better results.
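As a sketch of the usual alternative (my illustration, not code from any particular library): compute the root that does not involve cancellation, then recover the other one from the product of the roots, c/a.

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0, assuming a != 0 and real roots.

    Avoids the cancellation in (-b + sqrt(b^2 - 4ac)) when b^2 >> 4ac.
    """
    sqrt_d = math.sqrt(b * b - 4.0 * a * c)
    # q has the same sign as -b, so the addition below never cancels.
    q = -0.5 * (b + math.copysign(sqrt_d, b))
    return q / a, c / q  # the product of the roots is c/a

if __name__ == "__main__":
    # b^2 >> 4ac: the textbook formula mangles the small root.
    a, b, c = 1.0, 1.0e8, 1.0
    naive = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    print("naive small root :", naive)                        # badly rounded
    print("stable small root:", quadratic_roots(a, b, c)[1])  # ~ -1e-8
```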