This is technically not an O(n lg n) algorithm. Any O(n lg n) polynomial multiplication algorithm can be trivially turned into an O(n lg n) integer multiplication algorithm (just throw in some carries at the end), but O(n lg n) integer multiplication is an open problem.
In this case the issue is that you need more and more precision for the FFT as the inputs get larger, or else you'll get the wrong answer. I think it ends up being an O(n log^2 n) algorithm. Still pretty good; worth knowing about for sure. The constant factor on the precision might be bad enough that you can trigger the problem for reasonable inputs if you're using floats (e.g. a million coefficients, instead of an octillion coefficients).
On the other hand, many algorithms and data structures have hidden logarithmic factors (e.g. hash tables, sorting) that we ignore [1]. YMMV.
The best complexity these days [1] is essentially n log n. More precisely, O(n log n 8^(log* n)), where log* is the extremely slow-growing iterated logarithm. The original Schönhage-Strassen algorithm is O(n log n log log n).
Chapter 1 [1] of Chee Yap's "Fundamental Problems of Algorithmic Algebra" [2] gives integer multiplication at O(n lg(n) lg(lg(n))). To prove that bound, a "discrete Fourier transform" over the integers modulo 2^L + 1 (for suitable L) is used.
As you say, you can always convert to an integer multiplication problem so it stands at O(n lg(n) lg(lg(n))) (not O(n lg^2(n))).
IIRC if your polynomials have integer coefficients with a bounded number of digits you can use FFT on finite rings and get an actual O(n log n) algorithm.
The number of digits needed to hold an output coefficient (and intermediate coefficients during the transform) increases with the number of input coefficients (because so many get added together). This forces the size of the ring to increase to accommodate the larger values, and pushes the runtime past O(n lg n).
This is not quite correct. A finite ring can only have finitely many primitive roots of unity, so the length of the FFTs you can compute in a ring of fixed size is bounded. To compute arbitrarily long convolutions over a finite ring, you need to extend the ring with more roots of unity (for example, using the Schönhage-Strassen trick), and this increases the complexity.
I think you're right, but it depends on your computational model. In word-RAM you usually assume that log n < w, where w is the machine word size, and that arithmetic operations on words take O(1), so if your coefficients fit in w bits the complexity is O(n log n). You could call it a trick, but it's a convenient trick, since on real-world machines w (64) is much larger than you need. I don't know whether the same argument can be made about floating point computations.
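For the curious, here's a minimal sketch of such a finite-ring FFT (a number-theoretic transform) in Python, assuming the common NTT-friendly prime 998244353 = 119 * 2^23 + 1 with primitive root 3; the names are just for the sketch. Transform lengths must be powers of two dividing 2^23, and the result is exact as long as the true coefficients of the product stay below the modulus.

    MOD = 998244353   # prime with 2^23 dividing MOD - 1, so long power-of-two NTTs exist
    ROOT = 3          # a primitive root mod MOD

    def ntt(a, invert=False):
        # In-place Cooley-Tukey transform over Z/MOD; len(a) must be a power of two.
        n = len(a)
        j = 0
        for i in range(1, n):              # bit-reversal permutation
            bit = n >> 1
            while j & bit:
                j ^= bit
                bit >>= 1
            j |= bit
            if i < j:
                a[i], a[j] = a[j], a[i]
        length = 2
        while length <= n:                 # butterfly passes
            w_len = pow(ROOT, (MOD - 1) // length, MOD)
            if invert:
                w_len = pow(w_len, MOD - 2, MOD)
            for start in range(0, n, length):
                w = 1
                for k in range(start, start + length // 2):
                    u = a[k]
                    v = a[k + length // 2] * w % MOD
                    a[k] = (u + v) % MOD
                    a[k + length // 2] = (u - v) % MOD
                    w = w * w_len % MOD
            length <<= 1
        if invert:                         # scale by n^-1 on the inverse transform
            n_inv = pow(n, MOD - 2, MOD)
            for i in range(n):
                a[i] = a[i] * n_inv % MOD

    def multiply(p, q):
        # Exact product of two coefficient lists, provided the true coefficients stay below MOD.
        n = 1
        while n < len(p) + len(q) - 1:
            n <<= 1
        fa, fb = p + [0] * (n - len(p)), q + [0] * (n - len(q))
        ntt(fa)
        ntt(fb)
        fc = [x * y % MOD for x, y in zip(fa, fb)]
        ntt(fc, invert=True)
        return fc[:len(p) + len(q) - 1]

    # (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
    print(multiply([1, 2], [3, 4]))   # [3, 10, 8]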
Fun article, but it was missing a little bit that took me a minute to get. So if you're already familiar with fourier transforms, this might be the missing piece to this article:
Multiplication of polynomials when expressed as arrays of coefficients is a convolution operation: you're summing (AB)_k = sum_{i=0}^k A_i B_{k-i} to find the kth coefficient of the result. The indices for B run backwards relative to those of A and they always add up to the fixed value k, so this is exactly a discrete convolution of the two arrays.
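Spelled out as code, that sum is just the schoolbook quadratic loop (a minimal sketch; the names are made up):

    def convolve(A, B):
        C = [0] * (len(A) + len(B) - 1)
        for i, a in enumerate(A):
            for j, b in enumerate(B):
                C[i + j] += a * b    # a*b contributes to the coefficient of x^(i+j)
        return C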
And what does the Fourier transform do? It turns convolution in the 'time' domain into point-wise multiplication in the frequency domain. The convolution started as an n^2 operation but the point-wise multiplication is n, so we're left with the FFT time of n log n. So this is another instance of the "formulate problem as a convolution, FFT, perform point-wise multiplication in the frequency domain, inverse FFT, done" class of algorithms.
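As a sketch of that pipeline (using numpy's FFT; the helper name poly_mul is just for illustration, and it assumes the coefficients are small enough that double-precision rounding error stays below 0.5; see the precision caveats elsewhere in this thread):

    import numpy as np

    def poly_mul(a, b):
        n = len(a) + len(b) - 1              # length of the product's coefficient array
        size = 1 << (n - 1).bit_length()     # next power of two for the FFT
        fa = np.fft.rfft(a, size)            # transform both inputs
        fb = np.fft.rfft(b, size)
        c = np.fft.irfft(fa * fb, size)[:n]  # point-wise product, then back to coefficients
        return np.rint(c).astype(int)        # round away the floating-point noise

    # (1 + 2x + 3x^2)(4 + 5x) = 4 + 13x + 22x^2 + 15x^3
    print(poly_mul([1, 2, 3], [4, 5]))       # [ 4 13 22 15]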
(I got stuck for a while trying to figure out how polynomial multiplication is a convolution. Literally it's point-wise multiplication of two functions, not a convolution, so everything seemed backwards.)
This is mentioned in the article in the problem explanation section, but only quickly and in passing. I'll put it right at the beginning where it belongs. Thanks!
I hadn't understood your comment properly on first read: you didn't mean what's in the post at all. I'll include this insight somehow :)
This blog post is doing precise integer arithmetic without rounding, and dealing with polynomials of tiny degree. But for more general uses...
Anyone trying to work with polynomials as continuous functions using floating point arithmetic (or trying to work efficiently with piecewise continuous functions in general) should check out Chebfun, http://www.chebfun.org
People should avoid working with polynomials expressed in the monomial basis if possible, unless dealing with a function defined on the complex unit circle. The Chebyshev basis is generally faster and more accurate for polynomials defined on some real interval, and makes it practical to compute with polynomials of very high degree.
http://www.chebfun.org/examples/linalg/CondNos.html
http://www.chebfun.org/examples/roots/RandomPolynomials.html
> This is the operation called discrete convolution, and it’s the equivalent of multiplying two polynomials in coefficient form.
As a physicist my first thought would be that convolution is simply multiplying in Fourier-space anyway, no need to think about polynomials. I didn't know the application of FFT for polynomials though so thanks for that.
Can anybody guess how he used convolution to solve what is generally known as the change-making problem?
Wikipedia [1] mentions a "probabilistic convolution tree", but that seems much more involved.
Edit: Solved. I've missed that the problem only deals with change amounts that can be reached with one or two coins. So a single convolution is sufficient in this specific case.
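For concreteness, here is a guess at what that single convolution could look like (the function and its interface are made up for illustration, not the author's actual code): build an indicator vector of the single-coin amounts, convolve it with itself to get every two-coin sum, and combine the two.

    import numpy as np

    def reachable_with_one_or_two(coins, max_amount):
        one = np.zeros(max_amount + 1, dtype=np.int64)
        one[list(coins)] = 1                           # single-coin amounts (assumes coin values <= max_amount)
        two = np.convolve(one, one)[:max_amount + 1]   # counts of ordered coin pairs summing to each index
        return [s for s in range(1, max_amount + 1) if one[s] or two[s]]

    print(reachable_with_one_or_two({1, 5, 10}, 20))
    # [1, 2, 5, 6, 10, 11, 15, 20]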
Yes, but integers are a specialization of polynomials. For example, the number 263 can be written as 2x^2 + 6x + 3, where x=10.
(Note that for integers, a post-processing step is necessary to account for coefficients that are larger than the number base, but I guess this is relatively inexpensive).
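That post-processing step is just carry propagation over the coefficient list; a small sketch (the helper name is made up, coefficients are given with the constant term first):

    def carry(coeffs, base=10):
        digits, c = [], 0
        for a in coeffs:                 # propagate carries from the low end up
            c, d = divmod(a + c, base)
            digits.append(d)
        while c:                         # flush any remaining carry into new digits
            c, d = divmod(c, base)
            digits.append(d)
        return digits                    # least significant digit first

    # (2x^2 + 6x + 3)(x + 2) = 2x^3 + 10x^2 + 15x + 6, i.e. 263 * 12
    print(carry([6, 15, 10, 2]))   # [6, 5, 1, 3]  ->  3156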
Indeed, having a fast way to multiply in Z[x] gives you a fast way to multiply in Z and vice versa. In the reverse direction, you can encode the polynomial f = 2x^2 + 6x + 3 as 30602 (two decimal digits per coefficient, constant term in the leftmost slot), and 30602^2 = 936482404 tells you that f^2 = 4x^4 + 24x^3 + 48x^2 + 36x + 9 (of course, on a computer you do this in binary as opposed to decimal).
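A rough sketch of that correspondence (Kronecker substitution); the names are just for illustration. The slot width B is an assumption: it has to be large enough that the coefficients of the product don't overflow into neighbouring slots. This version packs the constant term into the lowest slot, the reverse of the 30602 packing above, but the idea is identical:

    def to_int(coeffs, B):
        return sum(c * B**i for i, c in enumerate(coeffs))   # evaluate the polynomial at x = B

    def from_int(n, B):
        coeffs = []
        while n:                      # peel off one base-B slot per coefficient
            n, c = divmod(n, B)
            coeffs.append(c)
        return coeffs                 # constant term first

    B = 100                           # two decimal digits per coefficient, as in the example above
    f = [3, 6, 2]                     # 2x^2 + 6x + 3, constant term first
    print(to_int(f, B))               # 20603
    print(from_int(to_int(f, B) ** 2, B))
    # [9, 36, 48, 24, 4]  ->  4x^4 + 24x^3 + 48x^2 + 36x + 9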
This correspondence is often exploited in both directions. If you ask a sufficiently sophisticated computer algebra system (SageMath, for example) to multiply two large polynomials with integer or rational coefficients, chances are that it will reduce the problem to multiplying two large integers, then reduce that problem to multiplying two large polynomials (but different from the ones you started with), and then proceed using recursive polynomial and integer multiplications to compute that product... (i.e. going through a sequence of transformations Z[x] -> Z -> Z[x] -> ...)
This is completely wrong. Big-theta does mean it's an upper bound, and also a lower bound. It certainly does not mean it's the exact number of steps; for example, 5x = Θ(3x), but 5x is more than 3x for all positive x.
Since the parent removed their comment: the gist was that they described f being big-theta of g as f growing exactly as fast as g. That is stricter than what big-theta actually means; the stricter notion is asymptotic equivalence, usually read as "on the order of" and written f ~ g.
I realize that this not related to the topic of the post at all, but... why on earth would you (try to) make a pun and then add "(Haha, get it?)" at the end? At least for me, that takes all the fun out of getting a joke :(
I'm not a native English speaker and I had some trouble understanding that pun.
> So how do we do it? We are going to have to talk a bit about the representation of polynomials in computer memory first. (Haha get it?)
Without that hint I would have totally missed it.
Also, this could be a joke on the joke. Probably the author didn't really think this pun is funny, and made fun of themselves by overstating how funny that pun is. Humor is a large field with lots of varieties. (And no, I will not spoil that this is a maths pun.)
Can somebody please explain what the pun is supposed to be, for those of us who definitely see nothing but a normal sentence? Is it that he used "talk a bit" in the sentence announcing a discussion of how computer memory works? If so, I'm really disappointed.
Yes, it's this pun. I did not intend it as a pun, but then I read what I'd written and chuckled. I thought I myself would miss it when reading someone else's article (I'd read with less of a critical eye for form), so I made sure to mark it.
Thanks! Please do write about the task for which you had to use the presented solution. Is the expected input on which the problem is evaluated really on the order of "100K golf holes and 100K possible robot distances"? I don't know anything about these contests and I wasn't able to find the input on which the program is going to be measured (if I understand correctly, it's the speed that is measured?)
1: http://blog.computationalcomplexity.org/2009/05/shaving-logs...