
# Divide-and-conquer multiplication

There is a faster way to multiply, though, called the divide-and-conquer approach. The technique is usually attributed to a 1962 paper by Karatsuba, and indeed it is sometimes called Karatsuba multiplication.

## Algorithm

With divide-and-conquer multiplication, we split each of the numbers into two halves, each with n/2 digits. I'll call the two numbers we're trying to multiply a and b, with the two halves of a being aL (the left or upper half) and aR (the right or lower half) and the two halves of b being bL and bR.

Basically, we can multiply these two numbers as follows.

```
                      aL          aR
  x                   bL          bR
  ----------------------------------
                    aLbR        aRbR
  +     aLbL        aRbL
  ----------------------------------
        aLbL   aLbR + aRbL      aRbR
```
That diagram is just a picture of the idea; more formally, the derivation works as follows.
```
ab = (aL 10^(n/2) + aR) (bL 10^(n/2) + bR)
   = aL bL 10^n + aL bR 10^(n/2) + aR bL 10^(n/2) + aR bR
   = aL bL 10^n + (aL bR + aR bL) 10^(n/2) + aR bR
```
Thus, in order to multiply a pair of n-digit numbers, we can recursively multiply four pairs of n/2-digit numbers. The rest of the operations involved are all O(n) operations. (Multiplying by 10^n may look like a multiplication (and hence not O(n)), but really it's just a matter of appending n zeroes onto the number, which takes O(n) time.)
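As a concrete sketch of this four-multiplication scheme (my own illustration, not part of the original notes), here is how it might look in Python, where `divmod` by a power of ten performs the split:

```python
def dc_multiply(a, b):
    # Four-multiplication divide-and-conquer, for nonnegative integers.
    # This is the version BEFORE Karatsuba's trick, so it is still O(n^2).
    if a < 10 or b < 10:                  # base case: a single-digit operand
        return a * b
    half = max(len(str(a)), len(str(b))) // 2
    aL, aR = divmod(a, 10 ** half)        # a = aL * 10^half + aR
    bL, bR = divmod(b, 10 ** half)        # b = bL * 10^half + bR
    return (dc_multiply(aL, bL) * 10 ** (2 * half)
            + (dc_multiply(aL, bR) + dc_multiply(aR, bL)) * 10 ** half
            + dc_multiply(aR, bR))
```

Note the four recursive calls, which are exactly the four products aLbL, aLbR, aRbL, aRbR in the derivation above.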

That's fine as far as it goes, but it turns out not to go far enough: if you write down the recurrence T(n) = 4 T(n/2) + O(n) and solve it, it solves to O(n^2), which is exactly what we had from grade school, and this algorithm is much more complicated. Not a very encouraging result.
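To see where the O(n^2) comes from, expand the recurrence level by level (a standard calculation, spelled out here for completeness):

```
T(n) = 4 T(n/2) + cn

level i:  4^i subproblems, each costing c n / 2^i  =>  4^i (c n / 2^i) = 2^i c n

total:    cn (1 + 2 + 4 + ... + 2^(log2 n)) = cn (2n - 1) = O(n^2)
```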

But there turns out to be a very clever approach, which permits us to reduce the number of n/2-digit multiplications from four to three! This clever idea yields a better result.

The idea works as follows: We're trying to compute

```
aL bL 10^n + (aL bR + aR bL) 10^(n/2) + aR bR
```
What we'll do is compute the following three products using recursive calls.
```
x1 = aL bL
x2 = aR bR
x3 = (aL + aR) (bL + bR)
```
These have all the information that we want, since the following is true.
```
x1 10^n + (x3 - x1 - x2) 10^(n/2) + x2
  = aL bL 10^n + ((aL bL + aL bR + aR bL + aR bR) - aL bL - aR bR) 10^(n/2) + aR bR
  = aL bL 10^n + (aL bR + aR bL) 10^(n/2) + aR bR
```
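A quick numeric sanity check of this identity in Python, with arbitrary made-up half-values:

```python
# Verify x3 - x1 - x2 = aL*bR + aR*bL (the values here are arbitrary).
aL, aR, bL, bR = 12, 34, 56, 78
x1 = aL * bL
x2 = aR * bR
x3 = (aL + aR) * (bL + bR)
assert x3 - x1 - x2 == aL * bR + aR * bL
```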
And we have already reasoned that this last expression equals the product of a and b.

In pseudocode, I might write down the algorithm this way. (This is just pseudocode, even though it may look like Java. But the Java would be quite a bit more complex, without really giving you much of an idea.)

```
BigInteger multiply(BigInteger a, BigInteger b) {
    int n = max(number of digits in a, number of digits in b);
    if (n == 1) {
        return a.intValue() * b.intValue();
    } else {
        BigInteger aR = bottom n/2 digits of a;
        BigInteger aL = top remaining digits of a;
        BigInteger bR = bottom n/2 digits of b;
        BigInteger bL = top remaining digits of b;
        BigInteger x1 = multiply(aL, bL);
        BigInteger x2 = multiply(aR, bR);
        BigInteger x3 = multiply(aL + aR, bL + bR);
        return x1 * pow(10, n) + (x3 - x1 - x2) * pow(10, n / 2) + x2;
    }
}
```
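For concreteness, here is a minimal runnable sketch of the three-multiplication algorithm in Python (the function name and the decimal split via `divmod` are my own choices, not part of the original notes):

```python
def karatsuba(a, b):
    # Three-multiplication divide-and-conquer multiplication
    # of nonnegative integers.
    if a < 10 or b < 10:                  # base case: a single-digit operand
        return a * b
    half = max(len(str(a)), len(str(b))) // 2
    aL, aR = divmod(a, 10 ** half)        # split a into top and bottom halves
    bL, bR = divmod(b, 10 ** half)
    x1 = karatsuba(aL, bL)
    x2 = karatsuba(aR, bR)
    x3 = karatsuba(aL + aR, bL + bR)      # one product replaces two
    # Using 2*half (rather than n) keeps the exponents right when n is odd.
    return x1 * 10 ** (2 * half) + (x3 - x1 - x2) * 10 ** half + x2
```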

Let's do an actual multiplication to illustrate how this works. I'm going to draw a recursion tree, labeling each node with the final value it computes.
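One way to produce such a recursion tree without drawing it by hand is to instrument a Karatsuba routine to print every call, indented by recursion depth. This hypothetical `karatsuba_trace` sketch (my own illustration) does exactly that:

```python
def karatsuba_trace(a, b, depth=0):
    # Karatsuba multiplication that prints each recursive call, indented by
    # depth, so the shape of the recursion tree is visible.
    if a < 10 or b < 10:
        result = a * b
    else:
        half = max(len(str(a)), len(str(b))) // 2
        aL, aR = divmod(a, 10 ** half)
        bL, bR = divmod(b, 10 ** half)
        x1 = karatsuba_trace(aL, bL, depth + 1)
        x2 = karatsuba_trace(aR, bR, depth + 1)
        x3 = karatsuba_trace(aL + aR, bL + bR, depth + 1)
        result = x1 * 10 ** (2 * half) + (x3 - x1 - x2) * 10 ** half + x2
    print("  " * depth + "%d x %d = %d" % (a, b, result))
    return result

karatsuba_trace(1234, 5678)   # prints the tree for one actual multiplication
```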

Admittedly, it's not obvious that this is any better than the grade-school method; it's really quite complicated. But for very large numbers, the grade-school algorithm slows down much sooner than this algorithm does. I don't recommend using it for typically sized numbers, though.

## Theoretical analysis

We're going to do the same form of analysis we did for Mergesort: We'll draw a recursion tree, labeling each node with the amount of time consumed outside the recursive calls, and then we'll find the sum of all the nodes of the recursion tree to get the total time.

All of the calculations of divide-and-conquer multiplication take O(n) time, except the three recursive multiplications. The three multiplications are of n/2-digit numbers. (Actually, the last multiplication could involve (n/2+1)-digit numbers, but this extra digit turns out not to affect the result. It complicates the analysis considerably, and so we'll ignore it.)

So here's the recursion tree. I'll label each node with the number of digits in each number of the pair being multiplied.

```
level 0:   n                                                  (1 node)
level 1:   n/2   n/2   n/2                                    (3 nodes)
level 2:   n/4   n/4   n/4   n/4   n/4   n/4   n/4  n/4  n/4  (9 nodes)
level 3:   n/8   n/8   n/8   ...                              (27 nodes)
   :        :     :     :
last:      1     1     1     ...                              (3^(log2 n) nodes)
```
As with Mergesort, there are 1 + log2 n levels.

The times for the first level sum to n; the times for the second sum to (3/2)n; for the third, (9/4)n; for the fourth, (27/8)n; and so on, each time going up by a factor of 3/2. The last level will sum to (3/2)^(log2 n) n.
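These per-level totals can be checked mechanically: level i has 3^i nodes, each doing work proportional to n/2^i. A small self-check in Python, using exact fractions to avoid rounding:

```python
from fractions import Fraction

# Level i of the recursion tree: 3^i nodes, each handling n/2^i digits,
# so the level's total work is (3/2)^i * n.
n = 1024
for i in range(8):
    nodes = 3 ** i
    work_per_node = Fraction(n, 2 ** i)
    assert nodes * work_per_node == Fraction(3, 2) ** i * n
```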

We're going to simplify the total over all levels.

```
n + (3/2) n + (3/2)^2 n + (3/2)^3 n + ... + (3/2)^(log2 n) n

  = n ( 1 + (3/2) + (3/2)^2 + (3/2)^3 + ... + (3/2)^(log2 n) )
```
This last is a finite geometric series, of the form 1 + r + r^2 + ... + r^k. The following is a well-known mathematical fact (you may have learned it in calculus):
```
1 + r + r^2 + r^3 + ... + r^k = (r^(k+1) - 1) / (r - 1)
```
We're going to apply this fact now, with r = 3/2, to further simplify our expression for the total over all levels.
```
n ((3/2)^(1 + log2 n) - 1) / (3/2 - 1)
  = 2n ( (3/2) (3/2)^(log2 n) - 1 )
  = 3n (3/2)^(log2 n) - 2n
  = 3n (3^(log2 n) / 2^(log2 n)) - 2n
  = 3n (3^(log2 n) / n) - 2n
  = 3 * 3^(log2 n) - 2n
  = 3 n^(log2 3) - 2n
  = O(n^(log2 3)) = O(n^1.585)
```
This final answer is quite a bit better than the O(n^2) result we had from the multiplication technique we learned in grade school.

(The last step, where we replaced 3^(log2 n) with n^(log2 3), comes from an identity I know, that x^(log_b y) = y^(log_b x). It can be proven with the following reasoning.

```
(log_b x) (log_b y) = (log_b y) (log_b x)

log_b ( y^(log_b x) ) = log_b ( x^(log_b y) )

y^(log_b x) = x^(log_b y)
```

The first step follows from the identity that m log_b n = log_b (n^m).)
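The identity is also easy to spot-check numerically (the values below are arbitrary):

```python
import math

# Spot-check x^(log_b y) == y^(log_b x) for a few arbitrary values.
for x, y, b in [(3.0, 1024.0, 2.0), (5.0, 7.0, 10.0)]:
    lhs = x ** math.log(y, b)
    rhs = y ** math.log(x, b)
    assert abs(lhs - rhs) < 1e-9 * max(lhs, rhs)
```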

(Incidentally, divide-and-conquer multiplication isn't the fastest known multiplication algorithm. The best known result is an O(n log n) algorithm. It's quite a bit more complicated, and I'm not sure if it's practical for reasonably sized numbers.)

## Implementation tricks

It happens that I've written one of the faster divide-and-conquer multipliers out there. Implementing algorithms so that they run quickly is a bit of an art. There are three tricks that my implementation uses.

• In recursing, the base case is actually when there are 8 digits, at which point the code falls back to grade-school multiplication. This is because the grade-school technique actually is faster for small numbers - the big-O analysis only applies to really big values of n. The choice of 8 digits is an experimentally determined value; I tried several values, and 8 was fastest on my computer.

• Carries are deferred until after the multiplication is complete. So, as the multiplication proceeds, some of the "digits" are greater than 9. It's still a base-10 representation, but the algorithm doesn't need the digits to be between 0 and 9. Thus, the implementation uses CC (two digits of value 12 each) to represent the number 12x10 + 12 = 132.

This saves in two respects: First, it means that we avoid extra, unnecessary computation while the recursion is going on, saving all the carrying to one quick pass at the end. (It would amount to several passes if we did the carry as we go along.) And second, it means we don't have to worry about the annoying extra digit that can pop up in computing x3.

• A simple implementation of divide-and-conquer multiplication would often allocate new arrays, basically for each recursive call. Memory allocation can get pretty complex, however, and sometimes takes quite a while. My implementation plays some pretty heavy tricks to avoid allocating memory. In particular, it allocates 6n array entries at the very beginning, before starting the recursion, and it aggressively reuses these array entries throughout the recursion process. This saves the overhead of continually allocating arrays.
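Of these tricks, the deferred-carry idea is the easiest to illustrate. A minimal sketch (my own, not the actual implementation): keep the number as a little-endian list of base-10 "digits" that may temporarily exceed 9, and normalize with a single carry pass at the end.

```python
def normalize(digits):
    # One carry pass over little-endian base-10 "digits" that may exceed 9,
    # producing proper digits in the range 0-9.
    carry = 0
    out = []
    for d in digits:
        carry += d
        out.append(carry % 10)
        carry //= 10
    while carry:                 # flush any carry left over at the top
        out.append(carry % 10)
        carry //= 10
    return out

# [12, 12] is the "CC" example from the text: 12 + 12*10 = 132.
print(normalize([12, 12]))      # digits of 132, least significant first
```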
