# DWise1: Old German Multiplication Method

 "Do the math!" - Sega 16-bit video game console commercial, c. 1990

### Abstract

Recently (September 2019) on YouTube I stumbled upon an odd offering: Altes Multiplizieren - vergessenes Wissen! Das hast du noch nie gesehen! ("Old Multiplication -- Forgotten Knowledge. You have never seen that before!"). The video series is "Lernen mit Lehrer Schmidt" ("Learning with Teacher Schmidt") and consists mainly of math videos by a math teacher in Germany, though at his web site he also covers topics such as physics, German (grammar and literature), and handicrafts. They are of course in German without subtitles.

I work through it below, but basically the method is to iteratively double one number and halve the other (always rounding down, AKA truncating), then add together all the numbers in the doubled column whose corresponding halved numbers are odd. At first it looks very strange, and I was about to try to figure it out with an algebraic proof when I saw a comment describe it as a binary operation. With that I immediately recognized it as the standard binary multiplication method I had learned decades ago, either in USAF tech school or while working on my computer science degree; I hadn't recognized it at first because this one uses decimal numbers instead of binary ones. Now what he's doing makes perfect sense.

This method is certainly old. Most of the comments either said that they had never seen this before or else their grandparents had shown it to them (these were mainly middle-aged or senior themselves, so their grandparents had to have learned it in the late 1800's). What I'm still not sure of is whether this was indeed how everybody used to be taught in German schools and if this method was also used in other countries. We do know that binary mathematics go back to ancient India and Egypt, so it would make sense that a mathematician would have devised this method long ago. I have found this method described in Wikipedia as Russian Peasant Multiplication, the advantage being that you do not need to memorize multiplication tables but rather you only need to know how to multiply and divide by 2, how to add, and how to tell whether a number is odd or even.

Since for some odd reason not everybody understands German, I'm writing this page to present the method in English. Then I describe binary multiplication to show how and why this works.

### The Method

Here's the basic procedure presented in the video:
1. Write the numbers side by side with the smaller number on the left. It works just as well with the larger number on the left, but then this keeps the number of iterations smaller.
2. Successively double the number on the right and halve the number on the left until the number on the left is one (1). When you halve an odd number, just drop the fraction (ie, round down).
3. Cross out (ie, remove or ignore) all lines where the number on the left is even.
4. Add all the remaining numbers on the right (ie, all the ones where the number on the left is odd) and you get the answer.

Here are the examples from the video:

• Set up the problems and do the doubling and halving:
• ```    5 × 25        4 × 35        3 × 25       5 × 35        7 × 45        6 × 40        7 × 21      12 × 31

5 --  25      4 --  35      3 -- 25      5 --  35      7 --  45      6 --  40      7 -- 21     12 --  31
2 --  50      2 --  70      1 -- 50      2 --  70      3 --  90      3 --  80      3 -- 42      6 --  62
1 -- 100      1 -- 140                   1 -- 140      1 -- 180      1 -- 160      1 -- 84      3 -- 124
                                                                                                1 -- 248
```

• Eliminate the lines with an even number on the left by crossing out those lines:
• ```    5 × 25        4 × 35        3 × 25       5 × 35        7 × 45        6 × 40        7 × 21      12 × 31

5 --  25      4 --  35      3 -- 25      5 --  35      7 --  45      6 --  40      7 -- 21     12 --  31
2 --  50      2 --  70      1 -- 50      2 --  70      3 --  90      3 --  80      3 -- 42      6 --  62
1 -- 100      1 -- 140                   1 -- 140      1 -- 180      1 -- 160      1 -- 84      3 -- 124
                                                                                                1 -- 248

```

• Add the remaining numbers on the right, which are the ones with an odd number on the left:
• ```    5 × 25        4 × 35        3 × 25       5 × 35        7 × 45        6 × 40        7 × 21      12 × 31

5 --  25      4 --  35      3 -- 25      5 --  35      7 --  45      6 --  40      7 -- 21     12 --  31
2 --  50      2 --  70      1 -- 50      2 --  70      3 --  90      3 --  80      3 -- 42      6 --  62
1 -- 100      1 -- 140                   1 -- 140      1 -- 180      1 -- 160      1 -- 84      3 -- 124
                                                                                                1 -- 248
-----         -----         ----         -----         -----         -----        -----         -----
125           140           75           175           315           240          147           372
```

And so you can see that it works.

If the crossed-out lines do not display as struck through in your browser, then refer to this write-up as you watch the video, in which Lehrer Schmidt writes all this out, one problem at a time (yes, that really is how they write ones and sevens and nines). There you can see him crossing out the lines with an even number on the left and leaving those lines out of the addition step.

### How It Works (Multiplying in Binary)

So how does it work? I was ready to try to work it out algebraically when I noticed an entry in the comments section saying that this was binary multiplication. About four decades ago I went through USAF tech school in computer systems repair (AFSC 305x4, Electronics Computer Systems Repairman) and then I worked on my computer science degree. In one of those experiences (I forget which, but my memory leans towards tech school), we went through how a computer performs integer multiplication. Basically, that operation is a series of shifts and conditional adds, which I will discuss below.

But first we need some background knowledge so that everyone can understand it.

#### Basic Number Theory

Don't be afraid. It's simply how we do position-value numbers (which you just know as "numbers") and how that translates to other number bases (maybe the only new concept for you here). This basic number theory was most of the "New Math" in the mid-1960's when I had it in junior high summer school. I understood it completely within the first hour of instruction, but then we spent weeks of monotonous busy work in order to drum it into us. (Trivia note: in German student slang, a teacher is a "Pauker", a drum beater, because he drums the subject matter into your head -- though now apparently when a student studies hard, "hitting the books", that's also "pauken" ("beating a drum").)

We'll work first in decimal, AKA "Base 10" because it uses ten digits, which range from 0 to 9. Those digits are placed into positions in the number which define their values. The other number bases work their numbers the same way as decimal, the difference being how many digits they use, which also changes the positions' values.

For example, consider the number 1234, one thousand two hundred thirty four. Going from right to left, the positions are "the one's place", "the ten's place", "the hundred's place", and "the thousand's place" (sound familiar?). So to find the value of 1234, we take each digit, multiply it by its position's value, and finally add them all together. Therefore:

1234 = 1 × 1,000 + 2 × 100 + 3 × 10 + 4 × 1 = 1,000 + 200 + 30 + 4
Yeah, it's that simple. And the New Math had us spend half the summer decomposing numbers and then performing basic arithmetic with those decomposed numbers. But it helps us to understand how to work with other number bases.

Please note that the position values are based on the powers of the number of digits in the number base, which for decimal is 10:

1 = 10⁰; 10 = 10¹; 100 = 10²; 1,000 = 10³
Therefore:
1234 = 1 × 10³ + 2 × 10² + 3 × 10¹ + 4 × 10⁰ = 1,000 + 200 + 30 + 4
That more general form is the key to creating numbers in other number bases. Since our goal here is working in binary, let's examine that.

Binary, Base 2, has two digits, 1 and 0. The position values of a binary number are:

2⁰ = 1
2¹ = 2
2² = 4
2³ = 8
2⁴ = 16
2⁵ = 32
2⁶ = 64
2⁷ = 128
So if we take the binary number 00100111, we find that we have ones in the 32's place, the 4's place, the 2's place, and the 1's place. To convert that number to its decimal equivalent, we add the values of the ones: 32 + 4 + 2 + 1 = 39. I should also mention that with 8 bits (binary digits) you can express 256 (2⁸) values ranging from 0 to 255.

That's what we end up doing, but what are we actually doing? We are placing the digits in their positions and multiplying each digit by its place value. That is exactly what we do in decimal:

00100111₂ = 0 × 2⁷ + 0 × 2⁶ + 1 × 2⁵ + 0 × 2⁴ + 0 × 2³ + 1 × 2² + 1 × 2¹ + 1 × 2⁰
= 0 + 0 + 2⁵ + 0 + 0 + 2² + 2¹ + 2⁰
= 32 + 4 + 2 + 1

00100111₂ = 39

This demonstrates a unique characteristic of working with binary numbers. The values of the digits (AKA "binary digits", AKA "bits") are one of two values: 1 or 0. So when you multiply a digit by its position value, you are multiplying by one or by zero. One times any number is that number (eg, 1 × 42 = 42) and zero times any number is zero. So to find the value of a binary number you do not need to do any actual multiplying; rather, all you need to do is see which positions contain ones and simply add all such position values together. Not only is it very simple, but that same unique characteristic of working in binary will come into play in binary multiplication.

I'll quickly mention two other bases that programmers have commonly worked with: octal (Base 8) and hexadecimal (Base 16). The reason why they are so common is that you can easily convert back and forth between them and binary. With three bits you can represent 8 digits (0 to 7), which by definition is octal, and with 4 bits you can represent 16 digits (0 to F), which by definition is hexadecimal. To convert from binary to octal or hexadecimal, you group the bits by 3 or by 4, respectively, and simply read them. Every experienced programmer has the binary codes for those digits memorized from repeated use, since we almost always use either octal or hex to work with binary values (hexadecimal, AKA "hex", is most commonly used at present). These two number bases are of no use in this discussion, but if you're interested in learning more about them, see their Wikipedia articles.

#### Long Multiplication

Binary long multiplication is derived from decimal long multiplication, so we'll review the decimal version first.

Let's start with an example problem: 4,096 × 1,024 = 4,194,304 (yes, that is 2¹² × 2¹⁰ = 2²²; I am a programmer after all, so nice round numbers like these powers of two come immediately to mind). We will need standard mathematical terminology for multiplication:

• Multiplicand -- the number being multiplied, normally placed on the first line. In this example that's the 4,096.
• Multiplier -- the number you are multiplying by, normally placed on the second line. In this example that's the 1,024.
• Product -- the result of the multiplication operation, normally placed on the final line. In this example that's the 4,194,304.

I will use the standard form that I learned in elementary school in the late 1950's in the USA. I'm qualifying this just in case they've changed how they teach it¹, or in case you're from a country where it's taught differently. To examine that standard form, let's revert to the New Math approach of the mid-60's in order to see exactly what's happening. So then, rather pedantically:

• Decompose the multiplier and write the problem thus (switching to powers of ten for illustrative purposes):
• product = 4,096 × (1,000 × 1 + 100 × 0 + 10 × 2 + 1 × 4) = 4,096 × (10³ × 1 + 10² × 0 + 10¹ × 2 + 10⁰ × 4)

• Applying the commutative property to the decomposed multiplier, we reorder its terms to the order we will use in long multiplication:
• product = 4,096 × (10⁰ × 4 + 10¹ × 2 + 10² × 0 + 10³ × 1)

• Now apply the distributive property:
• product = 4,096 × (10⁰ × 4 + 10¹ × 2 + 10² × 0 + 10³ × 1)
= (4,096 × 10⁰ × 4) + (4,096 × 10¹ × 2) + (4,096 × 10² × 0) + (4,096 × 10³ × 1)

• Of course, the terms being multiplied by zero will be zero and hence can just be dropped out:
• product = (4,096 × 10⁰ × 4) + (4,096 × 10¹ × 2) + (4,096 × 10³ × 1)

• Next, in each term we multiply the multiplicand by the indicated powers of ten, which is accomplished by shifting the multiplicand left one place per zero:
• product = (4,096 × 4) + (40,960 × 2) + (4,096,000 × 1)

HINT: to multiply by a power of ten, you shift the multiplicand left that number of places; eg, to multiply by 100 (10²) you shift left by two places (AKA adding two zeros on the right end).

• Now in each term we multiply the multiplicand by the multiplier's digits:
• product = (4,096 × 4) + (40,960 × 2) + (4,096,000 × 1)
product = 16,384 + 81,920 + 4,096,000

• And finally we add the terms together to get the product:
• product = 4,194,304

That is what we are doing in long multiplication, albeit described in a pedantic New-Math approach. We will use this understanding to approach long multiplication in binary.

But in the meantime, let's now apply our understanding to the standard format we're all familiar with (at least the one taught when I went through the program). As we do this, use the treatment above to understand what's behind our actions in the standard format:

• Write the problem, leaving out those pesky commas:
• ```     4096    Multiplicand
   × 1024    Multiplier
---------

```

• Multiply the multiplier's 1's digit (4) by the multiplicand with zero shift (ie, × 10⁰):
• ```     4096    Multiplicand
   × 1024    Multiplier
---------
    16384

```

• Multiply the multiplier's 10's digit (2) by the multiplicand with a left shift of 1 (ie, × 10¹). Note that I am including the right-most zero in order to show that explicit multiplication by a power of ten, whereas we were taught to just shift it left and leave that space blank:
• ```     4096    Multiplicand
   × 1024    Multiplier
---------
    16384
    81920

```

• Multiply the multiplier's 100's digit (0) by the multiplicand with a left shift of 2 (ie, × 10²). Since we're multiplying by zero, we do not write anything down. However, another left shift has occurred, which we will see in the next step.
• ```     4096    Multiplicand
   × 1024    Multiplier
---------
    16384
    81920

```
Of course, we're taught to just skip that step when the multiplier's digit is zero, but in reality we are multiplying the multiplicand by zero, which is zero, and adding that to the intermediate results, which does not change them. So we're effectively skipping that step, but we need to keep in mind what's actually happening, which is the reason why we can skip it.

• Multiply the multiplier's 1,000's digit (1) by the multiplicand with a left shift of 3 (ie, × 10³):
• ```     4096    Multiplicand
   × 1024    Multiplier
---------
    16384
    81920
  4096000

```

• Add the terms to get the product:
• ```     4096    Multiplicand
   × 1024    Multiplier
---------
    16384
    81920
  4096000
---------
  4194304    Product
```

And just so that you can see it in a more familiar form, here is the final result with zeros replaced by blanks:

```     4096    Multiplicand
   × 1024    Multiplier
---------
    16384
    8192
  4096
---------
  4194304    Product
```

Now as we move on to long multiplication in binary, we will see that it works in exactly the same way. However, there are a couple of things that make it much easier and which are reflected in this "Old Multiplication" method.

Footnote 1:
In 1964 Tom Lehrer wrote his satirical song, New Math, for the US version of the TV weekly news satire, That Was The Week That Was (TW3). The song was released on Tom Lehrer's album, That Was The Year That Was, along with his other songs for TW3 and his performance of the song in concert is on YouTube.

The premise of the song is that the changes in the New Math mean that parents can no longer help their children with their homework, so he offers a quick lesson to bring the parents up to speed with a simple subtraction problem. After doing it in decimal (Base 10), he notices that the book asks that we do it in Base 8 (octal -- "Don't worry. Base 8 is just like Base 10 if you're missing two fingers.").

While working with that problem, he reveals an earlier case of a change in how borrowing in subtraction was done. I was taught to "borrow 1" (actually 10 because of position values) from the top digit immediately to the left. But (in 1964) if you're over 35 or went to a private school, you would add 1 to the bottom digit immediately to the left. My mother told me that that was how she was taught to do it. I have a 1914 arithmetic primer for grades 1 through 6 which uses that same technique my mother had been taught.

Another example was when my older sister was learning algebra, so our father would help her with her homework. Then one day she brought home a note from her teacher asking our father to stop helping her because he was using different methods from what they were teaching.

#### Binary Multiplication

Long multiplication in binary works like long multiplication in decimal, except for two things:

1. Since the place values are in powers of two, when you shift a binary number to the left, then you are multiplying it by 2. And when you shift to the right, you are dividing by 2. This fact ties in directly to the "Old Multiplication" method where we repeatedly double the multiplicand and halve the multiplier.

2. The only things you will multiply the multiplicand by will be either zero or one. Multiplying anything by zero is always zero. Multiplying anything by one is always itself -- just keep in mind that for each iteration we will have shifted the multiplicand to the left.

That second item means that you only ever multiply by 0 or 1, both of which are trivial to do. That is one reason why Lehrer Schmidt's method is so easy to use.

These two unique characteristics of binary multiplication are why computer students and professionals describe it as "shift-and-add". I will demonstrate this below.

Addition of two bits (A and B) can be represented by the following truth table:

```    A + B

A   B   Sum   Carry
0   0    0      0
0   1    1      0
1   0    1      0
1   1    0      1
```

Note the last line where you add 1 + 1 = 2, which in binary would be 10₂, just as in decimal 5 + 5 = 10₁₀. In both cases, the sum in that position is zero with a 1 being carried to the next higher position.

Since keeping track of carries can get messy in long-column addition in binary, I will modify the long-multiplication method slightly by performing an addition in each iteration, thus maintaining a running subtotal. This will also reflect how the integer multiplication algorithm is performed on a computer.

For our example, let's use one of Lehrer Schmidt's examples: 12 × 31. In binary, 12₁₀ = 1100₂ and 31₁₀ = 11111₂. For this example I will not show leading zeros, but, since a computer's registers are of a fixed length, in a computer those leading zeros will be there. The example:

• Set up the problem:
• ```    1100₂ × 11111₂

    11111    Multiplicand
   × 1100    Multiplier
---------

```

• Multiply the multiplier's 1's digit (0) by the multiplicand with zero shift (ie, × 2⁰). Since we're multiplying by zero, we do not write anything down.
• ```    11111    Multiplicand
   × 1100    Multiplier
---------

```

• Multiply the multiplier's 2's digit (0) by the multiplicand with a left shift of one (ie, × 2¹). Since we're multiplying by zero, we do not write anything down.
• ```    11111    Multiplicand
   × 1100    Multiplier
---------

```

• Multiply the multiplier's 4's digit (1) by the multiplicand with a left shift of two (ie, × 2²). Since we're multiplying by one, write the shifted multiplicand down. Since it's the first term, do not add it to anything (there's nothing to add it to except zero).
• ```    11111    Multiplicand
   × 1100    Multiplier
---------
  1111100

```

• Multiply the multiplier's 8's digit (1) by the multiplicand with a left shift of three (ie, × 2³). Since we're multiplying by one, write the shifted multiplicand down and add it to the previous term.
```    11111    Multiplicand
   × 1100    Multiplier
---------
  1111100
 11111000
---------
101110100

```

• Since the remaining higher-order bits of the multiplier are all zero, the subtotal is the final sum and hence the product.
• ```    11111    Multiplicand
   × 1100    Multiplier
---------
  1111100
 11111000
---------
101110100    Product

101110100₂ = 372₁₀

```
• Refer to the description of Lehrer Schmidt's method and look at this problem, 12 × 31, and note:

1. We get the same product, 372.

2. Lehrer Schmidt's example ignored the first two multiplications (evens) and only used the last two (odds), just as we did here.

So this demonstrates that Lehrer Schmidt's method is using binary multiplication. However, in order to more closely match what he's doing we need to look at how computers multiply integers.

#### How Computers Multiply

Here is a very brief, very simplified explanation of how computers work. This will help you understand the following explanation of how they multiply.

One of a programmer's common techniques in designing or debugging code is to "play computer" with pencil and paper. That is to say, we test an algorithm or code function by stepping through it, writing down the values of all the variables as the code changes them, and examining the results. That is basically what Lehrer Schmidt is doing in his method, and I will do the same here.

In general (ie, actual mileage may vary in some specialized hardware):

• Every computer has at least one CPU (central processing unit), the actual "brain" of the computer which performs every instruction in a program.

• Every CPU has at least one ALU (arithmetic logic unit) or at least its functionality. An ALU could be a distinct unit in the CPU or else its functionality could be incorporated into the CPU's logic. An ALU performs a set of binary integer arithmetic and logic operations on binary values input into it.

• Every CPU also has a number of registers, which are used to contain binary values. A register is hardware circuitry used to hold a pre-determined number of binary digits (AKA bits) specified by the computer design (eg, 8 bits, 16 bits, 32 bits, 64 bits -- notice how they tend to follow the powers of two, though many older designs had other word sizes; IOW, although there are conventions based on valid reasons, the design is also arbitrary).

The binary numbers contained in a register can be used as integer values, parts of a floating-point value (some computers have hardware to support floating-point values and operations, floating-point units (FPUs), while others emulate an FPU in software), the address of a memory location containing data, the address of a memory location containing the address of a memory location containing data, etc ad infinitum (constrained by practical hardware and software limitations), character data, status flags, special data formats, etc. Basically, virtually all the work that a computer does is done in the CPU registers.

A particular pattern of bits has no inherent properties to tell you what it represents. Bits are bits. The significance of a particular grouping of bits depends entirely on how the software uses those bits. Indeed, the same grouping of bits can be used as different kinds of data (eg, in C you can treat characters as numbers (eg, their ASCII codes) and perform arithmetic on them, and also in C you can use a data structure called a union in order to use the same data as any number of different data types). Admittedly, this very important point has no direct bearing on the subject of this page, but it is an important computer science point anyway, one which I've seen confuse many beginning programmers who haven't yet realized that internally it's all just nothing but bits.

Depending on the CPU's architecture, certain registers can be given special purposes as well as special operations. One of the purposes important to our discussion is the ability to shift left (ie, multiply by two) or right (ie, divide by two). Some CPUs will have general purpose registers capable of shifting, whereas others would need to do something like using the ALU to do that. Basically, every CPU architecture is unique, so compilers for those architectures will need to be written to take that into account in order to get the same results at the higher level. Almost all of that ends up being transparent to all users and programmers.

In some architectures, two registers can be linked together to logically form a single register twice the normal length. For example, multiplying two registers of standard bit-length will result in a product that can be up to twice the bit-length of the multiplicand or the multiplier. In such cases for the multiplication algorithm (see immediately below), the "register" allocated to contain the product could instead be a pair of registers containing the higher- and lower-order bits of the product. In some architectures, the instruction set can take that into account, whereas in others you have to take care of that yourself. Same outcome, but different means of getting there.

Also, some registers have special functions, and some have more functionality than others. For example, one, the instruction pointer, always contains the memory address of the next instruction to be executed. The Memory Address Register is loaded with the memory address to be accessed. Some more complex instructions require certain registers to be loaded with the parameters they need (eg, a counter to control a loop). And there is one very special general purpose register, the accumulator (almost always Register A), which supports all possible arithmetic and logical operations, including shifting. We will use the accumulator below.

• Every CPU supports an instruction set, a set of individual operations that the CPU can perform. In my technician training with the COM-TRAN TEN trainer (used both by the US Air Force and US Navy around the late 70's), we "chased sparks" through the logic diagrams representing the hardware in those computers, seeing how every individual instruction was implemented in that computer. Those instructions include (but are not limited to) moving data from register to register, from register to memory, from memory to register, from register to I/O device (which contains its own registers), from I/O device to register, from memory to I/O device, from memory location to memory location, etc. Those instructions also include arithmetic and logic operations on a single register or between any combination of registers and memory locations (possible permutations depending on individual instruction sets) -- that includes setting to zero, AND'ing, OR'ing, XOR'ing, shifting left or right, incrementing, decrementing, negating, adding, subtracting, multiplying, and dividing (older, less powerful processors wouldn't have multiply or divide instructions, so those had to be emulated in software). Those instructions also include performing a number of tests on status or individual bits in a register and jumping to another location in the program depending on that test.

And so on, all depending on the instruction set and CPU architecture, which vary from one processor to the next, though most processor manufacturers have created "processor families" and try to maintain some degree of compatibility between processors within the same family. If you have an Apple app (built for a Motorola processor, though that knowledge could be outdated) and try to load and run it on a Windows box (Intel, an entirely different processor), it could not possibly work, because those two processor families have entirely different instruction sets. Did I forget to mention that every instruction has a binary numeric op-code value which varies between processors? So the first major difference between processors is that they assign entirely different numeric values to their op-codes. Then pile on top of all that the necessary differences between operating systems (eg, a Linux executable will not run on Windows even though they usually use the same Intel processor family, because the executable file formats are different). But the point is that the choice of CPU is the first barrier.

• All computer programs that have ever been written have been translated down to a sequence of CPU instructions. The translation of extremely high-level programs down to a sequence of individual CPU instructions can be mind-bogglingly complex, but it still happens nonetheless.

So from the perspective of CPU instructions and registers, here is the algorithm for performing integer multiplication:

1. Load two shift registers with the multiplier and multiplicand and zero out a third register, an accumulator, which will end up containing the product.
2. If the multiplier is zero, exit. The product is in the accumulator.
3. Test the least significant bit (LSB) of the multiplier. If the LSB is a 1, then add the multiplicand to the accumulator.
4. Shift the multiplicand left by 1 (double it) and shift the multiplier right by 1 (halve it).
5. Goto Step 2.

Did I forget to define even and odd numbers? Algebraically, an even number is 2n while an odd number is 2n+1. In a binary number, if the least significant bit (LSB) is a zero (ie, a zero in the 1's place), then the number is even, but if the LSB is a one (ie, a one in the 1's place) then the number is odd. The LSB is the bit that we're multiplying the shifted multiplicand by, so if the LSB is a one then we multiply by 1 and add the result to the accumulator, but if the LSB is zero then we add zero to the accumulator.

Now let's play computer! This is what I have done so very many times for the four decades of my computer engineering training and career.

First, let's assign values to registers. If we were doing this in a higher-level language, these would be variables -- for the last few decades of my career, I mostly worked in C, which occupies a gray zone between higher- and lower-level languages, a comfortable environment for a traditional programmer.

Let's set up the three registers/variables referred to in the algorithm specified directly above: the multiplier, the multiplicand, and the accumulator (to which the shifted multiplicand will be added as per the algorithm and which will end up containing the product). We will load the multiplier value into the multiplier, the multiplicand into the multiplicand, and zero into the accumulator. Also note that, if registers are used, the multiplier and multiplicand registers must support shifting.

Our example will be one of Lehrer Schmidt's examples: 5 × 25 → 101₂ × 11001₂:

```    5 × 25 ⟶ 101₂ × 11001₂

multiplier     multiplicand    accumulator
0101             11001               0        Load the registers.
0101             11001           11001        First test.  LSB is 1 (ie, line is odd) so add multiplicand to the accumulator.
0010            110010           11001        Shift and test.  LSB is 0 (ie, line is even) so add nothing to the accumulator.
0001           1100100         1111101        Shift and test.  LSB is 1 (ie, line is odd) so add shifted multiplicand to the accumulator.
0000          11001000         1111101        Shift and test.  Multiplier is zero, so exit. Product is in the accumulator.

1111101₂ = 125₁₀
```

Here's another example, the one we did with binary long multiplication above, 12 × 31:

```    12 × 31 ⟶ 1100₂ × 11111₂

multiplier     multiplicand    accumulator
1100             11111               0        Initialize the registers.
1100             11111               0        First test.  LSB is 0 (ie, line is even) so add nothing to the accumulator.
0110            111110               0        Shift and test.  LSB is 0 (ie, line is even) so add nothing to the accumulator.
0011           1111100         1111100        Shift and test.  LSB is 1 (ie, line is odd) so add shifted multiplicand to the accumulator
0001          11111000       101110100        Shift and test.  LSB is 1 (ie, line is odd) so add shifted multiplicand to the accumulator
0000          11111000       101110100        Shift and test.  Multiplier is zero, so exit. Product is in the accumulator.

101110100₂ = 372₁₀
```

That is what Lehrer Schmidt is doing in his video, except that he keeps the values in their decimal form instead of converting them to binary. Decimal numbers still retain their binary properties; you just cannot see them.

### The Multiplication Algorithm in C

Here's a simple implementation of the binary multiplication algorithm in a higher-level language, C. Note that I have not tested it for negative numbers (which would be represented with two's complement, but I've taxed your brain too much already).

```    long multiply(short multiplicand, short multiplier)
{
    long acc = 0L;          // declare accumulator and initialize it to zero
    long m = multiplicand;  // widen the multiplicand so repeated doubling cannot overflow

    // test that the multiplicand is non-zero
    //   if the multiplicand is zero, then the product will be zero
    if (m)
    {
        while (multiplier)      // loop while multiplier is not zero
        {                       // in all tests, zero == false, non-zero == true
            if (multiplier & 1) // perform AND to test the LSB
            {                   //   if the LSB is 1, then the multiplier is odd
                acc += m;       // add the shifted multiplicand to the accumulator
            }
            // Note: all values are actually in binary, so shifting is also in binary
            m <<= 1;            // double the multiplicand: shift left by one
            multiplier >>= 1;   // halve the multiplier: shift right by one
        }
    }

    return acc;
}
```

This code is provided solely for educational purposes and should not be used in commercial products. Besides, C fully supports multiplication and division. This kind of code would only be needed for emulating multiplication in assembly programs for processors that do not have multiply instructions, but in that case you should be using a third-party math emulation package.