Now, on to the actual algorithm.
Say, for example, that you have a big number n and you want to know the factors of n. Let's use 16843009. And, say, for example, that we know that n is not a prime number. In this case, I know it isn't because I multiplied two prime numbers together to make n. (For the crypto weenies out there, you know that there are a lot of numbers lying around which were made by multiplying two prime numbers together. And, you probably wouldn't mind finding the factors of some of them.) In cases where you don't know, a priori, that the number is composite, there are a variety of methods to test for compositeness.
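As one quick sketch of such a check in Python (assuming the sympy library is available; any probabilistic primality test would serve just as well):

    from sympy import isprime   # assumes sympy is installed

    print(isprime(16843009))    # False, so the number is composite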
Let's assume that n has a factor d. Since we know n is composite, we know that there must be one. We just don't know what its value happens to be. But, there are some things that we do know about d. First of all, d is smaller than n. In fact, at least one factor d can be no bigger than the square root of n: if both factors in a two-way split of n were bigger than the square root, their product would be bigger than n.
So, how does this help? If you start picking numbers at random (keeping your numbers greater than or equal to zero and strictly less than n), then the only time you will get a = b modulo n is when a and b are identical. However, since d is smaller than n, there is a good chance that a = b modulo d sometimes when a and b are not identical.
Well, if a = b (mod d), that means that (a-b) is a multiple of d. Since n is also a multiple of d, the greatest common divisor of (a-b) and n is a positive, integer multiple of d. So, now, we can divide n by whatever this greatest common divisor turned out to be. In doing so, we have broken down n into two factors. If we suspect that the factors may be composite, we can continue trying to break them down further by doing the algorithm again on each half.
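To make that concrete, here is a small sketch in Python using the running example; the values of a and b below are just made-up numbers chosen to be congruent modulo 257, one of the factors of 16843009.

    from math import gcd

    n = 16843009        # 257 * 65537, the running example
    d = 257             # a divisor of n (in practice, unknown to us)

    a, b = 1771, 1000   # congruent modulo d: 1771 - 1000 = 771 = 3 * 257
    g = gcd(a - b, n)   # a positive multiple of d which also divides n
    print(g, n // g)    # splits n into two factors: 257 and 65537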
The amazing thing here is that through all of this, we just knew there had to be some divisor of n. We were able to use properties of that divisor to our advantage before we even knew what the divisor was!
This is at the heart of Pollard's rho method. Pick a random number a. Pick another random number b. See if the greatest common divisor of (a-b) and n is greater than one. If not, pick another random number c. Now, check the greatest common divisor of (c-b) and n. If that is not greater than one, check the greatest common divisor of (c-a) and n. If that doesn't work, pick another random number e (we'll avoid the letter d, which is already taken by our divisor). Check (e-c), (e-b), and (e-a). Continue in this way until you find a factor.
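As a rough sketch of that brute-force procedure in Python (the function name and the way the random residues are drawn are just illustrative choices):

    import random
    from math import gcd

    def naive_factor(n):
        # keep every pick and gcd its difference against all earlier picks
        picks = []
        while True:
            x = random.randrange(n)
            for y in picks:              # k-1 gcd checks on the k-th pick
                g = gcd(abs(x - y), n)
                if 1 < g < n:
                    return g
            picks.append(x)

    print(naive_factor(16843009))        # should print 257 or 65537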
As you can see from the above paragraph, this could get quite cumbersome quite quickly. By the k-th iteration, you will have to do (k-1) greatest common divisor checks. Fortunately, there is a way around that. By structuring the way in which you pick “random” numbers, you can avoid this buildup.
Let's say we have some polynomial f(x) that we can use to pick “random” numbers. Because we're only concerned with numbers from zero up to (but not including) n, we will take all of the values of f(x) modulo n. We start with some x_1. We then choose to pick our random numbers by x_(k+1) = f(x_k).
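In Python, that sequence might be generated like this; the particular f and the starting value are arbitrary choices for illustration.

    n = 16843009
    f = lambda x: (x * x + 1) % n    # an arbitrary illustrative choice of f

    x = 2                            # x_1, also an arbitrary choice
    xs = []
    for _ in range(8):
        xs.append(x)
        x = f(x)                     # x_(k+1) = f(x_k), reduced modulo n
    print(xs)                        # the first eight "random" numbers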
Now, say for example we get to some point j where x_j = x_k modulo d with k < j. Then, because of the way that modulo arithmetic works, f(x_j) will be congruent to f(x_k) modulo d. So, once we hit upon x_j and x_k, each element in the sequence starting with x_j will be congruent modulo d to the corresponding element in the sequence starting at x_k. Thus, once the sequence gets to x_j it has looped back upon itself to match up with x_k (when considering them modulo d).
This looping is what gives the Rho method its name. If you go back (once you determine d) and look at the sequence of random numbers that you used, considering them modulo d, you will see that they start off just going along by themselves for a bit. Then, they start to come back upon themselves. They don't typically loop the whole way back to the first number of your sequence. So, they have a bit of a tail and a loop, just like the Greek letter “rho”.
Before we see why that looping helps, we will first speak to why it has to happen. When we consider a number modulo d, we are only considering the numbers greater than or equal to zero and strictly less than d. That is a finite set of only d values. Your random sequence cannot possibly go on for more than d numbers without having some number repeat modulo d. And, if the function f(x) is well-chosen, the repeat will probably come a great deal sooner; for a function that behaves randomly, the birthday paradox puts the first repeat at roughly the square root of d steps.
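You can watch this happen with the running example, since we secretly know that d = 257 divides 16843009. This sketch just iterates the same illustrative f as before and reports the first repeat modulo d; by the pigeonhole argument above, the loop is guaranteed to break out within d+1 steps.

    n, d = 16843009, 257            # we secretly know d divides n
    f = lambda x: (x * x + 1) % n   # same illustrative f as before

    x, seen = 2, {}
    for i in range(1, d + 2):       # a repeat modulo d must occur by x_(d+1)
        r = x % d
        if r in seen:
            print("x_%d matches x_%d modulo %d" % (i, seen[r], d))
            break
        seen[r] = i
        x = f(x)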
The looping helps because it means that we can get away without piling up the number of greatest common divisor steps we need to perform. In fact, it makes it so that we only need to do one greatest common divisor check for every second random number that we pick.
Now, why is that? Let's assume that the loop is of length t and starts at the j-th random number. Say that we are on the k-th element of our random sequence. Furthermore, say that k is greater than or equal to j and that t divides k. Because k is at least j, we know x_k is inside the looping part of the Rho. We also know that if t divides k, then t also divides 2*k. What this means is that x_(2*k) and x_k will be congruent modulo d because they correspond to the same point on the loop. Because they are congruent modulo d, their difference is a multiple of d. So, if we check the greatest common divisor of (x_k - x_(k/2)) with n every time we get to an even k, we will find some factor of n without having to do k-1 greatest common divisor calculations every time we come up with a new random number. Instead, we only have to do one greatest common divisor calculation for every second random number.
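Putting the pieces together, here is a compact sketch of the whole method in Python. The slow variable plays the role of x_(k/2) and the fast variable the role of x_k; the function name, the starting value, and the constant in the polynomial are all just illustrative choices.

    from math import gcd

    def rho_factor(n, x1=2, a=1):
        # one round of the Rho method; returns a nontrivial factor of n,
        # or None if this particular x1 and a happen to fail
        f = lambda x: (x * x + a) % n
        slow = fast = x1
        while True:
            slow = f(slow)           # x_(k/2): one step per iteration
            fast = f(f(fast))        # x_k: two steps per iteration
            g = gcd(abs(fast - slow), n)
            if g == n:
                return None          # the sequence closed up modulo n; retry
            if g > 1:
                return g             # a nontrivial factor of n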
The only open question is what to use for the polynomial f(x) so that the “random” numbers it produces behave unpredictably enough modulo d. Since we don't usually know much about d, we really can't tailor the polynomial too much. A typical choice of polynomial is
f(x) = x^2 + a
where a is some constant which isn't congruent to 0 or -2 modulo n. If you don't place those restrictions on a, then you will end up degenerating into the sequence { 1, 1, 1, 1, ... } or { -1, -1, -1, -1, ... } as soon as you hit upon some x which is congruent to either 1 or -1 modulo n.
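Using the rho_factor sketch from above with this polynomial (the default a = 1 is an allowed choice), the running example splits as expected. Since 16843009 = 257 * 65537 and both of those factors are prime, whichever factor comes out, dividing it into n gives the other.

    n = 16843009
    g = rho_factor(n)          # uses f(x) = x^2 + 1
    if g is not None:
        print(g, n // g)       # 257 and 65537, in one order or the other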