Introduction
Let's find a primitive polynomial of degree 4 modulo 2.
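For example, assuming the usual command-line form Primpoly p n (shown here only as an illustration; see the program's own usage message for the exact syntax), we would run

    Primpoly 2 4

which asks the program to search for and print a primitive polynomial of degree 4 modulo 2.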
We can also do higher moduli.
You can also test a given polynomial for primitivity. Indicate the modulus after the comma.
Limits on the Size of the Prime p
The prime modulus p does have an upper bound, although a large one: $p < 2^{m/2 - 1}$, where m = the number of bits in the computer's integer type (defined in Primpoly.h). On most machines this will be 64 bits, giving $p < 2^{31} = 2147483648$.
A Word about the Self-Check
We print out a self-check file, called unitTest.log, in the same directory as Primpoly.exe. If the program fails, please send it to me to help diagnose any bugs.
Additional Program Settings
Setting the print statistics flag -s on the command line shows the progress of the program while it is finding a primitive polynomial.
If the inputs are incorrect, the program will tell you, and give you a big lecture.
Setting the check flag -c on the command line runs an independent check on the primitive polynomial after it is found to confirm it is primitive. The default is off, because this test can take practically forever for large values of $p^n$: using the definition of a primitive polynomial, we simply check that $x^k \pmod{f(x), p} \ne 1$ for $1 \le k \le p^n - 2$, but that $x^k \pmod{f(x), p} = 1$ for $k = p^n - 1$. This flag is useful for debugging purposes.
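Here is a minimal sketch of that brute force order check (illustrative only, not the program's actual implementation; the function name and the coefficient layout, lowest degree first with f monic of degree n, are assumptions):

    #include <vector>

    using Poly = std::vector<long long>;   // coefficient i is the coefficient of x^i

    // Return true if x has order p^n - 1 modulo (f(x), p), i.e. f is primitive,
    // by brute force:  multiply by x repeatedly and find the first k with x^k = 1.
    bool bruteForcePrimitivityCheck( const Poly & f, long long p )
    {
        const int n = static_cast<int>( f.size() ) - 1;   // f is monic of degree n

        long long order = 1;                 // compute p^n - 1
        for (int i = 0; i < n; ++i)
            order *= p;
        --order;

        Poly r( n, 0 );                      // residue polynomial, initially r(x) = 1
        r[ 0 ] = 1;

        for (long long k = 1; k <= order; ++k)
        {
            // Multiply r(x) by x:  shift coefficients up one degree.
            long long carry = r[ n - 1 ];    // coefficient spilling into degree n
            for (int i = n - 1; i > 0; --i)
                r[ i ] = r[ i - 1 ];
            r[ 0 ] = 0;

            // Reduce using x^n = -(f_{n-1} x^{n-1} + ... + f_0)  (mod f(x), p).
            for (int i = 0; i < n; ++i)
                r[ i ] = ((r[ i ] - carry * f[ i ]) % p + p) % p;

            // Check whether r(x) = 1.
            bool isOne = (r[ 0 ] == 1);
            for (int i = 1; i < n; ++i)
                if (r[ i ] != 0)
                    isOne = false;

            if (isOne)
                return k == order;           // primitive iff the first return to 1 is at k = p^n - 1
        }
        return false;                        // x never returned to 1 (e.g. f(0) = 0)
    }

The loop performs on the order of $p^n$ polynomial multiplications, which is exactly why the check becomes impractical for large $p^n$.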
To find and list all primitive polynomials, use the -a flag.
Multiple Precision Integer Arithmetic: C++ Version
The object oriented C++ version has no limit on the degree of the polynomial, except system memory, though it will slow down for very high degrees.
The size of p is limited by the size of the base we use for multiple precision arithmetic, which in turn depends on the native integer word size. If you run the program with a value of p larger than we can handle, you will get an error message.
The largest integer which needs to be computed anywhere in the program is $p^n$. Variables in the program which require extra precision are defined to be of BigInt type. Our multiple precision integer arithmetic is for unsigned numbers only, and is taken from Knuth's book:
D. E. Knuth, The Art of Computer Programming, Vol. 2, 3rd ed., Addison-Wesley, 1981, pp. 250-265. See also Knuth's errata for Volume 2.
Given p and n, we can find the maximum number of digits m and base b.
First, the base b. Let N = the number of bits in the integer type which we will use for our digits. The maximum unsigned integer is $2^N - 1$. The largest number which is computed in the multiple precision integer arithmetic algorithms is $b^2$. So if we let $b = 2^{N/2 - 1}$, we'll guarantee $b^2 = 2^{N-2} < 2^N - 1$, and b will be a power of 2, an N/2 digit binary number, which is needed to efficiently implement shifting and masking operations.
The maximum integer which needs to be computed is $p^n$. In the worst case $p^n = b^m - 1$, so $p^n + 1 = b^m$. A bound is then $p^n < b^m$, or $n \lg p < m \lg b$, or $(n \lg p) / (\lg b) < m$, so we take $m = \lceil (n \lg p) / \lg b \rceil$.
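As a quick worked example (the numbers here are illustrative, not taken from the program): on a machine with $N = 64$ bit unsigned integers, $b = 2^{31}$, so finding a primitive polynomial of degree $n = 400$ modulo $p = 2$ needs $m = \lceil (400 \lg 2) / \lg 2^{31} \rceil = \lceil 400 / 31 \rceil = 13$ multiple precision digits.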
Lower Bounds for p and n
They are p ≥ 2, n ≥ 2.
Limit on the Sizes of p and n: C Version
Since we use the system hardware integer arithmetic, there are internal limits within the program which restrict the size of the largest primitive polynomial which may be computed. We discuss bounds on various internal integer variables below. Most bounds are set in the source file pp.h and tested in pp.c.
Upper Bound on Size of Internal Integer Computations
The largest integer which needs to be computed anywhere in the program is $p^n$. Variables in the program which require this extra precision are defined to be of Bigint type.
This should be an unsigned integer type: all computations with Bigint involve multiplication, division, modulo, masking, and decrement by 1, so we never need negative numbers. The alternative is to use a signed integer type but reduce the number of bits used by 1, to avoid overflowing into the sign bit.
For the Windows/Intel platform, the largest integer type in the Microsoft Visual C++ 6.0 SP3 compiler is the signed 64-bit __int64. For a Mac OS X, PC/cygwin or other Unix platform which uses the GNU C compiler 4.0, we use the 64-bit long long type. We play it safe and assume this is also a signed type. Thus we need to set the maximum size of $p^n$ to $2^{64-1} - 1 = 2^{63} - 1$. So, for example, when p = 2, the highest polynomial degree is n = 62.
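A plausible way to express that per-platform choice (an illustration only; the actual definition lives in pp.h and may differ) would be:

    // One possible Bigint definition, chosen per platform as described above.
    #ifdef _MSC_VER
    typedef __int64 Bigint;        // Microsoft Visual C++
    #else
    typedef long long Bigint;      // GNU C on Mac OS X, cygwin, Unix
    #endif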
A Bound for p^n (MAXPTON)
The basic bounds are $p \ge 2$, $n \ge 2$ and $p^n \le \text{MAXPTON}$. All other bounds are derived from these.
A Bound for r
$r = \frac{ p^n - 1 }{ p - 1 }$, so the upper bound is $r \le \frac{ p^n - 1 }{ 2 - 1 } = p^n - 1 \le p^n \le \text{MAXPTON}$, and the lower bound is $r = \sum_{k=0}^{n-1} p^k \ge 1 + p \ge 3$, since $n \ge 2$.
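For example (simple arithmetic, just to make the formula concrete), with $p = 2$ and $n = 4$ we get $r = \frac{ 2^4 - 1 }{ 2 - 1 } = 15$.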
A Bound for the Number of Prime Factors (MAXNUMPRIMEFACTORS)
This is used for sizing the array containing the prime factors of r and their multiplicities. Writing $r = p_1^{e_1} \cdots p_k^{e_k}$, we have $r \ge p_1 \cdots p_k \ge 2 \cdots 2 = 2^k$, since the $p_i \ge 2$ are primes. Since $r \ge 3$ from above, we must have at least one prime factor, and so $k \ge 1$.
Thus $2^k \le r$, or $k \le \lg( r )$, where $\lg$ denotes $\log_2$. Then $k \le \lg( \text{MAXPTON} )$.
We need an integer value for the upper bound, so we play it safe and round up to MAXNUMPRIMEFACTORS = 1 + floor( lg( MAXPTON ) ).
This is also used for the array containing the prime factors of p-1. As above, the maximum number of prime factors of $p - 1$ is at most $\lg( p - 1 ) < \lg( p^n ) \le \lg( \text{MAXPTON} )$, so the upper bound above works fine.
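For example, with the 64-bit signed type above, $\text{MAXPTON} = 2^{63} - 1$, so $\lg( \text{MAXPTON} )$ is just under 63, giving MAXNUMPRIMEFACTORS = 1 + 62 = 63.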
A Bound for n (MAXDEGPOLY)
$2^n \le p^n \le \text{MAXPTON}$, so $n \le \lg \text{MAXPTON}$. Again, to play it safe, we chop this bound down to MAXDEGPOLY = floor( lg MAXPTON ). The bound is exact when p = 2.
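For example, with $\text{MAXPTON} = 2^{63} - 1$, MAXDEGPOLY = floor( lg( $2^{63} - 1$ ) ) = 62, which matches the highest degree n = 62 for p = 2 mentioned above.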
Number of Trials for Probabilistic Primality Testing (NUM_PRIME_TEST_TRIALS)
We error check the input parameter p to see if it is prime. Our probabilistic primality test has the failure probability P( test says p is prime | p is composite ) $\le \left( \frac{1}{4} \right)^{\text{NUM_PRIME_TEST_TRIALS}}$
The default is NUM_PRIME_TEST_TRIALS = 25. The test always succeeds when p really is prime; it can only be fooled, with the small probability above, into accepting a composite (non-prime) p. This depends somewhat on the quality of the system random number generator, rand().
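With the default of 25 trials, the failure probability is at most $(1/4)^{25} = 2^{-50} \approx 8.9 \times 10^{-16}$. For reference, here is a minimal sketch of a standard Miller-Rabin test of this kind (illustrative only, not the program's actual code; the function names are made up, and it assumes p fits comfortably in 64-bit arithmetic, say $p < 2^{31}$ as above):

    #include <cstdint>
    #include <cstdlib>

    // Square-and-multiply:  base^exp (mod m), safe for m < 2^31 in 64-bit arithmetic.
    static std::uint64_t powMod( std::uint64_t base, std::uint64_t exp, std::uint64_t m )
    {
        std::uint64_t result = 1;
        base %= m;
        while (exp > 0)
        {
            if (exp & 1)
                result = (result * base) % m;
            base = (base * base) % m;
            exp >>= 1;
        }
        return result;
    }

    // Miller-Rabin:  returns false if p is definitely composite,
    // true if p passed numTrials rounds (so p is prime with high probability).
    bool isProbablyPrime( std::uint64_t p, int numTrials = 25 )
    {
        if (p < 2)      return false;
        if (p < 4)      return true;        // 2 and 3 are prime
        if (p % 2 == 0) return false;

        // Write p - 1 = 2^s * d with d odd.
        std::uint64_t d = p - 1;
        int s = 0;
        while (d % 2 == 0) { d /= 2; ++s; }

        for (int trial = 0; trial < numTrials; ++trial)
        {
            // Bases are only as random as rand(), as noted above.
            std::uint64_t a = 2 + std::rand() % (p - 3);    // random base in [2, p-2]
            std::uint64_t x = powMod( a, d, p );
            if (x == 1 || x == p - 1)
                continue;                                   // this round finds no witness

            bool isWitness = true;
            for (int i = 1; i < s; ++i)
            {
                x = (x * x) % p;
                if (x == p - 1) { isWitness = false; break; }
            }
            if (isWitness)
                return false;                               // a proves p is composite
        }
        return true;                                        // probably prime
    }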