
GDLog Usage Notes

This is a short set of notes on the use of Oleg Babul's gdlog - they exist mainly so that I don't forget things!

For simplicity, we'll assume that we are given a prime p, a generator g of ZZp and a value t in ZZp for which we wish to calculate an x such that t = g^x mod p.
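As a toy illustration of the problem (with small made-up numbers, nothing to do with the challenge values): take p = 23 and g = 5; then the discrete logarithm of t = 17 is x = 7, since 5^7 mod 23 = 17.

```python
# Toy discrete logarithm instance (illustrative numbers only, not the challenge values).
p, g = 23, 5          # small prime and a generator of ZZp
x = 7                 # the (normally unknown) exponent
t = pow(g, x, p)      # t = g^x mod p, i.e. 17

# Brute force is only feasible for tiny p; NFS-based tools such as gdlog
# exist precisely because this search is infeasible for large p.
recovered = next(e for e in range(p - 1) if pow(g, e, p) == t)
print(recovered)      # 7
```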


  • in all time-based outputs from gdlog, et is the estimated time to completion and, unless suffixed with h, all timings are displayed in seconds
  • gdlog appears to require both its sieving polynomials to have 1 < degree <= 3 (which might limit its usefulness for post-processing relations collected by other NFS implementations?)
  • for the challenge:
    • average of 39 cores used over a 6-week period
    • phase 1 produced approximately 12 million relations within 3 weeks
    • phase 3 produced (after the challenge's submission deadline) the incorrect result of 10 607833 713671 622331 958593 898979 290746 974713 429220 998127 451286 349297 232203 445447 007724 535020 385315 291415 382670 421464, for t = 57 862149 747092 325030 167062 717207 431168 694932 216310 538267 343292 342318 477158 898286 092297 346029 386203 435452 141910 116025.
  • following the challenge, it appears that I had introduced a copying error when moving code from my commodity machines to the Huddersfield HPC. This mistake corrupted the sieved relations, with ensuing chaos! A fresh rerun of gdlog (on 16 cores) produced the following results:
    • phase 1 produced approximately 3.2 million relations within 1 week (see ??)
    • phase 2 produced a log base within 3 weeks (see ??)
    • phase 3 required ? weeks and ? solution attempts in order to produce a correct answer of ??.

Phase 1: Sieving

This stage of the algorithm is embarrassingly parallel and aims at building a sufficiently large collection of relations and ideals.

In order to do this, a pair of polynomials is first selected. The choice of these polynomials is critical to the runtime of the algorithm. The reader is referred to:

for further information on these issues.

For discrete logarithms in large prime fields (with p a safe prime), you'll almost certainly need to choose your own polynomials manually, as the default choices can lead to very long run times. Here, we'll assume that:

  • p = 2q + 1, for q a prime
  • f0 and f1 are our two polynomials over ZZ[X]
  • and that m in ZZp is such that f0(m) = 0 mod p and f1(m) = 0 mod p.
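A cheap sanity check that m really is a common root of f0 and f1 mod p can save a lot of wasted sieving. The helper below is a sketch that assumes coefficients are listed in ascending order of powers (an assumption about the job-file format); the example values are tiny illustrative ones, not the challenge parameters:

```python
def poly_eval_mod(coeffs, x, p):
    # Evaluate a polynomial with ascending coefficients [c0, c1, c2, ...]
    # at x, modulo p (Horner's rule).
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# Illustrative check with tiny numbers: f(X) = X^2 - 2 has root m = 3 mod 7.
assert poly_eval_mod([-2, 0, 1], 3, 7) == 0
```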

With a gdlog job file set up to contain, for example, the following (here lc0Fact is the factorisation of the most significant coefficient of the f0 polynomial, and p1Fact is the factorisation of p-1):

p: 64871953055083132442586738160300545132676055315639670194672701161688976212780255925431194294264487036940900075456699
q: 32435976527541566221293369080150272566338027657819835097336350580844488106390127962715597147132243518470450037728349
g: 28874886502820635270988382873765249080724636489175670968600351015660554179871058627909746857890394896866174667024027
f0:[ -350341802863515235428874094709894204855  17716971180757166540477367604139645968  441472165546405738919812033359164963440 ]
f1:[ 64  -99  -30  1 ]
m: 39954570559566723571719449282650117197915782766741932890919455166435188457869286321758554960521414441796342496308655
lc0Fact: [2 2 2 2 5 7 9533 7866136993862153 10512940372551401] 
p1Fact: [2 32435976527541566221293369080150272566338027657819835097336350580844488106390127962715597147132243518470450037728349]
job: gdlog

we first need to generate a factor base for the interval of numbers over which we wish to sieve out relations and ideals. We do this via the following single-threaded task:

import subprocess

def gen_factor_base(start, cpus):
    end = start + cpus*1000000
    cmd = "gdlg_sieve -q0%d -q1%d -fb -of gdlog.fb gdlog" % (start, end)
    return subprocess.Popen(cmd.split()).wait()

Next, we perform lattice sieving to generate a collection of relation files. The following Python function achieves this (a multi-process task):

def lattice_sieve(start, cpus):
    sieveprocs = []
    for cpu in range(0, cpus):
        start_slice = start + cpu*1000000
        end_slice = start + (cpu + 1)*1000000
        cmd = ("gdlg_sieve -tlattice -q0%d -q1%d -j gdlog.f -of gdlog_%d_%d.rel -if gdlog.fb gdlog"
               % (start_slice, end_slice, start_slice, end_slice))
        sieveprocs.append(subprocess.Popen(cmd.split()))
    # wait for every sieving process, reporting any failure
    for proc in sieveprocs:
        if proc.wait() != 0:
            return 1
    return 0

You should now have a large collection of *.rel files. Please note that, with large integer intervals, the number of files can exceed the maximum number of arguments allowed on a bash command line or similar (you can deal with this by cat'ing the results together in batches!).
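One way round the argument-limit issue is to do the concatenation from Python with glob rather than relying on shell expansion; the file names below just follow the pattern used by lattice_sieve above:

```python
import glob

def merge_relations(pattern="gdlog_*.rel", out="gdlog.rel"):
    # Concatenate every matching relation file into one, avoiding the
    # shell ARG_MAX limit that `cat *.rel > gdlog.rel` can hit.
    with open(out, "w") as dst:
        for name in sorted(glob.glob(pattern)):
            with open(name) as src:
                dst.write(src.read())
```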

It is essential to ensure that this set of files is properly formatted, or else time will be lost in performing the calculation (the later stages of the algorithm have not been coded as defensively as one might like!). Each line of a *.rel file should match the following EBNF:

<int> <int>; [<int>]+; [<int>]+;

The first two numbers are the numbers a and b (these specify a relation), and are such that gcd(a, b) = 1. The remaining number sequences specify the prime factorisations for two ideals I0 and I1. These numbers are such that:

  • b^d(f0)*f0(a/b) = I0 mod p
  • and b^d(f1)*f1(a/b) = I1 mod p

where d(fi) is the degree of the polynomial fi (i = 0 or 1).
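Given how unforgiving the later phases are about malformed input, it can be worth pre-validating each relation line. The sketch below checks the EBNF above plus the gcd(a, b) = 1 condition; exactly how much whitespace gdlog tolerates is an assumption here:

```python
import re
from math import gcd

# One relation line: "a b; p1 p2 ...; q1 q2 ...;"
REL_LINE = re.compile(r"^\s*(-?\d+)\s+(-?\d+);(\s+-?\d+)+;(\s+-?\d+)+;\s*$")

def valid_relation(line):
    m = REL_LINE.match(line)
    if not m:
        return False
    a, b = int(m.group(1)), int(m.group(2))
    return gcd(a, b) == 1     # the (a, b) pair must be coprime
```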

By exploiting computing clusters (e.g. using 70-odd cores), one can generate sufficiently large sets of relations here within about 1-2 weeks.

Phase 2: Building the Logbase

This phase has parts that are multi-threaded. For computing clusters, the gdlog code really needs to be rewritten to take advantage of openmpi or similar. However, we found that by using the existing code on 48-core clusters, our speeds were tolerable (i.e. 1-2 weeks).

Should errors be thrown at this stage (e.g. Incorrect norm factorization), you'll need to stop the algorithm and deal with them (e.g. by locating the offending entry in gdlog.f and deleting it).

Each line of the fact.bak file (generated by this phase), has the following EBNF format:

<number of entries> (<number> <number>)+ <max int> 0 (<number> <number>)+ <max int> 0

here, <max int> is typically the value 2147483647 and each entry is a pair of integers.

It is essential that each line of fact.bak be of the correct length and contain two <max int> record separators. Failure to ensure this means that the subsequent parsing of this file will generate incorrect results (which are typically reported during the parsing of fsys.bak!).
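A rough pre-parse check along those lines (assuming fields are whitespace-separated and that no genuine data value collides with the 2147483647 separator) might be:

```python
MAX_INT = 2147483647   # the record separator value reported above

def valid_fact_line(line):
    fields = [int(f) for f in line.split()]
    seps = [i for i, f in enumerate(fields) if f == MAX_INT]
    # exactly two "<max int> 0" separators, the second closing the line
    if len(seps) != 2 or seps[1] + 2 != len(fields):
        return False
    if fields[seps[0] + 1] != 0 or fields[seps[1] + 1] != 0:
        return False
    # entries in each group come in (integer, integer) pairs;
    # the leading field is the entry count (not cross-checked here)
    first = fields[1:seps[0]]
    second = fields[seps[0] + 2:seps[1]]
    return len(first) % 2 == 0 and len(second) % 2 == 0
```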

The fsys.bak file is such that:

  • the first line is the value m (see phase 1)
  • the second line has the number of relations followed by the distinct number of ideals present within the file
  • and all remaining lines match the EBNF <number of entries> (<number> <number>)+ - these lines are generated from correspondingly indexed lines in fact.bak.
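A matching consistency check on fsys.bak (assuming the second line is exactly "<relations> <distinct ideals>") could be:

```python
def check_fsys_header(lines):
    # lines: fsys.bak split into lines; line 1 is m, line 2 declares the
    # relation and distinct ideal counts, and each remaining line is one
    # relation row generated from the corresponding fact.bak line.
    n_relations, n_ideals = (int(f) for f in lines[1].split())
    return n_relations == len(lines) - 2
```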

The following python code is used to generate these files and the resulting logbase:

def logbase(cpus):
    cmd = "gdlg_sys -d3 -useBackup -useGauss -th%d -if gdlog.f -of gdlog.logbase gdlog" % cpus
    return subprocess.Popen(cmd.split()).wait()

The generation of the wnSeq.bak file (produced by the Block Wiedemann algorithm) can be quite lengthy (on computing clusters, our jobs had to operate in 48 hour windows). Fortunately, the code generating wnSeq.bak is robust enough to restart correctly following a job termination. The generation of this file, and the checking of its solution, appear to be the computational bottlenecks in phase 2.
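Since the wnSeq.bak code restarts cleanly, one can drive phase 2 through fixed wall-clock windows with a loop like the sketch below (run_in_windows is a made-up helper name; the window length is whatever your scheduler allows):

```python
import subprocess

def run_in_windows(cmd, window_secs, max_windows):
    # Rerun cmd until it exits on its own, killing it at each window
    # boundary; gdlg_sys picks up from wnSeq.bak after each restart.
    for _ in range(max_windows):
        try:
            return subprocess.run(cmd, timeout=window_secs).returncode
        except subprocess.TimeoutExpired:
            continue          # window expired; restart in the next one
    return -1                 # never finished within max_windows windows
```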

Phase 3: Calculating a Discrete Logarithm

Having generated our logbase, we may at long last attempt to perform a discrete logarithm calculation (this stage of the algorithm appears to be single-threaded and took at most 2 days for our challenge scenario). If we have not generated a sufficiently large collection of relations in phase 1, then this phase may well fail, forcing us to generate more relations!

Before the logarithm is actually calculated, a collection of virtual logarithms are first validated - whilst this check may be skipped, it doesn't take too long.

It appears that this phase of the algorithm often terminates with an incorrect result. To cope with this, we run this phase multiple times, each time multiplying the target of the discrete logarithm by a known skew value g^k.

The following python code will hopefully(?) deliver your results (this code could obviously exploit parallelism across the attempts within multiple_dlog!):

from random import randint

def dlog(t):
    cmd = "gdlg_dlog -t%d -if gdlog.logbase -of%d gdlog" % (t, t)
    return subprocess.Popen(cmd.split()).wait()

def multiple_dlog(t, g, p, n):
    for _ in range(n):
        k = randint(0, p - 2)
        alt_t = (pow(g, k, p) * t) % p   # pow(g, k, p) replaces Sage's power_mod
        print("Skew value is k = %d (subtract k, mod p-1, from a *valid* result to obtain log_g(t))" % k)
        if dlog(alt_t) == 0:
            return 0
    return 1
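Whichever attempt produces an answer, it is cheap to verify it directly and, for skewed runs, to undo the skew: log_g(alt_t) = log_g(t) + k mod (p-1), so subtract k. A sketch with tiny illustrative numbers:

```python
def unskew(x_alt, k, p):
    # dlog of alt_t = g^k * t is log_g(t) + k (mod p - 1); undo the skew.
    return (x_alt - k) % (p - 1)

def check_dlog(g, x, t, p):
    # A result is only valid if g^x really equals t mod p.
    return pow(g, x, p) == t

# Tiny example: p = 23, g = 5, t = 5^7 mod 23 = 17.
p, g, t = 23, 5, 17
k = 3
alt_t = (pow(g, k, p) * t) % p       # skewed target, as in multiple_dlog
x_alt = 10                           # log_5(alt_t): here 5^10 mod 23 = 9 = alt_t
assert check_dlog(g, x_alt, alt_t, p)
assert check_dlog(g, unskew(x_alt, k, p), t, p)
```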