This is an as-yet-unnamed Usenet article archiver.

Given a newsfeed (from a suck-like tool), it lists all the articles, sorts them
by thread and date, and then produces an HTML website.
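The threading step can be sketched as follows. This is a simplified illustration,
not the tool's actual implementation: it groups articles by the root Message-ID
taken from the References header (real news threading, e.g. the jwz algorithm,
is more involved) and sorts each thread by the Date header. The function name
`thread_articles` is hypothetical.

```python
import email
import email.utils
from collections import defaultdict

def thread_articles(raw_articles):
    """Group raw RFC 5536 articles into threads, each sorted by date.

    Simplification: a thread is keyed by the first Message-ID in the
    References header (assumed to be the thread root), falling back to
    the article's own Message-ID for thread-starting posts.
    """
    threads = defaultdict(list)
    for raw in raw_articles:
        msg = email.message_from_string(raw)
        refs = msg.get('References', '').split()
        root = refs[0] if refs else msg['Message-ID']
        threads[root].append(msg)
    # Order each thread chronologically by the Date header.
    for msgs in threads.values():
        msgs.sort(key=lambda m: email.utils.parsedate_to_datetime(m['Date']))
    return dict(threads)
```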

Usage example (shell commands):

    # Create the articles repository and the thread pages directory
    mkdir article thread

    # Get articles from an NNTP server
    echo 'my.newsgroup 0' > sucknewsrc
    suck news.server.domain -c -n -H -K -bi incoming_news

    # Import the suck articles, build a threads index and output the web-site
    (cd article ; ../ < incoming_news)
    ./ article > threads.pickle
    ./build_site article < threads.pickle
    # If everything went well, "index.html" points to "thread/*" thread pages.
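The final step above writes an "index.html" that links to the per-thread pages.
A minimal sketch of that output stage, assuming (hypothetically) that the thread
data has been reduced to a list of (page filename, subject) pairs; the real
layout of threads.pickle may differ:

```python
import html

def write_index(threads, out_path='index.html'):
    """Write an index.html linking each thread page under thread/.

    `threads` is assumed to be a list of (page_filename, subject)
    pairs, one per thread, already in the desired display order.
    """
    with open(out_path, 'w') as out:
        out.write('<html><body><ul>\n')
        for page, subject in threads:
            # Escape both the href and the link text for HTML safety.
            out.write('<li><a href="thread/%s">%s</a></li>\n'
                      % (html.escape(page, quote=True), html.escape(subject)))
        out.write('</ul></body></html>\n')
```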