"I choose to code it in BASH.... I choose to code it in BASH and other things in one week, not because it is easy, but because it is HARD!!!"


httpd2.bash is a simple, configurable web server written in bash.

It's a somewhat less minimalist (and somewhat more insecure) implementation than httpd.bash, aiming to mimic only the essential features of a full-blown HTTP server.

A rudimentary IP Black List tool used to be bundled, but it was moved into its own project.


Requirements

  1. bash; any recent version should work
  2. ncat, socat, or netcat to handle the underlying sockets
  3. MiniUPnP client to punch holes in your NAT router
  4. A (not so) healthy dose of insanity :-)
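A quick way to check the tools above is a loop like the following. This is only a sketch: the tool names below are assumptions (upnpc is the usual MiniUPnP client binary), and only one of ncat/socat/netcat is actually needed.

```shell
#!/bin/bash
# Sketch: report which of the tools listed above are missing.
# Tool names are assumptions; only one of ncat/socat/nc is needed.
check_deps() {
  local tool
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || printf 'missing: %s\n' "$tool"
  done
}

check_deps bash ncat upnpc
```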

Getting started

  1. Running httpd.bash for the first time will generate a default configuration file, httpd.conf

  2. Review httpd.conf and configure it as you want.

  3. Optionally, edit the second-to-last line of the script to replace ncat with your favorite networking utility. Possible use cases are (note to self - review these statements):

    1. socat TCP4-LISTEN:${1:-3000},reuseaddr,fork,max-children=1 EXEC:'/bin/bash -c run'
      • Serves ONE client at a time
      • No concurrency
      • No multiple fetches from the HTTP client
      • Can be DDoS'd, but the server should withstand the abuse without crashing.
    2. mkfifo /tmp/httpd.bash.pipe && nc -l -p ${1:-3000} < /tmp/httpd.bash.pipe | /bin/bash -c run > /tmp/httpd.bash.pipe
      • Serves ONE file for one client
      • No concurrency
      • No multiple fetches from the HTTP client
      • Some netcat implementations have the "-e" (exec) option, which simplifies the command line. See the ncat example.
    3. socat TCP4-LISTEN:${1:-3000},reuseaddr,fork EXEC:'/bin/bash -c run'
      • Serves MANY files for many clients
      • Concurrency
      • Multiple fetches from the HTTP client
      • Vulnerable to process bombing
      • Can be DDoS'd, but the server should withstand some abuse before crashing.
    4. ncat -v -lk -p ${1:-3000} -e '/bin/bash -c run'
      • Serves MANY files for many clients
      • Concurrency
      • Multiple fetches from the HTTP client
      • Can be DDoS'd, but the server should withstand some abuse before crashing.
  4. Run httpd.bash.

  5. Optionally (and not on a production machine), soft-link the ipban crontab job into your daily crontab.
    1. sudo iptables -L | less is a handy way to audit the iptables rules.
    2. cat /var/log/ipban.log to quickly audit which hosts (by name, when possible) are currently being blacklisted
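To make step 3 concrete: every one of the launch commands above just wires a client socket to the stdin/stdout of a bash process. The following toy handler is a hypothetical stand-in for the real `run` entry point (httpd2.bash's actual handler is far more elaborate); it only illustrates the stdin-to-stdout contract those commands rely on.

```shell
#!/bin/bash
# Toy request handler: reads one request line from the socket (stdin)
# and writes an HTTP/1.0 response to the socket (stdout).
# NOT httpd2.bash's real `run` function - just an illustration.
handle_request() {
  local method uri version body
  read -r method uri version
  if [ "$method" != GET ]; then
    printf 'HTTP/1.0 400 Bad Request\r\n\r\n'
    return
  fi
  body='Hello from bash'
  printf 'HTTP/1.0 200 OK\r\n'
  printf 'Content-Type: text/plain\r\n'
  printf 'Content-Length: %s\r\n\r\n' "${#body}"
  printf '%s' "$body"
}

# Demo: feed it a request line directly, as the networking tool would.
printf 'GET / HTTP/1.0\r\n' | handle_request
```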


Features

  1. Serves text, HTML, and image files
  2. Shows directory listings
  3. Allows for configuration based on the client-specified URI
  4. Works behind NAT
  5. Provides a rudimentary automated block list (renewed daily)
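As an illustration of feature 2, a directory listing can be produced in pure bash along these lines. This is a sketch of the general idea, not the actual httpd2.bash code.

```shell
#!/bin/bash
# Sketch: emit a minimal HTML directory listing for "$1".
list_dir() {
  local dir=$1 entry name
  printf '<html><body><h1>Index of %s</h1><ul>\n' "$dir"
  for entry in "$dir"/*; do
    [ -e "$entry" ] || continue   # skip the literal glob on empty dirs
    name=${entry##*/}
    printf '<li><a href="%s">%s</a></li>\n' "$name" "$name"
  done
  printf '</ul></body></html>\n'
}

list_dir /tmp
```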


Limitations

  1. Does not support authentication
  2. Does not strictly adhere to the HTTP spec
  3. Only GET is supported
  4. No virtual host support
  5. No Range support; the server states this using Accepted-Ranges: none when responding to binary requests
  6. The tool can kick you out of your own site if, for some unlucky reason, it lists your IP as "Interesting"
  7. No logging facilities
    • ./httpd.bash 2>&1 | tee -i >(gzip -c -9 > /var/log/www/log.gz) solves that - but only works on the command line (don't ask, still figuring it out)


Security

  1. Only rudimentary input handling. One should not run this on a public machine - unless they are nuts like me. :-)
  2. A crontab job, ipban.crontab.bash, is provided to mitigate the security issues.
    1. It builds a daily block list from fetched data to feed iptables.
    2. IPs known to attack over 100 times in known history (of the site! =P ) or in the last month are blocked.
    3. Data is renewed daily.
    4. DO NOT USE on a production machine. The reports are catching hits from crawlers from Google and others (I need to figure out why they consistently behave in a way that the Whistle Blower detects as an interesting request).
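The general shape of such a job - turning a list of offending IPs into iptables rules - can be sketched as follows. The one-IP-per-line input format and the INPUT chain are assumptions here, not necessarily what ipban.crontab.bash actually uses.

```shell
#!/bin/bash
# Sketch: read offending IPs (one per line) on stdin and print the
# iptables commands that would block them. Piping the output through
# `sh` (as root) would apply the rules; here we only generate them.
build_rules() {
  local ip
  while read -r ip; do
    [ -n "$ip" ] || continue
    printf 'iptables -A INPUT -s %s -j DROP\n' "$ip"
  done
}

printf '203.0.113.7\n198.51.100.9\n' | build_rules
```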

HTTP protocol support

  • 404: Returned when a directory or file doesn't exist
  • 403: Returned when a directory is not listable, or a file is not readable
  • 400: Returned when the first word of the first line is not GET
  • 200: Returned with valid content
    • Content-Type: httpd2.bash uses /usr/bin/file to determine the MIME type to send to the browser
  • 1.0: The server doesn't support Host: headers or other HTTP/1.1 features - it barely supports HTTP/1.0!
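The Content-Type detection mentioned above boils down to a call like the one below. This is a sketch of the idea; the exact flags httpd2.bash passes to file may differ.

```shell
#!/bin/bash
# Sketch: ask /usr/bin/file for the MIME type of a path, as used for
# the Content-Type header of 200 responses.
mime_of() {
  file -b --mime-type "$1"
}

# Demo on a throwaway text file.
demo=$(mktemp)
printf 'hello\n' > "$demo"
mime_of "$demo"
rm -f "$demo"
```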

Help is needed (see below). For the bugs that are my fault :-) , patches and pull requests are highly appreciated. Please check if the problem also happens on origin - if yes, submit the fix there and I will merge downstream.


Wish list

  • Reverse proxying
  • Caching
  • Better filtering of data
    • White lists, to prevent you from being kicked out of your own server by accident. ;-)
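The wished-for white list could be as simple as a check like this before an IP is ever blocked. This is hypothetical: the file path and the one-IP-per-line format are assumptions, not an existing ipban feature.

```shell
#!/bin/bash
# Sketch: skip blocking any IP that appears in a whitelist file.
# The WHITELIST path and its format (one IP per line) are assumed.
is_whitelisted() {
  local ip=$1 list=${WHITELIST:-/etc/ipban.whitelist}
  [ -r "$list" ] && grep -qxF "$ip" "$list"
}

# A blocking job would then guard each rule with:
#   is_whitelisted "$ip" && continue   # never block a whitelisted IP
```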

Known Issues

Yeah, that part of the document that everybody hates to write - and hates more to read. =/

  • The output stream was being closed immediately after the bash serving session ended, and some bigger files were truncated in the process. A hack (the EXIT function) was made in an attempt to overcome the problem, but a real fix is still RiP. (Research In Progress)
    • Interestingly enough, when a desktop browser made the requests, the RPi used to answer nicely; the problem only happened when the requesting side was an Android or iOS browser.
    • Yet more interesting: while debugging the problem on a VPS, it started to happen there too, on 99% of the requests! (one or another got through).
  • Some browsers don't honor the Accepted-Ranges: none header and keep trying to restart an aborted fetch. The server will just respond with the full file instead.
  • Some legit crawlers (most of them from Google) are being listed.
    • This is due to the bots getting lots and lots of Page Not Founds, to the point that the hit count classifies them as an "Interesting Issue".
    • Not sure how to fix that (assuming I want to fix it).
  • When the URL points to a directory (instead of pointing to an index.html), the relative pathnames in the link tags are calculated using the directory's parent, not the directory itself.
    • It's a browser issue!
    • Confirmed on:
      • Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36
      • Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:50.0) Gecko/20100101 Firefox/50.0
    • Safari does it right.
  • Query strings are parsed but then just discarded.
    • Done because Facebook started to add a "fref" query string everywhere when someone clicks a link there.
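The parse-then-discard behavior amounts to a one-liner in bash parameter expansion, sketched below (not the literal httpd2.bash code).

```shell
#!/bin/bash
# Sketch: drop everything from the first '?' onward in a request URI.
strip_query() {
  printf '%s' "${1%%\?*}"
}

strip_query '/page.html?fref=abc'   # -> /page.html
```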

Test Beds

  • A Raspberry Pi server. Up occasionally (I try to keep it up most of the time); drop me a line if you want to see it and it is down.
  • A test bed on a VPS. Up on request. Ask very nicely. :-)
  • The same content served by NGINX (for troubleshooting).


"If anyone installs that anywhere, they might meet a gruesome end with a rusty fork" --- BasHTTPd (this fork's origin) creator, maintainer

"Excellent tool for making raspberry juice!" --- httpd2.bash (this one) creator, maintainer

Known Clients

Forked from