PlugServer
Nick Daly

Overview

Configuring A Plug-Server

The FreedomBox project is awesome. Unfortunately, it's not finished yet. So, I figured, while it's in development, I'll just set up my own plug-computer as a pale imitation of what the FreedomBox will be, and change over to a real FreedomBox when it's released. Important features I'll miss include: zero configuration, networked and distributed file systems, secure communication with other FreedomBoxes, an integrated user interface, a VOIP server, a wireless mesh network, and so on. It will not provide the power and flexibility of the FreedomBox project (because it is not of the FreedomBox project), but it's a start. It fulfills the criteria that are most important to me in Eben Moglen's proposal.

This document will try to do two things: first, it will detail the steps I took to set up a personal plug-computer (and any wisdom I pick up along the way). Second, it will try to serve as a guide for the moderately experienced Linux user to also set up a plug-computer. If I do a good enough job documenting the steps I took to set up the system, anybody else should be able to do the same with, hopefully, fewer pitfalls along the way.

First, the limiting factor, the cost: you should be able to set this up for around $125 - $350 up front (computer, SD card, and domain name registration), plus a $10 - $50 yearly recurring cost (domain name renewal and power). Getting a certificate from a CA to allow HTTPS/SSL encryption will run anywhere from $0 - $1,500, with annual or semi-annual renewal.

If you'd like to get in touch, just drop me an email at my gmail.com address or Jabber.org XMPP account, "Nick.M.Daly". If you'd like to keep it private, feel free to encrypt it to my public key (2011.0827 - 2014.0826):

GPG: D95C 3204 2EE5 4FFD B25E  C348 9F27 33F4 0928 D23A


Why?

This is an important question to answer; without a good answer, I'm just wasting my time. Eben Moglen and others make the case for hosting your own server incredibly well. The reasons basically boil down to the fact that running my own server lets me guarantee my ownership and control of my data. That control, and the network decentralization that running a server encourages, work together to increase my privacy and protect my Freedom of Speech. I don't have to share information I don't want to share, and the cost for anybody to impair my privacy, have cause to torture me, crunch me as a number, or spy on me increases massively, especially as more and more people create their own federated clouds.

Civil rights concerns aside, I've always wanted to run my own server anyway; it looked like a fun technical challenge, and it feels like it's time. Also, I'd really hate to cross the CFAA on somebody else's site just for using a pseudonym. Finally, I've stopped trusting (and using) Facebook and Google(+), but I'd still like to be part of this great (social) network known as The Internet, so I guess I don't have much of a choice.

If you get bored, go play Zork.

References

The Wall Street Journal's "What they Know" Series:
 http://online.wsj.com/public/page/what-they-know-digital-privacy.html
The LA Times's "Privacy" Series:
 http://latimesblogs.latimes.com/technology/privacy/

General Concepts

You're running an Internet-accessible server. Never forget that. As long as it's running, that server is inherently vulnerable. So, start with a plan and know what you're doing.

A few good ideas:

Know your system

This is a plug-server. Thus, both RAM and drive-space are at a premium, but it'll stay on with minimal power-draw over long periods of time. I'll make sure to buy a server that can use SD-cards to expand the available storage, so RAM and processing power are the only real limiting factors.

It'll be a remotely-administered, headless server. So, it'll need to run SSH, but we don't need to worry about the standard desktop environment. Emacs with TRAMP handles this situation beautifully. Just:

C-x C-f /ssh:(name)@(server):(directory)

Watch out, though: TRAMP's rapid connections can trigger SSH brute-force protections. After you lock yourself out once or twice, you might want to stop using it.

Minimize the Attack Surface

Every running service on your machine could be targeted. Whether or not it's currently vulnerable to known or unknown attacks is a different story. Because every running service carries some amount of risk, you'll want to run as few services as necessary. "Necessary" here means: "enough to comfortably get the job done."

Low-Value Target

If your server isn't worth a second look, fewer people will take one. You want to make your server worth as little as possible to anyone who's not you (or your clients). Of course, it's a computer, so it's still capable of being a spam bot or DDOS zombie, but a low-capacity plug-server is less useful to an attacker after it's owned.

It's a special-purpose server, so it should be used as such. If you do your banking on it, it'll be worth more to an attacker, and you'll be a lot more screwed afterward.

Also, don't reuse user names or passwords between your primary system and your plug-computer or its services.

Least Privilege

Follow basic security practices, like using non-root users for daily use and separate user accounts for each task. Basically, if you're going to give some part of the system the ability to do a task, give it the ability to do only that task. The application of this rule is complex only in its granularity.

TODO:Move this to another file and split it into "services", "connection methods". Do the same for the setup script.

Setting Up

I want to set up the following services:

The following services are optional niceties:

These requirements contain twelve attackable surfaces (excluding the SSL certificate configuration), each with its own misconfigurable configuration. Yes, server administration is an awesome nightmare.

I'll be using Debian Stable for this exercise; it's the most stable and secure distribution I know of. It has a good record with both exploit patching and stability (I've had many times more hardware failures in the last 5 years than software ones).

Now, we'll move on to actually preparing the plug server and its prerequisites.

Crosscutting Concerns

These are issues that appear multiple times, across services and across sections of the system configuration. I'll keep these in mind during the system setup and follow their guidance whenever possible.

  • Prefer TLS over SSL.
  • Research Debian's and the upstream's bugtrackers for exploitable holes before choosing between competing services.
  • Send activity logs and anomaly warnings to email accounts not hosted on the plug-server.

Get a Domain Name

Depending on your desired TLD, you have several options. Different hosting companies are capable of serving different TLDs. DynDNS, 1&1, or GoDaddy can serve the standard .com, .net, .org, .biz, etc., TLDs. They all have pretty good plans and aren't terribly expensive. You seem to get what you pay for, though at $30 or less per year it's not a terribly big deal either way. You'll probably have to buy SMTP forwarding (SmartHost services) separately, though.

However, there's another breed of TLD that the common domain name hosts don't serve: .bit domains. Those are reserved via Namecoin, a censorship-resistant cryptographic (Bitcoin-based) system. They currently run about $3.50 to register and have no recurring fee. Of course, they're only accessible via Namecoin-capable addressing, but hopefully that'll become less of a big deal in the future.

A similar approach, set up in a slightly different way, is the OpenNIC project. This project uses alternate DNS servers to host alternate TLDs and is easier than Namecoin to set up. As a client, you merely need to add a few nameservers. As a server, you only need to register a domain, which is free. Like Namecoin, though, that name will be limited: it's available only to people who also use the OpenNIC nameservers.

If you don't have a publicly accessible IP address, you still have options. First, if you'd like to be accessible through the general internet, there's PageKite, which works absolutely beautifully. My ISP hosts the most hostile network I've ever seen (blocking ports, disrupting traffic, disabling port forwarding and DMZ functionality at the router [which the customer is not given the admin password for], etc., ad infinitum). However, PageKite flies quite happily through it all and can be set up in about 10 minutes.

Finally, if Namecoin doesn't cut it for you, Tor hidden services are always an option. You'll be accessible only through the Tor network, or via a proxy like Tor2Web. For an end-user, it's not much different from the Namecoin option, though the backend is significantly different. I don't cover Tor hidden services here... yet.

Using more than one of these methods to make your services accessible can spell trouble if the service is domain-name specific (most are).

Namecoin

There are two basic options: register the name yourself through namecoind, or pay a registrar to do it for you.

The registrar's a lot easier, but it's currently about 5x more expensive. Self-registration currently costs ~11 NMC, which is about 0.35 BTC, or about 3.50 USD. The registrar comes in at 1.5 BTC, or 15 USD, though it requires no namecoins.

Interestingly, there are a couple conversions required to get from $ -> Namecoin: $ -> Dwolla -> Bitcoin -> Namecoin. A Dwolla.com account (with static $0.25 fee) is required to get funds to exchange for Bitcoins at a Bitcoin exchange.

Of course, that's kind of an absurd conversion. I considered taking the simpler route with BitMarket.eu (requires manual verification of other established e-commerce accounts) or Bitcoin-OTC, which is an even less centralized marketplace, tying Bitcoin wallets to PGP keys for name persistence with trading done over IRC. However I decided against this approach since breaking into the Bitcoin-OTC economy (it is a WOT, given how it's used), seems nearly as big a project as the server itself.

I've actually gone ahead with the Dwolla approach - the WOT seems impossible to break into, and Dwolla requires naught but time. It takes 1 - 3 days to verify your Dwolla account before you're allowed to transfer funds into it. From there, it takes 1 - 7 days to complete the transfer. Once your Dwolla account is funded, Bitcoin purchase through an exchange is instantaneous, though it takes the network a couple hours to verify the transaction. Finally, it'll take a few hours more to complete another transaction, converting the Bitcoins into Namecoins.

Once Dwolla actually receives and posts your funds, the process is pretty quick. It took them over a week to actually receive the $20 I sent them, but it took only minutes to send that to Mt Gox and convert it to Bitcoins. From there, I sent the coins from Mt Gox to my Bitcoin account. Then, I opened an account at the Namecoin Exchange and sent that account enough coinage to buy one domain name. Keep in mind, the prices on the exchange are quoted per coin, not for the whole lot: I expected to get a great deal from the guy who was selling 9,000 NMC for 0.03 BTC, but I was sorely mistaken.

From there, I configured my ~/.namecoin/bitcoin.conf file and then started the namecoin daemon:

# local RPC credentials for namecoind; pick your own values
rpcuser=yourName
rpcpassword=yourPassword
# run namecoind in the background
daemon=1

Once namecoind getinfo showed me the current block number, I exchanged Bitcoins for about 10 Namecoins and sent them to my Namecoin account. Don't forget the "Approve Transaction" button hiding at the very bottom of the withdraw page. They didn't send me my coins for a dozen hours because I didn't see (or click) that button.

Now, note the current block number. You're finally able to register a name:

$ namecoind name_new d/example
[
    "(very very very looooooooooong value)",
    "(short value)"
]

Wait 12 blocks. This waiting is interminable! At the current rate, it took about 2 days for a dozen blocks. Of course, it's got nothing on the waiting after the next command that actually activates the domain name:

$ namecoind name_firstupdate d/example "${shortValue}" "${longValue}" "{\"map\": {\"\": \"${yourIpAddress}\"}}"

To maintain your domain name going forward, the domain name needs to be re-updated before the specified number of blocks passes and the name expires. Names expire after 36,000 blocks (or 12,000 blocks if they're registered before block 24,000):

$ namecoind name_update d/example "{\"map\": {\"\": \"${yourIpAddress}\"}}"

Amazingly, after giving it a night to propagate, it works!

$ namecoind name_list d/example

Multiple Domain Names

To function correctly, most web-services need to keep an internal record of their domain name, which makes the same services hard to reach through multiple domains. Most software just isn't designed for that.

You're probably best off just picking one domain and sticking with it, or configuring multiple copies of the same service. Of the services I've configured here, some work better over multiple domains than others:

Apache:No changes necessary if you're serving through subfolders (example.com/service). You'll need to add ServerAlias lines to your config files if you're serving on subdomains (service.example.com/).
Trac:No changes necessary.
SSH:No changes necessary.
Tor:No changes necessary.

Some don't work so well or need lots of customizing:

Wordpress:You'll run your blog as a network (Wordpress 3.0 and later).
BitTorrent Tracker:
 You'll need to list multiple domains in the torrent's Announce field so clients know where to find your server. btmakemetafile doesn't seem to support that capability. The tracker you host doesn't care what the server is called.
StatusNet:You'll need to run one StatusNet installation per domain, but they can share the same database. See config.php for details.

Unknown:

  • IMAP
  • SMTP
  • IM
  • SSL

The only service that really seems to work is Trac (the code hosting site), which appears to use relative links for everything, like any good website should.

These problems also apply to Tor's hidden services: you have a different domain there, too.

TODO:Hack the tracker service to allow multiple domain names? I'd be duplicating Apache's VirtualHost functionality. That might be overkill for something the system isn't necessarily built to support.

Get a Plug Computer

Do this via plugcomputer.org or whatever other plug-computer supplier you like. It will run you around $100 - $250. I'll drop $150 on the DreamPlug. The D2Plug (the DreamPlug's successor) seems a bit beyond both my price-range and my feature needs. I don't really need this thing for its video-acceleration capabilities.

Installing Debian

If you're luckier than I, and have access to a physical plug server (and aren't doing all this setup via VM), you'll want to install Debian to an SD card, making the following decisions during setup:

  • Setup partitions. You can make the partitions either:

    Keep in mind though, both options will make it harder to change partitions later. I've set up several partitions on my 16 GB SD-card which will be important when I'm securing the system later:

    Directory     Size
    /             2.0 G
    /var/lib/     512 M
    /usr/sbin/    512 M
    /home         13.0 G

  • Make sure you get your hostname right.

  • Select the following tasks during task selection:

By the time you're done, there should be some 200 packages on your system. Not very bulky so far; this is good.

Automagic Setup

At this point you, dear reader, have two options:

  1. Set this up yourself, following these instructions (recommended).

  2. Run the automatic setup script to set things up exactly as I have them here:

    # apt-get install mercurial
    
    $ hg clone https://bitbucket.org/nickdaly/plugserver
    $ cd plugserver/setup
    $ bash setup.sh
    

With the automatic setup, each of the respective services is hosted from the following locations:

Blog:example.com/blog
Project:example.com/code/(projectname)
Tracker:example.com:6969
IM:example.com
IMAP:example.com
SMTP:example.com
Tor:example.com
Wiki:example.com/(wikiname)

You'll need an actual domain name for most of these services to be meaningful to the outside world.

System Preconfiguration

There's a bit of configuration we need to finish before installing and setting up the services - the default installation isn't quite fit for a plug-server, so we'll make those changes now. These aren't strictly necessary to set up any of the services, but they should help your plug-server live a longer, happier life.

Log Rotation

System logs contain a trove of information and are fantastically useful for keeping track of what's going on in your system both while it's running and after something goes terribly wrong. Unfortunately this same information also fills up your hard-drive and can be used to violate the privacy of your clients. Thus, it's good practice to keep only the minimum amount of information necessary to diagnose problems.

Fortunately, the "logrotate" package does this job very well and is installed by default. I changed a few of the default settings to make it more suitable for my purposes:

daily:The default settings rotate logs once a week (the weekly setting). To keep less data around, I'll rotate daily.
size:The maximum size per logfile before rotation: 10M seems like a good default. That shouldn't be so small that switching logfiles burns out the SD-card, nor so large that it fills up the drive.
rotate:How many old logs to keep. I set mine to 2, down from 4. The current one and the last one are good enough for me. That should be enough to debug with while keeping as little privacy-compromising data as possible.
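Put together, the relevant lines in /etc/logrotate.conf look something like this (a sketch of the settings above, not the package defaults):

# rotate every day instead of every week
daily
# keep only the current log and one old one
rotate 2
# rotate early if a log grows past 10M
size 10M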

To further protect clients' privacy, consider using the Cryptolog Apache plugin that anonymizes the IP addresses stored in your system logs. Unfortunately, I don't know of any related services for other loggers, like the BitTorrent service.

References
  • /usr/share/doc/logrotate

Anomaly Warnings

Now that we've configured the system to keep appropriate activity logs, we might as well do something useful with them. For example, why not have your system email you when something's wrong? Who doesn't like self-aware systems? Importantly, we won't be able to use the emailing functionality until the SMTP Server is configured, but that'll come along soon enough :)

All the settings we need are controlled by the "logcheck" package's /etc/logcheck/logcheck.conf file. If nothing else, you'll want to set the SENDMAILTO variable. Make sure you follow the directions, though: send the email off-site. Most attackers creating logfile anomalies are probably good enough to also remove both the log-files and any logcheck emails delivered locally. Delivering logcheck emails locally assumes your system could be compromised while your email would remain undisturbed, which is a foolish assumption.

Mail it to another email account. Mail it to your friends. Whatever you do with it, make sure it doesn't stay local.
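In /etc/logcheck/logcheck.conf, that's one line (the address is a placeholder, obviously):

SENDMAILTO="plugserver-logs@example.org"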

Intrusion Detection

Because you never want to see your homepage read "Pwned!", you might as well be informed when it happens. As I mentioned earlier, when you run a network-accessible computer, you're vulnerable: it's not if someone cracks your system, it's how long till they do it again. Hopefully, that'll be a while.

You'll want to get your box off the network as fast as possible once this happens. Hopefully your box (or other responsible system) will still be in operation long enough to tell you that it's been compromised. Who watches the watchers, and all that.

A few preventative programs can be prepared by installing the following packages:

  • harden-clients
  • harden-environment
  • harden-tools
  • ntpdate

These install the useful utilities noted in the following sections. Before installing these packages (or at least, before configuring Tripwire) go into single-user mode and disconnect your system from the Internet to keep your Tripwire passwords from prying eyes.
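Once you're offline and in single-user mode, the whole set installs in one go:

# apt-get install harden-clients harden-environment harden-tools ntpdate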

Tripwire

Tripwire checks for modifications to important system files and reports back any changes. This is where the multiple partitions that we set up earlier really come into play. If you put your system into run-level 1 before installing tripwire, you should be able to create both the system and local keys without concern.

Important Settings

Now that Tripwire's prepared, keep /etc/tripwire/twcfg.txt in mind. This contains both the locations of important files and details about your system's email service. Specifically: if you set up TLS or SSL encryption for your email service and disable SMTP port 25, Tripwire won't be able to email you updates.

Tripwire's First Run

To prepare Tripwire for use, you'll need to initialize the database:

# tripwire --init

Now, Cron will mail you periodically with the output of:

# tripwire --check

Make sure Tripwire can actually email you, with:

# tripwire --test --email (your email)

Preserving the Tripwire Database

During the installation, Tripwire warned me that I'd want to make sure important system files couldn't be modified from the host system. To do that, I have two options:

  1. Mount /var/lib/tripwire and /usr/sbin as read-only partitions.
  2. Use networked storage to access these directories on another system.

I don't have another system, so it looks like I'm going with the first option. There's a tradeoff, though: mounting those directories (particularly /usr/sbin) as read-only prevents the system from automatically upgrading installed packages. The directories must be manually remounted read-write before packages can be updated.
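That upgrade dance looks something like this (paths per the partition table above):

# mount -o remount,rw /usr/sbin
# mount -o remount,rw /var/lib
# apt-get update && apt-get upgrade
# mount -o remount,ro /usr/sbin
# mount -o remount,ro /var/lib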

To mount those directories as read-only at boot, add the following to your /etc/init.d/rc.local:

mount -o remount,ro /var/lib
mount -o remount,ro /usr/sbin

You'll probably want to comment out those lines until you've finished configuring your system: otherwise you'll need to remount the directories as read-write each time you install or update packages.

Check Security

From the "checksecurity" package, this lovely little tool runs up to four simple tests and reports any changes in your system between runs. Like logcheck's configuration, you probably want to pick a different email recipient in /etc/checksecurity.conf (link), but that's the only noteworthy change I saw.

Rootkit Check

The rootkit checker, "chkrootkit", is an interesting, if trigger-happy, piece of software. It'll note at least one false-positive when the system is built out (port 4369, from epmd), but it's worth its weight in gold if it actually finds rootkits.

Package Checksum Verifier

"Debsums" verifies the package contents of installed package files against a stored list of MD5 sums. Granted, MD5 has been broken, but it'd still be difficult to fake without changing the package in a functionally noticeable way. No real configuration to speak of, outside of /etc/debsums-ignore (link).

Password Tester

"John" checks for weak passwords by iteratively hashing until it finds an entry already in a password file. It's basically conducting a garden-variety dictionary attack (less neat than a rainbow table attack, but still useful) and will tell you when one of your users has picked an easily crackable password.

Tiger

Tiger emails you periodic, awesome, vulnerability reports. Depending on your threat model, the various warnings may or may not be worth following up on. Nothing's jumped out at me yet, but it's a great way to keep tabs on the system. Many checks are enabled by default (not all, contrary to the documentation), and if you need any explanations, just run:

$ tigexp (checkId)

Make sure you go through the Tiger documentation. It's not as obvious as it looks, and there are significant possibilities for false positives. Customize /etc/tiger/tigerrc and /etc/tiger/cronrc as necessary, changing at least:

  • Tiger_Mail_RCPT

Backups

Using a good backup strategy is crucial to the health of your data. When your server goes boink, either your data is recoverable from somewhere else, or it isn't and it's gone forever.

A good backup strategy tries to answer a few basic questions:

  • How many days of work am I willing to lose?
  • How much manual trouble am I willing to go through to secure my data? For example, backing up to a regularly-removed external drive is more secure (your backups aren't at risk when your server's cracked), but it requires physical access to the plug-server.
  • Do I care more about availability or privacy of my data? An encrypted backup is just about useless to an attacker without the key. An encrypted backup is just about useless to me after I lose the key.

The normal choice for backups is usually the rsync tool, used to backup important directories, like this:

$ rsync -av /home /media/backup
$ rsync -av /etc /media/backup

If you back up to a FAT-formatted backup device, you'll run into trouble with file names and permissions the filesystem doesn't support; try using an EXT-formatted partition instead.

I'm tempted to use distributed revision control tools to keep track of my backups, but I won't, because that keeps all copies of the previous history on the local device. The full repository backup generally takes about as much space as the uncompressed files, essentially cutting your available drive-space by half. That would be a fine post-backup solution, however: use rsync to get changes to a backup device and use a DVCS, like Mercurial, to take snapshots of those changes. Unfortunately, drive-space is just too dear on a 13 GB partition to have an ever-decreasing amount of space available.
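If drive-space weren't an issue, a sketch of that post-backup approach (run against the backup device; paths are assumptions) would be:

$ rsync -av /home /etc /media/backup
$ cd /media/backup
$ hg init             # first run only
$ hg addremove -q     # track new files, forget deleted ones
$ hg commit -m "snapshot $(date +%F)"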

A simple backup script is an effective solution. One exists in root/usr/local/bin/backup and is installed automatically with the "anomaly" service. It backs up specified directories and MySQL databases. It can run unattended from cron only if you either skip the databases or store the MySQL password in the script. The default setup doesn't back up databases, because I don't know your MySQL root password.

Setting Up Services

I've come up with a few basic tips for configuring services:

RTFM:Overall, Debian has fantastic documentation for its software, found both in man-pages and the /usr/share/doc directory. Those are always the first places I look when considering setting up a new service.
Bugtrackers:This comes back to knowing your system. Before enabling a new piece of functionality, review both the Debian and upstream bugtrackers for high priority bugs that might affect you and any workarounds you'll need to implement.
One at a time:Don't try to set up more than one service at a time. Get one service working, and working well, before you move onto the next. That'll guarantee you actually understand what you've set up and haven't missed any important holes.

Keeping these tips in mind will help simplify the process of setting up your services.

Firewall

So, what services do we have running and what ports do we need open? While setting up the system, it's probably wisest to close all the open ports and open them selectively, only after enabling the relevant services. Helpfully, some of the services are served through the web service and can reuse that configuration. Based on the services I'm planning to use, I'll need to open the following ports:

Service Name       Delivery  TCP In            UDP In  TCP Out                    UDP Out
Web Server         (self)    80                -       80                         -
Blog Server        Web       80                -       80                         -
Project Server     Web       80                -       80                         -
SSH Server         (self)    22                -       22                         -
SMTP (out) Server  (self)    25                -       25                         -
IMAP (in) Server   (self)    143               -       143                        -
IM Server          (self)    5222, 5269, 7777  3478    5222, 5269, 7777           3478
Torrent Tracker    (self)    6969              -       6969                       -
Torrent Client     (self)    6881-6889         -       6881-6889                  -
Anonymizer         (self)    9001, 9030, 9050  -       80, 443, 9001, 9030, 9050  -
Proxy Server       (self)    8123              -       8123                       -

Currently, netstat -l (in combination with -a, -p, and -n, or nmap ${ipAddress}) shows me something a little disturbing: these aren't the only active ports. It looks like we'll have to remove some unused services.

I've configured the firewall using the arno-iptables-firewall package. It's dead simple: just specify the interfaces to control and the port ranges to open per protocol, and you're done. This won't be as helpful for desktop systems, because it doesn't seem to support multiple profiles, meaning you'd have to reconfigure the firewall every time you add a new service. That doesn't sound fun on a desktop system, where services flicker on and off like candles in the night, but it's fine for fairly static servers.

The best part about Arno's firewall is that ports have to be whitelisted. This means that if I forget to configure a few services, they'll be unavailable, which is better than being vulnerable.
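For reference, the answers to the debconf questions land in /etc/arno-iptables-firewall/firewall.conf, which you can also edit by hand. A sketch covering some of the port table above (the interface name is an assumption; extend the lists the same way for the remaining ports):

EXT_IF="eth0"
OPEN_TCP="22,25,80,143,5222,5269,6969,7777"
OPEN_UDP="3478"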

Firewall Plugins

You can enable or write any number of plugins for Arno's firewall. If nothing else, make sure you enable the "ssh-brute-force-protection.conf" plugin and make sure all your passwords are at least half a year long.

Removing Unused Services

After taking a peek at netstat (in the firewall section), I realized that there were services running that shouldn't've been. The default Debian install does install a few things (in its 200-ish packages!) that we don't need. If I'm hoping to run a secure system, I'll need to minimize the attack surface.

First to go is telnet. I mean, really. Telnet? Just... No. Even if I did want to play insecure MUDs, I wouldn't want to play them through the plugserver. I'll play my MUDs over SSH, thank you very much :)

Next to go is portmap, which also removes NFS. It may be security through obscurity, but I don't feel like running a service that advertises other services, and I'm certainly not looking for a networked file system at this point. Both of these are pretty quick to remove, by uninstalling the telnet and portmap packages.

Third to disappear is the POP3 listener. Who even runs POP3 anymore? Just uninstall the "qpopper" package, and feel free to comment out the pop-3 line in /etc/inetd.conf. Now, restart the inetd server with /etc/init.d/openbsd-inetd restart.
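All three removals in one pass (the telnet daemon typically lives in the telnetd package; "telnet" alone is just the client):

# apt-get remove telnetd portmap qpopper
# /etc/init.d/openbsd-inetd restart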

At this point, the only other service remaining is "bootpc", running on UDP port 68. That's part of the standard DHCP IP configuration protocol, which your server uses to request or renew its IP address on the local network. We'll want to keep that one around.

Now though, according to netstat and nmap, the only services accepting connections are those I explicitly approve of.

Automatic Updates

As with everything, Debian makes this amazingly simple. Just copy the appropriate APT::Periodic lines documented in /etc/cron.daily/apt into /etc/apt/apt.conf.d/02periodic and your system will keep itself updated. I won't be able to use the "unattended-upgrades" package because several important directories are normally mounted read-only after the Tripwire installation.
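A sketch of /etc/apt/apt.conf.d/02periodic, using the standard APT::Periodic knobs (adjust the values to taste):

// check and download updates daily, clean the package cache weekly
APT::Periodic::Enable "1";
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";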

I also swapped cron for fcron because the server (especially during testing) won't be running all the time, and fcron catches up on jobs missed while the machine was off.

Then I saved the config file, and I was done.

SSH Server

That part's already done. You configured that when you selected "SSH Server" in tasksel. To test, you should be able to run, on the commandline:

ssh (user)@(server)

It should connect happily, assuming your host has access to your server.

If you're as paranoid as I am, you'll want to prevent root from logging in at all (requiring attackers to break two passwords to gain useful access to your system). Edit your /etc/ssh/sshd_config and make sure you include:

PermitRootLogin no

Web Server

First, read through the documentation so you know what you're doing: /usr/share/doc/apache2/README.Debian.gz and (after you've enabled the service) http://localhost/manual.

I'm running Apache with a default configuration. Apache's initial configuration, in and of itself, seems secure. After reviewing Debian's and Apache's bugtrackers for each of the features I'll be using, the only bug of note was an unconfirmed memory-nomming bug that might be normal operation. The rest haven't been followed up on in years.

This looks good to me. And if it's not, well, half the Internet is screwed anyway. My low-value target is way down on the list.

I'll have to enable content negotiation at some point; it looks awesome.

I'll also want to customize the site's landing page, making sure it blends in with the rest of the site's theme. I'll also want to make sure it looks good, because otherwise, what the hell am I doing running my own server?

Simple Errors

Make sure all the files in /var/www (or whatever directory Apache serves) are owned by the www-data group and are group-readable. Otherwise, Apache can't serve them correctly and merely returns a blank page. That's notoriously annoying to debug.
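A quick fix for the whole tree:

# chgrp -R www-data /var/www
# chmod -R g+r /var/www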

To avoid "Can't determine server name" errors when starting Apache, before I purchase the domain-name, I'll need to manually specify the server name in /etc/apache2/httpd.conf:

ServerName example.com

Blog Server

I'm using WordPress as my blogging tool. Originally, I was considering either PyBlosxom or WordPress, but, when Googling both together, almost all the results were about transitioning from PB to WP for several reasons, including:

  • WP has better comment system spam-protection.
  • Management ease (sure, WP isn't files, but the meta-data management seems simpler).
  • Permalinks
  • More RAM efficient (PyBlosxom apparently starts a new Python process for every request).

These outweigh the advantages that I see in PyBlosxom: that it's written in Python (and thus easy to extend) and simple to post to (create a file).

Installing WordPress

To start installing WP, I followed these blogs. They're a bit outdated, and the process is easier now than it was then.

First, install WP and MySQL:

# apt-get install wordpress mysql-server

Configuring WordPress

Make sure WP knows how to communicate with its MySQL database:

# bash /usr/share/doc/wordpress/examples/setup-mysql -n (username) (example.com)

Now, in your /etc/wordpress directory, review the "config-(something).php" file you created and make sure it's in the www-data group. If the Apache user can't read the config file, it merely serves a blank page. Nice and annoying to debug.

Fortunately, I only need to configure a single blog on this box, so the rest of the configuration is pretty simple. Now, we enable permalinks. In your /etc/wordpress/htaccess file, uncomment the "Configuration for a single blog hosted on / (root of the website)" section, if you want to host a blog on the site's root:

##
## Configuration for a single blog hosted on / (root of the website)
##
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

If you want to do something fancier, you'll need to make the htaccess file part of the www-data group and group writable. You'll then be able to customize the permalink settings in Wordpress itself with the following change to Apache's default site config:

<Directory />
    Options FollowSymLinks
    AllowOverride All
</Directory>

Configuring Apache

WordPress is now ready to go. Really. We just need to configure Apache to work with WordPress.

Edit your site's new Apache configuration in the /etc/apache2/sites-available directory. It doesn't matter what you call your new site, as long as it's a file in that directory. I used /etc/apache2/sites-available/blog.
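If you'd rather not dig that file out of the repository, a minimal sketch (assuming Debian's wordpress package, which installs to /usr/share/wordpress):

Alias /blog /usr/share/wordpress
<Directory /usr/share/wordpress>
    Options FollowSymLinks
    AllowOverride Limit Options FileInfo
    DirectoryIndex index.php
</Directory>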

Since we never specified a particular IP in the configuration files, this setup should work for any dynamic IP. If that's not the case, this script might be helpful.

Now, let's enable our new site:

# a2ensite blog
# /etc/init.d/apache2 reload

Since you've enabled permalinking, we might as well use it:

# a2enmod rewrite

Finally, restart Apache and enjoy!

# /etc/init.d/apache2 restart

Now, test out (http://example.com/blog) or administer (http://example.com/blog/wp-admin) your blog.

IMAP Server

I'm sticking with the default UW-IMAP server for now. Why? Because it seems to work and takes near-zero configuration. Just run dpkg-reconfigure uw-imapd.

I decided to go with TLS (port 143) support only, specifically because I then need to leave only a single port open for both insecure (plain-text) and less-insecure (encrypted) communications. With SSL, I'd need to leave port 993 open for less-insecure communication and port 143 open for insecure communication. Having two more moving parts (one port, one additional interaction) to keep in mind just seems like a bad idea.

SMTP Server

Before configuring Exim, plan out your email server's configuration:

# dpkg-reconfigure exim4-config

Given that the default IMAP server (UW-IMAPD) doesn't support the maildir format, I'll stick with mbox for now. Maybe I'll change to Dovecot at some point and use maildir then, but that's only once I prove UW-IMAP doesn't meet my needs.

Unfortunately, I can't just send emails. With my current dynamic DNS setup, I need to route outgoing mail through a smart host before any mail servers will actually accept the mail. Also, if your ISP, like mine, blocks traffic on port 25, you'll need to go through the files you configured in the system preconfiguration section and change the SMTP port everywhere.
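For reference, the dpkg-reconfigure answers land in /etc/exim4/update-exim4.conf.conf; the smarthost-relevant lines look something like this (the smarthost address and port are placeholders from your provider):

dc_eximconfig_configtype='smarthost'
dc_smarthost='mail.example.net::587'
dc_localdelivery='mail_spool'

The mail_spool setting keeps local delivery in the mbox format mentioned below.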

IM Server

For an IM server, I'm going with the only IM protocol I can host myself: Jabber (XMPP). There are lots of servers available and I've had good experiences with both ejabberd and OpenFire. OpenFire runs on Java and thus takes up a fair amount of memory (RAM). With memory at a premium on this system, I'll go with ejabberd, which should take up less RAM overall.

Setup was simple: I installed ejabberd and ran dpkg-reconfigure ejabberd. That was it. Helpfully, you can host the Jabber server at any hostname under your domain. I'll be using "im.(server.name)" just to keep the services separate.

Worryingly, running ejabberd opens an absurd number of TCP ports:

Port   Process  Explanation
4369   epmd     "It is strongly recommended to block the port 4369 in the firewall for external connections."
5222   beam     The standard Jabber client-to-server port. Should be open.
5269   beam     The standard server-to-server (S2S) communication port. Should be open.
5280   beam     The web-based admin interface. I doubt a Jabber client can do everything it offers, and convenient as it is, this has already been firewalled off of my server.
7777   beam     The standard Jabber file-transfer port. Might as well open it, I suppose.
54829  beam     This, and other high-numbered ports, are used for epmd communication between clustered nodes and shouldn't be accessible from outside the server cluster. So, blocked from outside.

The only thing this is really missing is OneSocialWeb integration. XMPP supports mutually-agreed following, and with support for sending messages to groups (Google's circles of friends), you'd achieve my primary use-case: a limited (private) broadcast to known contacts.

Interestingly, public registration for the XMPP server is disabled by default but that can be changed on the Access Rules page. If you keep public registration disabled, you'll have to manually create accounts for anybody you want to let on. This makes it quite hard to be a community host, so it's a trade-off.

Project Server

For my code hosting requirements, I'm going with Trac. It seems pretty easy to handle and I've always liked the look of Trac sites. Plus, ever since my experience with Pylons (now Pyramid), I've had a soft-spot for Genshi. I've always wanted to put together a project with Pylons, Genshi, and ZODB (or some other ODB) - but that's neither here nor there.

For my Trac install, I'll be using the standard Trac package, along with the Spamfilter and Mercurial plugins. After installing the right packages, Trac is really easy to enable! Just follow the readmes.

Once Trac itself is configured, you'll need to create an Apache web directory for each of your projects. You do this by first creating a trac-data folder, owned by the www-data user, that just kinda sits there and contains the dynamic data that makes your Trac site a repository. You'll export some of those resources to a trac-www folder, which Apache will use to serve your project.

  1. Prepare a trac-data folder for each of your current projects. The www-data group needs read and write access to the folder, so think about where you're putting it, possibly /home/www-data/code/ or /var/code/:

    # trac-admin /path/to/trac-data/folder initenv
    # chown -R www-data:www-data /path/to/trac-data/folder
    
    
  2. Export that folder for Apache:

    # trac-admin /path/to/trac-data/folder deploy /path/to/trac-www
    # chown -R www-data:www-data /path/to/trac-www
    
    
  3. Upgrade as necessary (dependent on error messages):

    # trac-admin (dir) upgrade
    

Publishing to Apache

You'll need to create another Apache config in sites-available and enable it again, just like we did for WordPress:

<VirtualHost *:80>
    ServerName code.example.com
    WSGIScriptAlias / /var/www/code-www/cgi-bin/trac.wsgi
</VirtualHost>

If you only want to host a single project, you're done! Otherwise, poke around your new site for a few minutes, then continue.

Multiple Repositories

This gets somewhat complicated. I was originally planning on wimping out and just dumping a static HTML file in the directory and updating my Apache config every time I published a new project, for simplicity. Trac v0.12 apparently fixes a lot of the multiple repository ridiculousness in ticket 2086, but it'll be a few years before that's out in Debian Stable.

However, I pushed through and now have a decent automagic project setup. My Apache config dynamically includes the necessary WSGIScriptAlias lines for any Trac projects I have so they all load properly without any manual intervention.

I require three customizations specific to hosting multiple projects from the same server:

  1. I need a landing page that automatically displays the list of available projects users can explore.
  2. I'd like to use a single login database for all projects. This'll simplify things for me, if not anybody else.
  3. The web server should automatically detect the available projects and serve them appropriately. Each project should be served in its own sub-directory on the server.

Requirement 1 is a two-parter, while requirements two and three can be handled together (and need to be, for some unknown reason).

First, the landing page. It's an executable CGI file that just parses the list of directories and shows them to the user. This is stored in /var/www/code/index.cgi. Improvements will, of course, separate the descriptions out into their own file. Don't forget to set the group-executable bit on the file.

Secondly, each project is served from its own sub-directory on the web server. Each project will share a single set of logins, for now. This is handled in the site's Apache config file:

<Perl>
    #!/usr/bin/perl

    # trac file path location
    my $trac_path = "/var/www/code/";
    # trac URL on the server, according to web-clients
    my $trac_location = "";

    opendir(TRAC_ROOT, $trac_path)
        or die "Unable to open Trac root directory ($trac_path)";

    while (my $name = readdir(TRAC_ROOT)) {
        # since -d doesn't work, kludge it.
        # same kludge as in /var/www/code/index.cgi
        next if $name =~ /\./;

        push @PerlConfig,
            "WSGIScriptAlias /$name $trac_path/$name/cgi-bin/trac.wsgi";

        $Location{"$trac_location/$name/login"} = {
            AuthType => "Basic",
            AuthName => "\"trac\"",
            # AuthUserFile => "$trac_path/access.user",
            AuthUserFile => "/home/www-data/trac.htpasswd",
            Require => "valid-user" };
    }
    closedir(TRAC_ROOT);

    __END__
</Perl>

If you use this config as-is, you'll also need to install the "libapache2-mod-wsgi" package. Also, the AuthUserFile line locates the username (login) file, so you'll probably want to customize that.

Maybe I'll change the second half of the decision later and give each project its own logins, but for now, I don't expect it to be an issue. It seems quite difficult to muck things up on a Trac site with a non-administrative login.

Error Messages

Warning: Can't synchronize with the repository (.../ does not appear to contain a Mercurial repository.)

This section began life as an email to the Trac-Users mailing list:

This issue came up a long time ago on this mailing list; I just ran into it tonight, and I've just figured it out. If you see a message telling you:

Warning: Can't synchronize with the repository (.../ does not appear to
contain a Mercurial repository.)

This is actually a file permission issue. My repository was set up as:

directory    owner  perms
repository/  me:me  0600
.hg/         me:me  0700
afile        me:me  0600
asubdir/     me:me  0700
otherfile    me:me  0600

For some unknown, inexplicable reason, my project had 600 (unreadable) permissions. So, the www-data user (the Apache user) was unable to read and thus unable to serve the directory.

Fixing this was relatively simple. I just had to:

  • Give everything in the directory read permissions for everyone.
  • Give all the subdirectories executable permissions, as in the commands below (maybe not strictly necessary, but it can't hurt).
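In command form (run from the repository's parent directory):

$ chmod -R a+r repository/
$ find repository/ -type d -exec chmod a+x {} \;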

Afterward, it looked more like:

directory    owner  perms
repository/  me:me  0755
.hg/         me:me  0755
afile        me:me  0644
asubdir/     me:me  0755
otherfile    me:me  0644

Hope this helps somebody in the future, maybe even openhex,

Nick

Tor Relay

Please Note:

As proud of this section as I am (and, believe me, I am), please ignore it completely. Please use the Tor Browser Bundle for your Tor-based-browsing instead. The TorButton has been deprecated because using the same browser for normal and Tor-based browsing identifies you. That will harm you. DON'T DO THAT!!

Have I made myself clear? :)

If you want to set up a node routing traffic on the Tor network, however, these instructions are still useful.

Running a Tor relay is a fantastically useful proposition, both for freedom of speech and for (slightly) anonymizing your own traffic and that of your visitors. Of course, you'd get the most anonymizing power if you ran an exit node, but it's a trade-off between various risks and resources.

First, you definitely need to read up on Tor, the proxy server Polipo, and how they work before continuing. This is probably the service where the most Bad Things (TM) can happen because of a misconfigured server: you'll end up harming other people who rely on your service, not just yourself.

Homepage:https://torproject.org/
Overview:https://torproject.org/about/overview.html.en
Manual:https://torproject.org/docs/tor-manual.html.en
Install Guide:https://torproject.org/docs/documentation.html.en
Wiki:https://trac.torproject.org/projects/tor/wiki/
FAQ:https://torproject.org/docs/faq.html.en

In this section, you'll notice that I put no example files in the repository. That's to make sure there are no files to get outdated as recommended or default Tor and Polipo settings change.

Configuring a Tor End-Node

Configuring an end-node is the simplest method of using Tor. In this mode you're the only person on the Tor network that uses your system; only your traffic is delivered to and from your system from the Tor network, you aren't routing anybody else's traffic. It's a fairly effective and easy method to improve the privacy of a single system. Try it on your primary computer first, to get used to editing your torrc.

First, follow the official Tor configuration guide. Make sure you install at least the following packages:

  • tor
  • tor-geoipdb
  • polipo (and, optionally, privoxy)
  • ntp or ntpdate (preferring NTP for always-on systems)

If everything's hosted on a single system (if the Tor clients and server are on the same machine), you shouldn't need to modify any config files. Just install these additional packages and you're good to go:

  • iceweasel
  • xul-ext-torbutton

Configuring a Tor Router for your Local Network

Once you're comfortable with running Tor on a single system, you might consider using your plugserver as a relay for the rest of your local network. Your plugserver will be routing all your Tor traffic, though the plugserver still appears as an end node to the rest of the Tor network.

To configure this type of setup, again, hit the documentation hard. Unfortunately, these guides assume that you're running Tor from a single system, not that you're routing your local network's traffic, so we'll need to make a few changes to account for that twist. Both Tor and Polipo require the server's IP address to be statically defined in their config files.

Your local network's client systems need the Tor Browser Bundle installed. You may or may not choose to connect your Tor Browser clients to your Tor node directly. Don't use TorButton. That's all I'm saying on this subject.

Configuring Tor for Local Network Routing

Configuring Tor itself as a local-network relay is relatively simple, involving only two configuration files. In your /etc/tor/torrc set a SocksListenAddress that allows your plugserver to route for the local network generally, and not just the localhost:

SocksListenAddress (local-network static IP)

Configuring Polipo for Local Network Routing

Polipo has decent Tor admin information, including recommended values for some of these keys, while Tor's old FAQ covers a couple settings. You'll need to configure Polipo's settings in /etc/polipo/config, including:

  • proxyAddress
  • allowedClients
  • socksParentProxy

socksParentProxy must be set to the listening IP and port of your Tor SOCKS service (your plugserver's IP address, e.g., 192.168.1.4:9050), not to Polipo itself. Set up allowedClients with your local network's address range (e.g., 192.168.1.0/24) for easy configuration. I'd also recommend disabling the disk cache and local interface access with:

  • diskCacheRoot
  • disableLocalInterface
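Pulled together, a sketch of the relevant /etc/polipo/config entries, assuming a 192.168.1.x network with the plugserver at 192.168.1.4:

proxyAddress = "192.168.1.4"
allowedClients = "127.0.0.1, 192.168.1.0/24"
socksParentProxy = "192.168.1.4:9050"
socksProxyType = socks5
diskCacheRoot = ""
disableLocalInterface = true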

The trickiest piece is that socksParentProxy has to track your current local-network IP, a detail noted only in Tor's old FAQ.

Configuring a Tor Relay for the Whole Tor Network

Regardless of how you route your local network's Tor traffic, it's still an end node. To be of more use to the network, and to make your Tor traffic somewhat more anonymous, you can become a Tor relay. Configuring a relay server is fairly simple; you only need to change a couple of lines in your torrc.
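At minimum, that means an ORPort, and optionally a DirPort and Nickname; the ports below match the Anonymizer entries opened in the firewall table:

ORPort 9001
DirPort 9030
Nickname (yourRelayNickname)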

And, finally, you'll need to configure your ExitPolicy. For now, I'm just playing middleman as a Tor relay and preventing any traffic from exiting my server (routing only between Tor nodes) with this policy:

ExitPolicy reject *:*

If you're braver than I, try following these tips for a happy exit node experience. Make sure you follow their recommendations about notifying your ISP and setting up reverse DNS and exit notices, etc.

BitTorrent Tracker

BitTorrent is an excellent way to distribute large files for a low overall bandwidth cost. The biggest drawback is that users who don't have access to BitTorrent won't be able to download the files. However, we can work around that limitation by serving files both over BitTorrent and HTTP, preferring BitTorrent when possible.

Importantly, this might be a mistake. I don't know the RAM or processor requirements of a busy BitTorrent tracker, and too popular a tracker might exhaust my plug-server. But, that's what this is: an experiment.

I have a couple arbitrary requirements for my BitTorrent tracker:

  • The tracker should have its own user and home directory to make management easier and safer. Nobody runs trackers or seeds as root, for good reason.
  • Files should be served from the "tracker" server.
  • Should start and stop both tracking and seeding on system startup, without manual intervention.

I can start the tracker and seed easily enough myself, though I should configure a startup script to guarantee everything comes up on boot. The startup script is modeled after Boe Miller/Jeff Reifman's Debian Woody BitTorrent Manager. I'm trying to get this pushed into the standard BitTorrent package so that, in the next release of Debian, you'll just have to change a few lines in the config file to be up and running.

First, you'll need to install the bittorrent package named, appropriately enough, "bittorrent". Next, you'll need to setup a BitTorrent user account: an account to run the daemon behind the scenes, so you aren't running the service as root. To add a bittorrent-tracker user, run adduser bittorrent-tracker.

Now, you need a control script to start and stop the BitTorrent process, like /etc/init.d/bittorrent-tracker. You'll also want to be able to configure the service somehow, and that's where /etc/bittorrent-tracker.conf comes in. You'll need to copy these to the appropriate system locations from the repository.

You'll need to change the SERVER and BTENABLED lines, if nothing else.

Now, link /etc/init.d/bittorrent-tracker to the right places in your /etc/rc*.d/ directories. To start it automatically:

  • /etc/rc2.d/S20bittorrent-tracker
  • /etc/rc3.d/S20bittorrent-tracker
  • /etc/rc4.d/S20bittorrent-tracker
  • /etc/rc5.d/S20bittorrent-tracker

To stop it automatically:

  • /etc/rc0.d/K01bittorrent-tracker
  • /etc/rc1.d/K01bittorrent-tracker
  • /etc/rc6.d/K01bittorrent-tracker
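Rather than creating each link by hand, update-rc.d will build all of them from the start and stop numbers:

# update-rc.d bittorrent-tracker defaults 20 01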

At this point, the BitTorrent tracker starts automatically whenever the system comes up. So, your torrents can be tracked. However, you'll also want to make sure that people can download the torrents themselves. Again, Apache to the rescue. You can just use a fairly basic tracker configuration to serve torrent files from tracker.example.com or example.com/tracker. Actually, most of the tracker configuration is pulled directly from Apache's default (example) site.

Now to start hosting torrent files, just:

  1. Place files you want to start hosting in /var/bittorrent-tracker/incoming
  2. Run /etc/init.d/bittorrent-tracker make-all

Your files will then be turned into torrents. The files will be moved to /var/bittorrent-tracker/shared/files when they're converted, and the torrents will be available from your tracker server. BitTorrent does need to calculate the checksum for every piece of every file, so it might take a while to finish all the files.

MicroBlogging / Status Updates

As far as micro-blogging or social networks go, there are a couple options. The ones I'm most acquainted with are:

StatusNet:A Twitter-replacement.
GNU Social:Built on top of StatusNet, but not updated since May 2011, when the repository was created. At this point, it's an outdated copy of the StatusNet code. The wiki has seen lots of interesting updates, while the mail archives seem dead.
Diaspora:A Twitter/Facebook-replacement that doesn't seem to integrate well with other services.
OneSocialWeb:Not really in development? Possibly the best solution, architecturally, but the least complete as far as I can tell. There isn't an ejabberd client (only an OpenFire client), and my inquiry to the ProcessOne folks (developing the ejabberd client) hasn't yet been answered.
Buddy Cloud:Looks nice, I'll try this one after Friendica.
Friendica:Not the prettiest of interfaces, but it seems to integrate with more alternate services than any other status network. If I'm going to get folks out of Facebook, we'd better be able to tell them about it.

There seems to be a war between approaches: XMPP-based or FOAF-WebFinger-Salmon-Zot-based. Only XMPP seems to support end-to-end GPG encryption at the protocol level, while the other approaches seem to integrate better with other web-services. Once somebody creates a (protocol) <-> email bridge with a decent web interface, the game'll be over. XMPP <-> WebFinger could be handled through a plugin.

StatusNet

I'm moving away from StatusNet because it doesn't seem to support private communications. It's great for communicating publicly within a community, but that's not my use case. It also can't seem to communicate between nodes, StatusNet or otherwise.

Setting up StatusNet is surprisingly simple! Basically, you'll need to:

  1. Create a new Apache configuration for your StatusNet install and enable it.

  2. Install the prerequisites.

  3. Extract the stable version to a directory.

  4. Fix file permissions and ownership.

    • Particularly important is setting:

      chmod 640 config.php
      

      Otherwise, it's 644 by default, letting anybody read your database password.

  5. Create the StatusNet database (the mysqladmin command; see the sketch after this list).

  6. Correct the StatusNet user's permissions (the mysql command).

  7. Point your browser to your StatusNet install and run the installer.
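
For steps 5 and 6, the commands look something like this (the database name, user name, and password are illustrative; substitute your own):

# mysqladmin -u root -p create statusnet
# mysql -u root -p
mysql> GRANT ALL PRIVILEGES ON statusnet.* TO 'statusnet'@'localhost' IDENTIFIED BY 'yourpassword';
mysql> FLUSH PRIVILEGES;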

If the installer won't run successfully, check out the strange errors and their solutions. If you select a "Private" site (the default), you'll need to invite users before they can register. Private sites offer no public registration, which makes sense, but isn't well explained until you go to http://example.com/mublog/main/register/.

Configuring Fancy URLs

No, I don't really know what difference they make, but they're neat, I think. After enabling "fancy URLs", you'll need to edit Apache's /etc/apache2/sites-available/default (link) to allow overrides:

<Directory /var/www/>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>
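
Fancy URLs work through .htaccess rewrite rules (which is why the override is needed), so if the rewrite module isn't already enabled, enable it and restart Apache:

# a2enmod rewrite
# /etc/init.d/apache2 restart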

Friendica

TODO:Fill this in once I build out the service.

Unfortunately, Friendica doesn't have official Debian packages. It does have an unofficial Debian install script for new systems, which covers ground similar to this document (with different configuration choices). The two don't integrate very well, though: you'll end up running two different web servers at once.

Friendica actually has two sets of official instructions, neither of which is complete. I'll fill in a few holes here:

  • Don't run the setup process through HTTPS. Run it through HTTP; otherwise the final check, the "URL rewrite" test, will fail without explanation. If you're affected by this issue, you'll see errors like this in your Apache error log:

    File does not exist: /var/www/friendica/install
    
  • In your Apache's configuration, make sure the Friendica install directory can override the FollowSymLink option. Put this either in the default configuration or a Friendica-specific configuration file:

    <Directory /var/www/friendica>
        Options FollowSymLinks
        AllowOverride all
    </Directory>
    
  • Make sure it's installed to a /var/www subdirectory. It doesn't seem to work at all as a symlink, which means it's impossible to install to /home, which mangles my partition scheme. Can't win 'em all.

  • If you don't have email delivery, you... can't use it. Well, that almost kills it for me. Fortunately, I can send emails, I just can't receive them.

  • Also, it's PHP, which means it's unlikely to get into the FBX proper.

SSL Connections

SSL cannot be trusted. The fundamental flaw with centralized-trust-for-sale systems is that, well, everyone has to buy their (and your) trust from one place. The $100 price tag will deter only the least well-funded criminals, and that's assuming nobody makes any honest mistakes.

But perhaps a broken bicycle is better than none at all, eh? Using SSL means that only parties acting in bad faith (and their many friends) can snoop on the data transfers; the traffic will still be invisible to everybody that neither of us trusts. So, maybe your data won't be part of Google's newest ad-network because of my work. We can only hope.

It's possible to make your clients less vulnerable to man-in-the-middle (MITM, certificate-faking) attacks by telling them about the correct certificate through several alternate channels, allowing them to check that all the channels agree. There are a couple of ways you could do this:

  • Display your SSL certificate's fingerprint on the website itself.
  • Compare the user's current certificate to your reference certificate, displaying a warning banner when the two don't match.
  • Encourage your users to visit your site via Tor or other anonymizing service.
  • Encourage your users to install SSL certificate verifying extensions.

The point of these maneuvers is to make the correct certificate easier to identify and thus harder to fake.

TODO:Deploy my own signed certificate for each of the services I run. Document that here.
TODO:Perhaps Monkeysphere is the appropriate answer? But how? Documentation reading time!

Enabling SSL in Apache

Enabling SSL in Apache is a very simple process; see /usr/share/doc/apache2.2-common/README.Debian.gz (link) for details. You'll need to enable the SSL module, enable the SSL default site, and finally restart the service:

# a2enmod ssl
# a2ensite default-ssl
# /etc/init.d/apache2 restart

To use a custom self-signed ("snakeoil") certificate, change the default-ssl config's SSLCertificateFile and SSLCertificateKeyFile lines.
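
If you want to generate your own self-signed certificate rather than reusing the package-provided snakeoil one, something like this works (the file paths are illustrative):

# openssl req -new -x509 -days 365 -nodes \
    -out /etc/ssl/certs/example.com.pem \
    -keyout /etc/ssl/private/example.com.key

Then point the two directives at the new files:

SSLCertificateFile    /etc/ssl/certs/example.com.pem
SSLCertificateKeyFile /etc/ssl/private/example.com.key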

TODO:Document PageKite SSL setup when I do get it figured out.

PageKite and SSL

Requires custom domains. I'm not sure how to serve multiple domains... yet? But it seems specific to the Apache config's vhost.

Wiki

Thomas Ruddy requested a wiki for his FreedomBox. After polling the discuss list, ikiwiki was the clear winner. Luckily, it's already packaged for Debian. Make sure you install the recommended packages, because ikiwiki unfortunately requires you to compile software to set up a site. That means you'll have a compiler on the system, making it easy for anybody who gains access to your server to build and test their own binaries there; regrettable, but necessary for this wiki.

There are a couple disadvantages, so I might experiment with others and change systems:

  1. The wiki setup script takes no parameters. You must edit the script to change the default setup. Narsty.
  2. Because of the above, you have to deal with the fact that the wiki is installed into a user's home directory by default.
  3. That means that you need to make sure Apache will look for CGI files in users' home directories. Security/administration eww.

This does bring up a question of box ownership, though: is a communal resource owned communally? If everybody gets a user account (so they can receive email, etc.), they can also set up a wiki (unless I change the setup file). That also implies that people can install and run their own CGI files on the server, and a malicious CGI script could easily bring down a server. I don't think I'm comfortable with that. Restricting it is also in line with the rest of the setup so far: administrators decide which services are available.

To enforce this decision, I'll first need to modify the setup script to use a reasonable directory setup (storing public-facing things in /var/www/wiki/ instead). I'll save my copy of /etc/ikiwiki/auto.setup (link) as /etc/ikiwiki/sitewide-wiki.setup:

destdir => "/var/www/$wikiname_short",
url => "http://$domain/$wikiname_short",
cgiurl => "http://$domain/$wikiname_short/ikiwiki.cgi",
cgi_wrapper => "/var/www/$wikiname_short/ikiwiki.cgi",

Once you've configured the setup file to your heart's content, run the ikiwiki setup for the wiki. If you use the default directory structure (with auto.setup) and don't put your wiki under /var/www, you'll need to enable the "userdir" Apache module. I feel, however, that's a bit unsafe, and haven't done it: it can expose users' home directories to the general Internet if you aren't careful. I'd rather prevent, than patch, that hole.
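
With the edited setup file in place, building the wiki is just one command (assuming the file name used above):

$ ikiwiki --setup /etc/ikiwiki/sitewide-wiki.setup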

Testing

This section contains a few tips for testing I've come up with along the way.

Use /etc/hosts

If you don't yet have a domain name or still need to rekajigger your router correctly, you can use the /etc/hosts file to your advantage. Just plug your domain name and IP address into the hosts file of any computer you want to test with, and you're golden: that computer will then always resolve the name to your test system. It's a good strategy for testing with both your plug computer and your primary system.

If you add additional servers to your setup, make sure they're specified in both your primary and plug server's hosts files. For example:

192.168.0.3    example.com www.example.com

You need to specify more than just example.com: each hostname might actually resolve to a different server, so each needs to be listed explicitly. It makes sense when you think about it.

You'll also need to set the server name in Apache's config. Since all my hosts are virtual hosts, I just plugged it into the generic config /etc/apache2/httpd.conf:

ServerName example.com

There went two days of testing.

VirtualBox Specific Concerns

If you're not lucky enough to have a plug-server already available (like me), you can fake it via a VM. I used VirtualBox. It requires a couple of hacks:

  • It's a pain to get running in a 64-bit host machine. GIYF, I set this up a long time ago and only remember changing some virtualization-related BIOS option.

  • You'll need to select the "Bridged" networking mode for your VM. Your VM will then request an IP address from your router and you'll be able to access it on the local network, like any other computer. This makes configuring everything a LOT easier.

  • Running a VM on a 64-bit machine makes the host un-hibernatable. Apparently, enabling VT-x/AMD-V causes hibernation to crash on Linux, as of VirtualBox 3.2 (as packaged in Debian Squeeze's stable release).

    The /etc/init.d/virtualbox-vm-control (link) script works around the issue. I've symlinked it in the following places on my host so it runs whenever the host system comes down:

    • /etc/rc0.d/K01virtualbox-vm-control
    • /etc/rc1.d/K01virtualbox-vm-control
    • /etc/rc6.d/K01virtualbox-vm-control
    • /etc/pm/sleep.d/01-virtualbox-vm-control
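
    The heart of such a script is just saving the state of every running VM before the host goes down. A minimal sketch of the idea (not the linked script itself):

    #!/bin/sh
    # Save the state of every running VM so nothing is lost when
    # the host hibernates or shuts down.
    VBoxManage list runningvms | sed 's/.*{\(.*\)}/\1/' |
    while read uuid; do
        VBoxManage controlvm "$uuid" savestate
    done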

Discovering your IP Addresses

Figuring out your external IP address is a bit of a pain. Fortunately, I wrote a script that can do that for you, using any number of trusted IP-address-reporting hosts. The script ignores hosts that error out and only reports an IP address if all of the responding hosts agree; if any hosts disagree, it errors out too. It's stored in /usr/local/bin/ip-external (link).

Another script also exists for finding your internal IP address: /usr/local/bin/ip-internal (link).
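
To illustrate the approach, here's a minimal sketch of the external-address check (this is not the linked script, and the host list is illustrative):

#!/bin/sh
# Report the external IP address only when every reachable host
# agrees; error out if any two hosts disagree.
hosts="http://icanhazip.com http://ifconfig.me/ip"
result=""
for host in $hosts; do
    ip=$(curl -fs "$host") || continue    # ignore hosts that error out
    if [ -z "$result" ]; then
        result=$ip
    elif [ "$ip" != "$result" ]; then
        echo "hosts disagree: $ip != $result" >&2
        exit 1
    fi
done
[ -n "$result" ] && echo "$result"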

Finding the Server

Every once in a while, you'll need to find your plugserver on the local network. Try this command to find SSH servers on the local network:

$ nmap -p 22 --open `ip-internal`/24

Actual Hardware

I've tried installing this on actual hardware now! Enclosed are my experiences:

DreamPlugs:Not so hot. Both upgrade and system recovery could and should be easier.

Checklist

Probably the most useful part of all this: a checklist. This will allow you to verify that you've completed every relevant configuration step as you go.

  • [ ] Download a local copy of this repository.
  • [ ] Buy a plug server.
  • [ ] Get a domain name.
  • [ ] Install Debian.
    • [ ] Configure partitions.
    • [ ] Select SSH, Web, and Mail servers.
    • [ ] Remove unnecessary services.
    • [ ] Configure automatic updates.
  • [ ] Prepare the system for services.
    • [ ] Configure log rotation.
    • [ ] Configure logcheck.
    • [ ] Drop into single-user mode (init level 1).
    • [ ] Install harden packages.
      • [ ] Create Tripwire passwords.
      • [ ] Initialize Tripwire database.
    • [ ] Set up auditing packages.
      • [ ] Check Security (checksecurity).
      • [ ] Rootkit Check (chkrootkit).
      • [ ] Package Checksum Verifier (debsums).
      • [ ] Password Tester (john).
      • [ ] Vulnerability Reporter (tiger).
    • [ ] Get out of Single-User mode.
    • [ ] Configure Backups.
      • [ ] Define backup policy.
      • [ ] Stick to your policy's schedule.
  • [ ] Configure firewall.
  • [ ] Configure SSH server.
  • [ ] Configure Web server.
  • [ ] Configure Blog server.
    • [ ] Install WordPress and MySQL-server packages.
    • [ ] Set up WordPress's MySQL tables.
    • [ ] Configure Permalinks.
    • [ ] Enable mod-rewrite.
    • [ ] Set up Apache config.
    • [ ] De-prioritize the default config.
    • [ ] Enable Apache site.
  • [ ] Enable IMAP server.
  • [ ] Enable SMTP server.
  • [ ] Enable IM server.
    • [ ] Install packages.
    • [ ] Configure packages (with dpkg-reconfigure).
    • [ ] Enable public registration?
  • [ ] Configure project server.
    • [ ] Install packages.
    • [ ] Identify Trac home directories.
    • [ ] Export data and www directories per project.
    • [ ] Create .htaccess for user account control.
    • [ ] Enable Trac plugins in each project.
    • [ ] Set up Apache config.
    • [ ] Set up index.cgi landing page.
  • [ ] Enable Tor.
    • [ ] Install server packages on server.
    • [ ] Install client packages on clients.
      • [ ] Set up TorButton.
    • [ ] Configure your primary system as an End-Node, or
    • [ ] Configure your plug-server as an End-Node, or
      • [ ] Configure Tor for local-network routing.
      • [ ] Configure Polipo.
    • [ ] Configure either system as a Relay.
      • [ ] Configure Tor for Relaying.
      • [ ] Decide on and configure your ExitPolicy.
      • [ ] Follow the minimal harassment tips.
  • [ ] Configure BitTorrent Tracker.
    • [ ] Install packages.
    • [ ] Make a BitTorrent user.
    • [ ] Set up BitTorrent as a system service.
    • [ ] Set up Apache config.
  • [ ] SSL Connections
    • [ ] Stop trusting SSL.
  • [ ] Reconfigure firewall.

Ongoing Maintenance

This is probably the most important section: how to keep handling your system once it's up. First, we should be off to a pretty good start: everything's just about as locked down as it can be, and the auto-updater should keep everything secure going forward. Next, acquaint yourself with the standard troubleshooting and monitoring tools (also on archive.org), especially nmap, netstat, htop, iotop, and nettop. Also, ps, who, grep, and sed. And a scripting language to make common tasks easier. You'll especially want to make sure you read the Securing Debian Manual and Debian Administration.
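
For example, a few one-liners worth keeping handy:

# netstat -tlnp          # which services are listening, and as what process
$ nmap localhost         # which ports are open (run it remotely, too)
$ ps aux | grep apache2  # is the web server actually running?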

GnuPG (GPG/OpenPGP)

To clarify the following section's terms: GnuPG and PGP are different implementations of the OpenPGP standard. We're only concerned about OpenPGP (the standard) and GnuPG (the implementation).

I'm half tempted to put this in the system-configuration section above, because it's just that important. However, I can't, because OpenPGP isn't something you can do alone. OpenPGP is based on a web-of-trust security model and, trust me, being a web of one is both unhelpful and boring.

A web of trust lets you know how much you should trust someone's self-identification - their claim that they are who they say they are. You rely on your friends' opinions when deciding whether to trust someone: "Are you who you say you are? Should I believe you? Well, let me check how many of my friends believe you and get back to you." This system comes with all the inherent flaws you can imagine in such a decentralized, trust-based system: keys might be compromised, people might introduce themselves to the web in bad faith, etc. However, when the web is closely vetted, it's invaluable for identification and authentication.

As far as implementing GnuPG goes, you'll want to read everything in GnuPG's documentation section, especially the GNU Privacy Handbook. You'll want to follow the instructions for your specific GnuPG package. I'd recommend a front-end like GPA or Seahorse. However you do it, the basic steps involve:

  1. Creating a public-private key pair.
  2. Securing a copy of your private key somewhere safe.
  3. Pointing people to your public key.
  4. Accepting and trusting other people's keys.
  5. Signing and reading messages with your key.
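
From a terminal, the first few steps look something like this (the email address and key ID are illustrative):

$ gpg --gen-key                                            # 1. create a key pair
$ gpg --armor --export-secret-keys you@example.com \
      > secret-key.asc                                     # 2. back this up somewhere safe, offline
$ gpg --armor --export you@example.com > public-key.asc    # 3. publish this one on your site...
$ gpg --keyserver pool.sks-keyservers.net --send-keys 0xDEADBEEF  # ...or push it to a keyserver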

Trust vs. Signing

What's the difference between trusting a key and signing a key? Why would you do one or the other?

Signing a key says that you've verified the key belongs to its owner. The usual process is to sign the key and send it back to its owner for publication.

How much you trust a key is a rating of how much you trust the other person's signatures - how selective and careful they are about signing other people's keys. If you don't know, marginal trust is alright. If you're sure they're very selective about signing keys, full trust might be alright. And if they're ready to sign anything, no trust might very well be appropriate.
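
In GnuPG terms, the two operations look like this (a sketch; the address is illustrative):

$ gpg --sign-key friend@example.com     # sign, after verifying their identity
$ gpg --armor --export friend@example.com > friend-signed.asc   # send this back to them
$ gpg --edit-key friend@example.com     # then type "trust" at the prompt to set their trust level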

Future Directions

Where to go next with this? I'm thinking of adding GNU MediaGoblin and OStatus.

OStatus seems awesome, in that it effectively allows any "feed" of data to become social after the fact. There's even an OStatus plugin in development for WordPress. Presumably, you should be able to optionally link all your OStatus feeds in one place to present a unified feed somehow... Otherwise, I've just named three conceptually separate OStatus feeds, which kind of seems to defeat the purpose.

I'd also love to see mesh network support, which is already available in the Linux 2.6 kernel. Now we just need some decent method to enable and configure it.

Thanks

Thanks to the Freedombox-discuss mailing list for reading through this, and especially to John Gilmore for clarifying a few of my technical blunders.

License

If this document is sufficiently creative to be copyrightable:

This document is copyrighted and copylefted: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This document is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this document. If not, see http://www.gnu.org/licenses/.