+Owen Nelson // firstname.lastname@example.org // 941.312.1865
+My current project, in its early planning/prototyping stages, is a DNS/IP
+management and audit platform.
++ Parsing/importing BIND zone files and pulling them into the app database.
++ Editing zones via web interface.
++ Model of relationships between hosts, VMs, HWADDRs, IPs, A/AAAA records, CNAME records.
++ Treat all physical hosts, VMs, *and addresses* as assets to be tracked and leased.
++ Transparent storage of `IPy.IP` instances for IP addresses in the database.
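+The "transparent storage" bullet above boils down to serializing rich address
+objects to a canonical string column and rebuilding them on load. A minimal
+sketch of that idea, using the stdlib `ipaddress` module in place of `IPy.IP`
+(the function names here are hypothetical, not the app's actual API):

```python
import ipaddress

def to_db(value):
    """Serialize an address or network object to a canonical string column."""
    return str(value)

def from_db(text):
    """Rebuild a rich object from the stored string (v4/v6, host or network)."""
    try:
        return ipaddress.ip_address(text)
    except ValueError:
        return ipaddress.ip_network(text)
```

+In a Django app the same two conversions would live on a custom model field,
+so the rest of the code only ever sees address objects, never raw strings.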
+My most *traditional* Django usage to date. Faculty create assignments for their
+courses, and students upload a digital artifact of the finished piece of work.
++ Django >=1.3 (uses class-based views).
++ Uses Compass/Sass (and blueprint) for stylesheet generation.
++ LDAP for authentication via `django-ldap`.
++ Custom per-object/per-user permissions management.
++ Syncs external registration data into the app database via `SQLAlchemy`.
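+The registration-sync bullet is essentially a batch upsert from one database
+into another. The real app reads the external source with `SQLAlchemy`; the
+sketch below uses stdlib `sqlite3` connections for both sides so it stays
+self-contained (table and column names are made up):

```python
import sqlite3

def sync_registrations(src, dst):
    """Pull registration rows from the external connection and upsert them
    into the app database, keyed on student_id so re-runs are idempotent."""
    dst.execute("""CREATE TABLE IF NOT EXISTS registration
                   (student_id TEXT PRIMARY KEY, course TEXT)""")
    rows = src.execute("SELECT student_id, course FROM registration").fetchall()
    with dst:  # one transaction for the whole batch
        dst.executemany(
            "INSERT OR REPLACE INTO registration (student_id, course) "
            "VALUES (?, ?)", rows)
    return len(rows)
```

+Keying the upsert on the student id is what makes a scheduled re-sync safe to
+run repeatedly.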
+Snarc is a Snort sensor (IDS/packet sniffer) dashboard.
+This project was developed and deployed internally. An open source version is
+publicly available as [django-clu](https://bitbucket.org/onelson/django-clu/) but
+is (if I remember) barely functional in its current state. It's been a low
+priority since the internal deployment is fully functional, but it still uses
+some of the same techniques:
++ Daemon "consumer" watches external DB for new data, denormalizes and stores
+ internally for display and reporting.
++ Initially denormalized data was stored in MongoDB, now
+ stored in Postgres (along with the rest of the apps data).
++ New data can be reported to the client via Node.JS and
+ Socket.io for real-time updates.
++ The [django-clu] version has since replaced the Node.JS socket server with a
+ gevent implementation, allowing app config info to be shared since both the
+ Django app and the socket server are Python.
++ New data is analysed as it is consumed; statistics/counts are
+ updated/incremented at this stage.
++ Celery+RabbitMQ are used for periodic tasks, such as daily report generation.
++ Supervisord is used to schedule and manage the supporting services (RabbitMQ,
+ Celery, Socket Server).
++ IP geo-location via GeoIP lookups: logged packet src/dst addresses are mapped
+ via Google Maps (through their visualization APIs).
++ Graphs/charts of various collected stats generated with Google's
+ visualization APIs.
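+The consumer's analyse-as-you-consume step described above amounts to folding
+each new record into running counters while it is denormalized for display. A
+hedged, stdlib-only sketch (the field names are invented; the real app persists
+both the flattened rows and the stats to Postgres):

```python
from collections import Counter

def consume(alerts):
    """Flatten raw Snort-style alert records for display and fold running
    statistics as each record streams in."""
    stats = {"total": 0, "by_src": Counter(), "by_sig": Counter()}
    flat = []
    for alert in alerts:
        row = {  # denormalized, display-ready record
            "src": alert["packet"]["src"],
            "dst": alert["packet"]["dst"],
            "signature": alert["sig"]["name"],
        }
        flat.append(row)
        stats["total"] += 1
        stats["by_src"][row["src"]] += 1
        stats["by_sig"][row["signature"]] += 1
    return flat, stats
```

+Updating counters at ingest time means report views never need to re-scan the
+raw data.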
+This package covers both the client-side and the server (worker) side of
+Ringling's Render Farm pipeline. It inspects the files to be submitted and
+automagically fills in job parameters based on the values found, to promote
+accuracy and reduce wasted cluster cycles. This pipeline replaced the
+Linux-based pipeline associated with grender and gridmate (below).
++ Standalone scripts written for IronPython to interface with MS HPC .Net bindings (ported from C#).
++ Custom UI for job submission embedded in Autodesk Maya (uses pymel).
++ Custom PyQt UI for job submission for use with project archives made by Autodesk 3ds Max Design (architectural).
++ Command-line Python 2.6 scripts used to prepare/cleanup nodes, as well as to steward the execution of each job on the cluster.
+A django-based job tracker and submission front-end (a web front-end for **grender**, below).
++ RabbitMQ and celery to allow submissions to be processed in the background.
++ Job "staging" processed by a daemon which throttles based on system load.
++ Interfaced with SGE (Sun Grid Engine) via `subprocess.Popen` access to
+ command-line utilities to allow job control (pause/cancel).
++ Submitted jobs to the cluster via **grender**.
++ Provided access to job logs as they were generated.
++ Thumbnails of render output via ImageMagick (compiled to support exotic formats).
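+The SGE job-control bullet above maps web actions onto SGE's real command-line
+utilities (`qhold`, `qrls`, `qdel`). A hedged sketch of that dispatch, using
+`subprocess.run` where the original code used `subprocess.Popen`; the `runner`
+parameter is an invention here so the command construction can be exercised
+without a cluster:

```python
import subprocess

# SGE's job-control utilities; "pause" maps to a hold, not a suspend.
SGE_COMMANDS = {"pause": "qhold", "resume": "qrls", "cancel": "qdel"}

def job_control(action, job_id, runner=subprocess.run):
    """Build and dispatch the SGE command line for a job-control action."""
    cmd = [SGE_COMMANDS[action], str(job_id)]
    return cmd, runner(cmd, capture_output=True, text=True)
```

+Injecting the runner keeps the web app testable; in production the default
+`subprocess.run` shells out to the real utilities.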
+Pronounced *g-render* (for grid render). It was the command-line submission
+system animation students used to send Maya projects to our Sun Grid Engine
+cluster for rendering. This code existed before I took the position, but was
+effectively rewritten.
++ Ran in `mayapy` (the command-line version of Autodesk Maya's embedded Python interpreter).
++ Inspected project files for missing assets, stale asset references, and job
+ parameters using Maya and RenderMan python API bindings.
++ "Plugin stack" system employed to allow us to add pass/fail pre-flight checks
+ before sending a job to the cluster.
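+The "plugin stack" bullet describes a registry of pass/fail functions that every
+job must clear before submission. A minimal sketch of that pattern (the check
+names and job fields are hypothetical; the real checks ran against Maya and
+RenderMan APIs):

```python
PREFLIGHT_CHECKS = []

def preflight(func):
    """Decorator that registers a pass/fail check on the plugin stack."""
    PREFLIGHT_CHECKS.append(func)
    return func

@preflight
def has_scene_file(job):
    return bool(job.get("scene"))

@preflight
def frame_range_sane(job):
    return job.get("end", -1) >= job.get("start", 0)

def run_preflight(job):
    """Run every registered check; an empty failure list clears the job."""
    return [check.__name__ for check in PREFLIGHT_CHECKS if not check(job)]
```

+New checks are added just by defining another decorated function, which is what
+made the stack easy to extend.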
+[django-trawler](https://bitbucket.org/onelson/django-trawler/) is a platform for
+launching phishing attacks as a means of auditing awareness. It is currently
+little better than a functional prototype: a simple little thing developed
+in-house for a specific purpose (running a phishing awareness audit of our own
+organization).
+Also see the [docs](http://django-trawler.readthedocs.org) over at readthedocs.org.
++ Uses Django's admin interface for operations.
++ Sends lots of email via SMTP.
++ Uses Django templates for email content (allowing each email to be customized
+ per recipient).
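+The per-recipient templating bullet is a render loop: one message per target,
+each with its own substituted values. The real app renders Django templates;
+this sketch substitutes stdlib `string.Template` so it stands alone (the body
+text, field names, and campaign details are all invented):

```python
from string import Template
from email.message import EmailMessage

# Hypothetical lure body; a Django template plays this role in the app.
BODY = Template("Hi $name,\n\nPlease re-verify your account: $url?uid=$uid\n")

def build_campaign(targets, base_url):
    """Render one personalized message per target so each recipient gets a
    unique, trackable link."""
    messages = []
    for t in targets:
        msg = EmailMessage()
        msg["To"] = t["email"]
        msg["Subject"] = "Action required"
        msg.set_content(BODY.substitute(name=t["name"], url=base_url,
                                        uid=t["uid"]))
        messages.append(msg)
    return messages
```

+Embedding a per-target `uid` in the link is what lets the audit tell who
+clicked.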
+*Soon, I'll be bundling a pure Python mail server (`lamson`) with the app so it'll be as close to zero-conf as possible.*
+I performed some simple maintenance on our existing C-based authentication system.
++ Introduced (PCRE) regex password policy checks triggered on password change.
++ Added functions to calculate Levenshtein distance between the username and
+ password (to make sure they are not within a certain threshold of each other).
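+The original functions were C, but the Levenshtein check translates directly.
+A sketch of the idea in Python (the threshold value here is an arbitrary
+stand-in, not the one actually deployed):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance, one row at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete from a
                           cur[j - 1] + 1,             # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

def too_similar(username, password, threshold=3):
    """Reject a new password within `threshold` edits of the username."""
    return levenshtein(username.lower(), password.lower()) <= threshold
```

+Lower-casing both strings first catches the common "Username1" style of lazy
+password.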
+Simple wrapper to issue commands to a number of hosts in a cluster (not unlike
+fabric, but way more basic).
++ Employed process and thread pools to dispatch large numbers of tasks simultaneously.
++ Used subprocess module to `Popen` commands to the shell to kick off ssh sessions.
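+The two bullets above combine into a small fan-out: a pool maps a run-over-ssh
+function across the host list. A hedged sketch using `concurrent.futures` for
+the pooling (the original used raw process/thread pools); the `run` parameter
+is an invention here so the dispatch can be tested without live hosts:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_on_host(host, command):
    """Shell out to ssh for a single host; returns (host, exit_code, stdout)."""
    proc = subprocess.Popen(["ssh", host, command],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            text=True)
    out, _ = proc.communicate()
    return host, proc.returncode, out

def run_everywhere(hosts, command, workers=8, run=run_on_host):
    """Fan a command out to every host from a thread pool; results come back
    in host order because pool.map preserves input ordering."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda h: run(h, command), hosts))
```

+Threads are a good fit here since each worker spends its time blocked on an
+ssh subprocess rather than burning CPU.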