django-monitor / Home

Sultan Imanhodjaev

Introduction

Django-Monitor was developed to monitor a collection of websites for possible blocking and/or defacement. It also contains functionality to scan those websites for possible security vulnerabilities using security tools such as NMap and (soon) Metasploit.

Apps in Django-Monitor

Htmlgrab: This is the core application of django-monitor, which is mainly concerned with site scraping and extracting interesting tags.

NMapScans: This app can be used to port scan the sites, with all of the options of nmap.

(IN DEVELOPMENT) MetasploitScans: This app can be used to scan the sites using the Metasploit framework.

Basic Usage: Celery Jobs

The admin side of the application provides an interface to schedule scans. From the admin page under "djcelery", select "Periodic tasks". The jobs available are:

html-grab-job: Directly request the HTML over HTTP(S)

html-grab-job-proxy: Request the HTML over a proxy

nmap-scan: Perform a port scan

For html-grab-job and html-grab-job-proxy, the JSON positional arguments (under the "Arguments" tab) work as follows:

["all"] will scan all links in the database. In html-grab-proxy, the proxy name needs to be specified next, ex. ["all", "test_proxy"]

["these", <list of sites in double quotes>] will scan a specific subset of sites. Be sure to specify the sites with the title field of the Link object in the database. In html-grab-job-proxy, the name of the proxy is the second argument specified, followed by the list of sites.

Finally, substituting a number for "all" or "these" will scan that many sites retrieved from the database, e.g. [3]

For nmap-scan, only the name of the NMapScan object to be used should be specified; that object contains the ports to scan, the command-line arguments, and the list of links to scan.
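The argument forms above can be sketched as the JSON payloads you would paste into the "Arguments" field of a periodic task. The proxy name and NMapScan name below are placeholders, not names that ship with the project:

```python
import json

# JSON payloads for the "Arguments" field of each periodic task.
# "test_proxy" and "weekly_scan" are illustrative names only.
args_all = json.dumps(["all"])                      # html-grab-job: scan every link
args_all_proxy = json.dumps(["all", "test_proxy"])  # html-grab-job-proxy: proxy name second
args_subset = json.dumps(["these", "24.kg"])        # scan specific Link titles
args_count = json.dumps([3])                        # scan 3 links from the database
args_nmap = json.dumps(["weekly_scan"])             # nmap-scan: name of the NMapScan object

print(args_all_proxy)
```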

Basic Usage: CLI

An arbitrary website can be scanned (and its results stored in the database) using the htmlgrab_cli.py script.

Usage: python htmlgrab_cli.py <name of site> -p <proxy name>

If the URL does not exist in the database, it will be stored as a new Link object in the database.
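For scripting around the CLI, the invocation above can be built programmatically. A minimal sketch, assuming the script sits in the repository root and that "24.kg" and "test_proxy" are valid site and proxy names in your database:

```python
import sys

# Hypothetical command line for htmlgrab_cli.py; the site title and proxy
# name are placeholders. Running it requires a configured Django project,
# so the command is only assembled and printed here, not executed.
site = "24.kg"
proxy = "test_proxy"
cmd = [sys.executable, "htmlgrab_cli.py", site, "-p", proxy]

print(" ".join(cmd))
```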

REST API for results

We provide a simple REST API to view results from the htmlgrab and nmap scans. The URLs involved are

/api/results/site/url/<url of the site scanned as represented by the title field of the site's Link object>/ will return a collection of JSON objects, one for each result corresponding to the selected site. Example: /api/results/site/url/24.kg/

The fields returned are as follows. All HTML and associated tags are base64 encoded.

link_uri: The URL of the link scanned, which includes the destination port number.

ipaddrlist: The list of IP addresses associated with the link.

timestamp: The date and time of the scan.

html: The HTML returned from the request.

jscript_lst: The contents of any externally linked Javascript.

img_tags: Any IMG tags collected during the scan.

iframe_tags: Any IFRAME tags collected during the scan.

object_tags: Any OBJECT tags collected during the scan.

encoding: The character encoding of the site scanned.

header: The HTTP(S) header received from the initial request.

errors: Any errors that occurred during the scan.

hash: A hash of the result object.
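Since the HTML and tag fields are base64 encoded, a client must decode them after fetching a result. A minimal sketch, assuming a result object shaped like the fields above (the values are fabricated for illustration; a real client would fetch the JSON from /api/results/site/url/<title>/):

```python
import base64
import json

# Fabricated result object with the shape described above.
raw = json.dumps({
    "link_uri": "http://24.kg:80/",
    "timestamp": "2013-01-01T00:00:00",
    "html": base64.b64encode(b"<html><body>hello</body></html>").decode("ascii"),
    "errors": "",
})

result = json.loads(raw)
# The html field (like jscript_lst, img_tags, etc.) is base64 encoded.
html = base64.b64decode(result["html"]).decode("utf-8")
print(html)
```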

/api/results/site/hash/<hash>/ will return the result object corresponding to the hash.

/api/results/nmap/hash/<hash>/ will return the NMap scan result corresponding to the specified hash.
