
RAWR Usage



  • Using Historical Scan Data
    • Pass <file, csv list of files, or directory> as the input argument.
      • RAWR will parse the following formats (see the example after this list):
        • NMap - XML (requires -sV)
        • PowerSploit - XML portscan output
        • Nessus - XML v2 (requires "Service Detection" plugin)
        • Metasploit - CSV
        • Qualys - Port Services Report CSV
        • Qualys - Asset Search XML (requires QIDs 86000,86001,86002)
        • Nexpose - Simple XML, XML, XML v2
        • OpenVAS - XML
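    • For example, feeding RAWR a previous NMap XML scan might look like this (hypothetical filename; assumes is run from the RAWR directory):
      • ./ previous_nmap_scan.xml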

  • Using NMap
    • RAWR accepts valid NMap input strings (CIDR ranges, hostnames, etc.) as an argument; see the example after this list.
      • <file> can be used to feed it a line-delimited list of NMap ranges or hostnames.
    • Use -t <timing> and/or -s <source port>.
    • Use -p <port|all|fuzzdb> to specify port #(s), 'all' for 1-65535, or 'fuzzdb' to use the FuzzDB Common Ports list.
    • --udp will add -sU to the nmap command, which checks both TCP and UDP during the scan.
    • --ssl will call enum-ciphers.nse for more in-depth SSL data.
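    • A sample scan command, using a hypothetical target range:
      • ./ -p 80,443,8080 -t 4 --ssl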

  • Using a prior scan configuration
    • <RAWR .cfg file>
      • .cfg files containing that scan's settings are created for every run.
      • Any settings specified on the command line will override those in the .cfg file.
      • This allows you to run the same scan without rebuilding the command line each time (see the example below).
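    • Example re-run from a saved configuration (hypothetical filename); here --noss overrides whatever screenshot setting was saved in the .cfg:
      • ./ rawr_20140227.cfg --noss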


  • In [conf/], 'flist' defines the fields that will be in the CSV as well as the report.
    • The "DISABLED COLUMNS" section at the bottom lists interesting data points that are not shown by default.

  • --rd attempts to screenshot RDP and VNC interfaces during enumeration.

  • --dns will have RAWR query Bing for other hostnames and add them to the queue.
    • (Planned) If IP is non-routable, RAWR will request an AXFR using 'dig'
    • This is for external resources - non-routables are skipped.
    • Results are cached for the duration of the scan to prevent unneeded calls.

  • -o, -r, and -x make additional calls to grab HTTP OPTIONS, robots.txt, and crossdomain.xml, respectively

  • Try --downgrade to make requests with HTTP/1.0
    • It's often possible to glean more info from the 'chattier' protocol version.
    • Screenshots are still made via HTTP/1.1, so expect that when viewing the traffic.

  • --noss will omit the collection of screenshots
    • The HTML report still functions, but will show the '!' image for all hosts.

  • Proxy your requests with --proxy=<[username:password@]ip:port[+type] | file>.
    • Types are socks, http, basic, and digest.
    • File should contain proxy info on one line - [username:password@]ip:port[+type]
    • Add --proxy-auth to have RAWR prompt for creds at runtime.
    • Example - username:password@
    • This works well with Burp Suite, ZAP, or w3af.
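    • Example, proxying through a local Burp/ZAP listener (hypothetical address and port):
      • ./ --proxy=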

  • Crawl the site with --spider, noting files and docs in the log directory's 'maps' folder.
    • Defaults (set in [conf/]): follow subdomains, 3 links deep, 3-minute timeout, 300-URL limit.
    • If graphviz and python-graphviz are installed, it will create a PNG diagram of each site that is crawled.
    • Start small and make adjustments outward with respect to your scanning environment. Please use caution to avoid trouble. :)
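    • Example crawl of a single hypothetical host with the default crawl settings:
      • ./ --spider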

  • Use -S <1-5> to apply one of the crawl intensity presets. The default is 3.

  • --mirror is the same as --spider, but will also make a copy of each site during the crawl.

  • Use --spider-opts <opts> to define crawl settings on the fly.
    • 's' = 'follow subdomains', 'd' = depth, 't' = timeout, 'l' = URL limit
    • Not all are required, nor do they have to be in any particular order.
    • Example: --spider-opts s:false,d:2,l:500
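    • A full invocation combining a hypothetical target with the crawl options shown above:
      • ./ --spider --spider-opts s:false,d:2,l:500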

  • Also for spidering, --alt-domains <domains> will whitelist domains you want to follow during the crawl.
    • By default, it won't leave the originating domain.
    • Example: --alt-domains,,
    • --blacklist-urls <input list> will blacklist domains you don't want to crawl.
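    • Example whitelisting an additional (hypothetical) domain to follow during the crawl:
      • ./ --spider --alt-domains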

  • --useragent <string|file> sets a custom user-agent.
    • Uses the supplied string, or, if given a file, makes web calls with each user-agent string in the list.
    • The default is defined in
    • Warning: Supplying a list multiplies the number of web calls (and returned interfaces) by the number of user-agent strings!
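    • Example with a single custom user-agent string (the string itself is only illustrative):
      • ./ --useragent "Mozilla/5.0 (compatible; ExampleUA/1.0)"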


  • -a is used to include all open ports in the CSV output and Threat Matrix.

  • -m will create the Threat Matrix from provided input and exit (no web calls).

  • -d <folder> changes the log folder's location from the default "./"
    • Example: -d ./Desktop/RAWR_scans_20140227 will create that folder and use it as your log dir.

  • -q or --quiet mutes display of the dinosaur on run.
    • Still in disbelief that anyone would want this... made 2 switches for it, to show that I'm a good sport. :)

  • Compress the log folder when the scan is complete with -z.

  • --json and --json-min are the automation-friendly outputs from RAWR.
    • --json prints only JSON lines to STDOUT, while still creating all of the normal output files.
    • --json-min creates no output files, writing only JSON strings to STDOUT.
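    • One way to capture the JSON stream for later processing (hypothetical filename; assumes nothing else is written to STDOUT in this mode):
      • ./ --json-min > rawr_results.json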

  • Use --notify [email address] to send an email or SMS notification via sendmail when the scan is complete. (Requires configuration in conf/
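    • Example (hypothetical address; assumes sendmail and the notification settings in conf/ are already set up):
      • ./ --notify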

  • Use --parsertest if you're testing a custom parser. It parses input, displays the first 3 lines, and quits.

  • -v makes output verbose.

Report Customization

  • -e excludes the 'Default password suggestions' from your output.
    • This was suggested as an 'Executive' option.

  • Give your HTML report a custom logo and title with --logo=<file> and --title=<title>.
    • The image will be copied into the report folder.
    • Click 'printable' in the HTML report to view the custom header.
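    • Example with a hypothetical logo file and title:
      • ./ previous_nmap_scan.xml --logo=acme_logo.png --title="ACME External Web Assessment"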


  • -u runs the updater and prompts to download any support file that is older than the current version.
    • Files downloaded are defpass.csv and Ip2Country.tar.gz.
    • It also checks for phantomJS and will download it after prompting.

  • -U runs update and downloads the files mentioned above regardless of their version, without prompting.