== Requirements ==

* JDK 6+
* GlassFish 3.1.1
* PostgreSQL 9+

== Compiling ==

* Requirements:
 * Apache Ant 1.8+
 * Apache Ivy 2.2


1. Retrieve the project dependencies:
ant retrieve

2. There are additional dependencies that currently cannot be downloaded using Ivy and must be placed into the directory
'lib2/' manually.

 1. Download http://jdbc.postgresql.org/download/postgresql-9.1-902.jdbc4.jar
 2. Clone https://bitbucket.org/gyoergy/hpfeeds-java into another directory and compile it by running
 'mvn package'. Copy the resulting jar 'target/hpfeeds-java-0.1.0.jar' into 'lib2/'.

3. The file 'conf/default.conf' contains the default compile-time configuration. Copy it to e.g. 'conf/local.conf' and
customize the values. ('conf/local.conf' is in .gitignore.)

4. Finally, build the modules by running e.g. 'ant -Dconfig.file=conf/local.conf'. The modules are placed into target/modules.


== Modules ==

The current modules are:

hpfeedsra - resource adapter providing interfacing with hpfeeds for modules
hpfeeds - hpfeeds submission handler intended for the channels dionaea.capture and mwbinary.dionaea.sensorunique
virustotal - retrieves VirusTotal reports for binaries
shadowserver_asn - performs ASN lookup at Shadowserver for IPs
shadowserver_geoip - performs Geo-IP lookup at Shadowserver
stats - recurring process that maintains aggregate tables over the data set (runs every 2 seconds)

== Initial Setup ==

1. Create a directory structure similar to the following. E.g.:

mkdir /tmp/hbbackend/
mkdir /tmp/hbbackend/main/ # the main storage directory for binaries and corresponding files ('main.storagedir')
mkdir /tmp/hbbackend/geoip/ # for geo IP databases
mkdir /tmp/hbbackend/log/ # logs
mkdir /tmp/hbbackend/conf/ # for run time configuration files (logger configuration)
mkdir /tmp/hbbackend/xadisk/ # XADisk working directory
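The same layout can be created in one command; a sketch using the example base path from above (adjust BASE as needed):

```shell
# create the whole runtime directory layout at once
BASE=/tmp/hbbackend
mkdir -p "$BASE/main" "$BASE/geoip" "$BASE/log" "$BASE/conf" "$BASE/xadisk"
```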

2. Create two database users, 'hbbackend' and 'hbstats'. Create a database named 'hbbackend' owned by
'hbbackend'. For each user, create a schema that has the same name as the user and is owned by it, and instantiate
the schemas by executing the following scripts. Finally, grant 'hbstats' read access to the schema 'hbbackend'
and its tables.
E.g.:

# after schema hbbackend created
psql -U hbbackend < schema/hbbackend.sql
psql -U hbbackend < schema/hbbackend_functions.sql
# grant read access to hbstats
psql -U hbbackend -c "grant usage on schema hbbackend to hbstats;"
psql -U hbbackend -c "grant select on all tables in schema hbbackend to hbstats;"

# after schema hbstats created
psql -U hbstats -d hbbackend < schema/hbstats.sql
psql -U hbstats -d hbbackend < schema/hbstats_functions.sql
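The user, database, and schema creation assumed above ("after schema hbbackend created") is not shown; a minimal SQL sketch, assuming a local cluster, superuser access, and (as in step 7 below) passwords equal to the user names:

```sql
-- sketch: create the roles, the database, and the two schemas
CREATE ROLE hbbackend LOGIN PASSWORD 'hbbackend';
CREATE ROLE hbstats LOGIN PASSWORD 'hbstats';
CREATE DATABASE hbbackend OWNER hbbackend;
-- run the following while connected to database 'hbbackend':
CREATE SCHEMA hbbackend AUTHORIZATION hbbackend;
CREATE SCHEMA hbstats AUTHORIZATION hbstats;
```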

3. Download and uncompress the MaxMind GeoLite City database into the appropriate location. E.g.:

cd /tmp/hbbackend/geoip/
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
gunzip GeoLiteCity.dat.gz

4. Create the GlassFish domain. Throughout the following steps, the name 'hbbackend' and the port base 9900 will be
used. E.g.:

asadmin create-domain --portbase 9900 hbbackend

5. Copy the dependencies into the domain library directory. E.g.:

cp lib2/postgresql-9.1-902.jdbc4.jar \
   lib2/hpfeeds-java-0.1.0.jar \
   lib/xadisk-*.jar \
   lib/slf4j-api-1.6.*.jar \
   lib/logback-core-1.0.*.jar \
   lib/logback-classic-1.0.*.jar \
   lib/concurrentlinkedhashmap-lru-1.2.jar \
<path>/glassfish3/glassfish/domains/hbbackend/lib/

6. Copy the Logback configuration file conf/logback.xml to e.g. /tmp/hbbackend/conf/ and customize the property
'LOG_DIR' to point to the appropriate directory for the logs. E.g.:

cp conf/logback.xml /tmp/hbbackend/conf/

7. Start 'asadmin' with the appropriate port (e.g. 'asadmin --port 9948') and run the following commands to set up the
domain including JMS resources. All commands use the example values (paths) from above, so edit them as needed
before executing. The database passwords are assumed to be the same as the user names. The hpfeeds configuration
for hpfeedsra (IDENT, SECRET) needs to be filled in.

# start
start-domain hbbackend

## postgres
# connection pool
create-jdbc-connection-pool --datasourceclassname org.postgresql.xa.PGXADataSource --restype javax.sql.XADataSource --property user=hbbackend:password=hbbackend:databaseName=hbbackend:serverName=localhost:port=5432 PgPool_hbbackend
ping-connection-pool PgPool_hbbackend
# jdbc resource
create-jdbc-resource --connectionpoolid PgPool_hbbackend jdbc/hbbackend

# connection pool
create-jdbc-connection-pool --datasourceclassname org.postgresql.xa.PGXADataSource --restype javax.sql.XADataSource --property user=hbstats:password=hbstats:databaseName=hbbackend:serverName=localhost:port=5432 --steadypoolsize 1 --maxpoolsize 4 PgPool_hbstats
ping-connection-pool PgPool_hbstats
# jdbc resource
create-jdbc-resource --connectionpoolid PgPool_hbstats jdbc/hbstats

## thread pools
create-threadpool --minthreadpoolsize=5 --maxthreadpoolsize=50 xadisk-thread-pool
create-threadpool --minthreadpoolsize=16 --maxthreadpoolsize=16 hpfeedsra-thread-pool
# must restart domain for thread pools to become available
restart-domain hbbackend

## xadisk
create-resource-adapter-config --threadpoolid xadisk-thread-pool --property xaDiskHome=/tmp/hbbackend/xadisk:instanceId=hbbackend xadisk
deploy --name xadisk lib/xadisk-1.2.1.rar
create-connector-connection-pool --raname xadisk --connectiondefinition org.xadisk.connector.outbound.XADiskConnectionFactory --property instanceId=hbbackend --transactionsupport XATransaction xadisk/ConnectionFactory
ping-connection-pool xadisk/ConnectionFactory
create-connector-resource --poolname xadisk/ConnectionFactory xadisk/ConnectionFactory

## jms
# stomp bridge
set configs.config.server-config.jms-service.jms-host.default_JMS_host.property.imq\\.bridge\\.enabled=true
set configs.config.server-config.jms-service.jms-host.default_JMS_host.property.imq\\.bridge\\.activelist=stomp
set configs.config.server-config.jms-service.jms-host.default_JMS_host.property.imq\\.bridge\\.admin\\.user=admin
set configs.config.server-config.jms-service.jms-host.default_JMS_host.property.imq\\.bridge\\.admin\\.password=admin
set configs.config.server-config.jms-service.jms-host.default_JMS_host.property.imq\\.bridge\\.stomp\\.tcp\\.port=9972
# disable autocreate
set configs.config.server-config.jms-service.jms-host.default_JMS_host.property.imq\\.autocreate\\.queue=false
set configs.config.server-config.jms-service.jms-host.default_JMS_host.property.imq\\.autocreate\\.topic=false
# destinations
create-jmsdest --desttype topic new_attack
create-jmsdest --desttype topic new_binary
create-jmsdest --desttype topic new_binary_stored
create-jmsdest --desttype topic new_ip
create-jms-resource --restype javax.jms.Topic --property Name=new_attack jms/new_attack
create-jms-resource --restype javax.jms.Topic --property Name=new_binary jms/new_binary
create-jms-resource --restype javax.jms.Topic --property Name=new_binary_stored jms/new_binary_stored
create-jms-resource --restype javax.jms.Topic --property Name=new_ip jms/new_ip
# connection factory
create-jms-resource --restype javax.jms.ConnectionFactory jms/ConnectionFactory
create-jms-resource --restype javax.jms.ConnectionFactory --property ClientId=virustotal jms/DurableConsumer/virustotal
create-jms-resource --restype javax.jms.ConnectionFactory --property ClientId=shadowserver_asn jms/DurableConsumer/shadowserver_asn
create-jms-resource --restype javax.jms.ConnectionFactory --property ClientId=shadowserver_geoip jms/DurableConsumer/shadowserver_geoip
ping-connection-pool jms/ConnectionFactory

# logback config location
create-jvm-options -Dlogback.configurationFile=/tmp/hbbackend/conf/logback.xml

# monitoring
enable-monitoring --modules connector-connection-pool=HIGH:connector-service=HIGH:deployment=HIGH:ejb-container=HIGH:http-service=HIGH:jdbc-connection-pool=HIGH:jms-service=HIGH:jvm=HIGH:thread-pool=HIGH:transaction-service=HIGH:web-container=HIGH

# hpfeeds config
create-resource-adapter-config --threadpoolid hpfeedsra-thread-pool --property host=hpfeeds.honeycloud.net:port=10000:ident=IDENT:secret=SECRET:channels=dionaea.capture,mwbinary.dionaea.sensorunique org.honeynet.hbbackend.hpfeedsra

# finally, restart once again for a clean start
restart-domain hbbackend


== Deploying Modules ==

Generally, the order of deployment should be the opposite of the flow of the system: consumers should be deployed before
producers to avoid losing messages. (For durable consumers, this only matters at the first deployment, but following
the rule is always a safe bet.) The exception is hpfeedsra: the hpfeeds worker modules connect to it, so it must
already be deployed when they are.

E.g.:

deploy target/modules/org.honeynet.hbbackend.hpfeedsra.rar
deploy target/modules/org.honeynet.hbbackend.hpfeeds.jar
deploy target/modules/org.honeynet.hbbackend.virustotal.jar
deploy target/modules/org.honeynet.hbbackend.shadowserver_asn.jar

NOTE: currently the code is scaled back, so there is only hpfeeds submission handling and none of the consumer
modules mentioned above.


== Undeploying Modules ==

The order should be the opposite of the deployment order, with the exception that the hpfeeds worker modules are
consumers of hpfeedsra, which means it is safest to undeploy hpfeedsra first.

E.g.:

undeploy org.honeynet.hbbackend.shadowserver_asn
undeploy org.honeynet.hbbackend.virustotal
undeploy org.honeynet.hbbackend.hpfeedsra
undeploy org.honeynet.hbbackend.hpfeeds

NOTE: currently, hpfeedsra must always be undeployed before hpfeeds, as disconnecting hpfeeds worker modules gracefully is
work in progress.
