
The Fundamentals of Sharing for Malware Analysts

by Alexander Hanel



In most organizations, malware analysts are tasked with producing a deliverable derived from static or dynamic analysis. The deliverable could be extracted indicators, an explanation of functionality, a report or something similar. During this process the analyst will create a number of files and artifacts. These could be IDBs, memory dumps, YARA signatures, decoder scripts, pcaps, notes, etc. Once the task has been completed the analyst submits the deliverable and moves on. In many organizations these files and artifacts are not stored in a way that is accessible to others, which is a shame. Making the data and analysis accessible to others has many benefits.

  1. Promotes sharing of processes and knowledge between analysts.
  2. Removes duplication of labor by allowing analysts to build on previous research and analysis.
  3. Intellectual property and artifacts are not lost when an analyst leaves the organization.
  4. Collaboration is not dependent on email or instant message.

The key items needed for sharing are the following:

  • Storage
  • Documentation
  • Processes
  • Historical Data

Storage

The storage can be a Linux server accessible via an SSH client. Having it segregated from the internet but accessible from the research and corporate environments is preferable. The system's primary purpose is storage and command-line analysis. This will vary for each organization, but some recommended folders to start off with could be tickets, cve and families.

$ ls
cve/  families/  tickets/
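Assuming the layout above, the top-level folders can be created in one command (the paths are illustrative):

```shell
# Create the three recommended top-level folders in the shared
# analysis root. Run from wherever the storage root lives.
mkdir -p tickets cve families
ls
```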

The tickets folder will contain sub-folders named after the ticket number from your documentation (I'll cover this in a minute).

$ ls
1/  2/  3/  4/  5/  6/

Each folder will contain the analysis artifacts such as binaries, pcaps, etc. If multiple people are working on the same ticket, then creating a folder with each analyst's initials as the name within the working directory will work. The families folder should be organized by family name.

$ ls
cerber/  dridex/  locky/  zeus-kins/

Sub-folders can be named by date or be soft links into the tickets folder. I have found using the ISO format of YearMonthDay (example: 20160911) to be the best option. It allows for easy sorting by folder name rather than by creation date. The cve folder should be organized by CVE number.

$ ls
cve-2014-1761/  cve-2016-0010/  cve-2016-4117/
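The dated sub-folder convention can be sketched like this, reusing the locky family name from the earlier listing (the path itself is illustrative):

```shell
# Create a dated (YYYYMMDD) sub-folder under a family folder.
# date +%Y%m%d emits today's date in a form that sorts by name.
mkdir -p "families/locky/$(date +%Y%m%d)"
ls families/locky
```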

The folders should store everything from an analysis. Unless needed, full memory dumps might be overkill; I prefer working with process dumps. I like having them around in case I need to search the system for a binary containing a certain keyword or matching a YARA signature.
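As a sketch of that kind of sweep, grep can search every stored artifact for a keyword; the keyword, dump file and paths below are hypothetical examples (a recursive YARA scan over the same tree would serve the same purpose):

```shell
# Set up a hypothetical stored artifact so the example is self-contained.
mkdir -p families/demo
printf 'config: evil.example.com\n' > families/demo/dump.bin

# List every file under families/ that contains the keyword.
grep -rl "evil.example.com" families/
```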

Documentation

When it comes to documentation there are typically two types of pages. The most common is the child page. This is typically an analysis of an individual sample or of artifacts related to an incident. These pages should be template-based, containing the needed indicators (hashes, IPs, URLs, family, etc.), analysis notes (unpacking, decryption of strings, third-party analysis, etc.) and a classification. Child tickets are automatically created and have a unique ID that can be used as the folder name for storage. The parent pages are reserved for malware families, threat groups, etc. Child pages are linked to parent pages. This simple parent-child relationship between tickets is extremely valuable: it allows tracking campaigns or malware families across multiple incidents or tickets.
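As a rough sketch of such a child-page template, the snippet below writes one out with the indicator and note fields mentioned above (the file name and field names are illustrative, not a prescribed schema):

```shell
# Write a minimal child-page template. The sections mirror the
# indicators, analysis notes and classification described above.
cat > template.md <<'EOF'
# Sample Analysis - <ticket id>

## Indicators
- Hashes:
- IPs:
- URLs:
- Family:

## Analysis Notes
- Unpacking:
- String decryption:
- Third-party analysis:

## Classification
EOF
cat template.md
```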

The requirements for the documentation editor are that it needs to be web based, accessible from the corporate and research networks, searchable, not SharePoint, able to express parent-child relationships, role based, and able to track edits. I have yet to see a perfect solution that wasn't custom. I have seen Git+Markdown, Google Pages, wikis, RT and Confluence used. There are third-party applications that advertise themselves as intel platforms, but I would not recommend them due to leakage concerns. Note: do not store binary data in the documentation.

Processes

Documentation isn't sexy. Plain and simple. Most people do not want to save their files to a server and log their analysis. This is where the process part comes into play. There must be a defined process that sets the expectations for documentation. Everyone on the team should be expected to log their analysis, and everyone on the team should be able to read it. A nice side effect is that this process promotes a culture of sharing and learning from others. If multiple teams (such as incident response, the SOC or intel) are all doing malware analysis, then they should all be following the same process and logging to the same documentation. If there is a concern about data leakage due to junior analysts, then pages can be marked as do-not-distribute or blocked using roles. Or fire the people you can't trust.

Historical Data

Having access to historical data on attacks is one of the most powerful tools when doing malware analysis. Successful security companies and defenders keep historical data. They analyze current attacks and cross-reference them with historical data. This allows them to see trends, catch missed detections, track TTPs (Tactics, Techniques, and Procedures) and predict what could happen next. Historical data could be all malicious emails, automated malware analysis reports (LastLine for malware, Cuckoo for exploit kits), full packet captures, MISP, etc.
