Sunday, March 6, 2016

Hunting Exploit Kits Abusing Domain Generation Algorithms

Exploit Kits (EK) are nothing new. These sets of malicious tools are being investigated intensively by many security researchers at the moment, because threat actors are using them massively. Luis Rocha is one of those researchers, and he has written about them on his blog. In one post, Luis explained how a specific EK, Neutrino, works:

  • "User browses to the compromised web server.
  • Web server contacts the backend infrastructure in order to perform various checks and to generate malicious JavaScript code. These checks include things like verification of the victim's IP address and its geolocation. Furthermore, within the malicious JavaScript code there are new domain names and URLs that are generated dynamically by the backend.
  • The browser processes and decodes the malicious JS. In the observed infection the malicious JavaScript checks the browser version and, if it matches the desired version, stores a cookie and processes an HTML iframe tag.
  • The iframe tag triggers the browser to perform a request to another URL which is the Neutrino Exploit Kit landing page.
  • The landing page is hosted on a randomly generated host using a DGA, which needs to be resolved via DNS. The authoritative name servers answering for these domains are owned by the threat actor. The answers received from the DNS server have a time to live (TTL) of a few seconds. The domains are registered on freely available country code top level domains (ccTLD).
  • The victim then lands on the exploit kit landing page, which in turn delivers a small HTML page with an object tag defined in its body. This object tag directs the browser to load Adobe Flash Player and then use it to play the SWF file specified in the URL. In case the victim does not have Adobe Flash Player installed, the browser is instructed to download it.
  • The browser as instructed by the object tag, downloads the malicious Flash file.
  • The obfuscated and encrypted SWF file is played by the Flash Player and exploits are triggered based on available vulnerabilities. The Flash file contains exploits for CVE-2013-2551, CVE-2014-6332, CVE-2015-2419 affecting Internet Explorer and CVE-2014-0569, CVE-2015-7645 affecting Adobe Flash.
  • If the exploitation is successful, shellcode is executed and the malware is downloaded and launched. In this case we observed that the malware delivered has been CryptoWall.
The threat actors behind Neutrino are finding vulnerable websites in order to host their malicious JS content globally in a repeatable and automated fashion. Furthermore, in the last few days Neutrino has been abusing the registration of free domains inside country code top level domains (ccTLD) such as .top, .pw, .xyz, .ml, .space and others. The different landing pages have been pointing to a server hosted in Germany and, in other cases, in the Netherlands. In another blog post I will go into more detail about it."
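To illustrate the DGA step described above, here is a toy sketch in Python. It is purely illustrative (Neutrino's actual algorithm is not reproduced here): it only shows how a seed such as the current date can deterministically yield a batch of short-lived throwaway domains on cheap ccTLD-style suffixes.

```python
import hashlib

def toy_dga(seed, count=3, tlds=(".top", ".pw", ".xyz")):
    """Derive pseudo-random domain names from a seed (e.g. the current date).

    Both the malware and the actor's backend can run the same function with
    the same seed and agree on the domains without any communication.
    """
    domains = []
    for i in range(count):
        # Hash the seed plus a counter, keep only the letters for a hostname.
        digest = hashlib.md5("{}-{}".format(seed, i).encode()).hexdigest()
        name = "".join(c for c in digest if c.isalpha())[:12]
        domains.append(name + tlds[i % len(tlds)])
    return domains
```

Because the function is deterministic, `toy_dga("2016-03-06")` always returns the same list, while a new day's seed produces a fresh batch, which matches the short-lived registration pattern described in the quote.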

The detection of EKs is a challenge by nature. For example, a few days ago on the Talos blog they explained the set of changes they had detected in the latest version of the Angler EK, affecting the URI used by the landing page.

Time to live of malicious domains

Luis Rocha mentioned in his blog that the landing page is hosted on a randomly generated host using a DGA. These domains are registered on freely available country code top level domains (ccTLD).

Basically, the lifecycle of domains used for malicious purposes is usually quite short. A domain is registered to be used during a short timeframe for a specific campaign, until it is detected as malicious and is cancelled and/or included in a blacklist. Then another domain is created, following the same cycle again.

This information can be used to hunt in our logs: any recently created domain is worth investigating. Obviously this is not a silver bullet, as some EKs may not use a recently created domain as a landing page, or may use an IP address instead. Moreover, some legitimate domains may have been created recently, which will generate false positives. But in many situations this approach will help to catch EKs or malware using a DGA.
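The hunting idea above reduces to a simple age check. A minimal sketch, assuming we already have a mapping of domain to whois creation date (the reference date and 30-day threshold are arbitrary choices for illustration):

```python
from datetime import date

def recently_registered(creation_dates, reference=date(2016, 3, 6), max_age_days=30):
    """Return the domains whose whois creation date falls within
    max_age_days of the reference date (here, the date of this analysis)."""
    return sorted(d for d, created in creation_dates.items()
                  if (reference - created).days <= max_age_days)
```

For example, with `{"example.com": date(1995, 8, 14), "evil.top": date(2016, 2, 25)}` (hypothetical entries), only `evil.top` would be flagged for further review.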

Searching domains with Splunk

In an enterprise environment with a Splunk setup, the logs might have different formats and fields depending on the technology used to gather them (proxy, network tap, HTTP server, etc.). In this analysis, the logs are dumped from network traffic (pcap files) directly into Splunk after being processed by 'tshark'. Tshark extracts the fields relevant for the analysis: frame_time, http_host, http_request_method, http_request_uri, http_response_code, http_response_phrase, http_user_agent, http_server, ip_dst, ip_src
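The extraction step can be sketched with a tshark invocation along these lines (the capture filename, output path, and CSV formatting options are assumptions to adapt to your environment):

```shell
# Hypothetical example: dump the HTTP fields listed above to CSV for Splunk.
tshark -r capture.pcap -Y http \
  -T fields -E header=y -E separator=, -E quote=d \
  -e frame.time -e http.host -e http.request.method -e http.request.uri \
  -e http.response.code -e http.response.phrase -e http.user_agent \
  -e http.server -e ip.dst -e ip.src > http_traffic.csv
```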

In the traffic I have included pcap from EK obtained from

The Splunk screenshot below shows an example of traffic already processed and imported.

As mentioned previously, the fields will differ in each setup, depending on the information stored. However, the same principle and analysis apply; only the fields to extract will be different.

The first thing to do is to search the HTTP requests in order to extract the domain of the requested URL. Bear in mind that a URL host can be composed of subdomain(s) plus domain. The subdomain.domain part of the URL is stored in a field named "http_host", as can be seen in the screenshot above; hence, to extract only the first-level domain, I can use 'sed'. Also, I renamed the http_host field to dns.

The final query is as follows:

index=networktraffic GET OR POST |rename http_host as dns| rex field=dns mode=sed "s/.*\.(.*\..*)/\1/g" | table dns | sort dns | uniq

The output is a table with all the first-level domains.
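The sed expression in the query can be reproduced in Python to check what it actually keeps (the function name and sample hosts are for illustration only):

```python
import re

# Same pattern as the sed expression used in the Splunk query above:
# greedily strip everything up to the last dot that still leaves a
# "name.tld" pair behind.
FIRST_LEVEL = re.compile(r".*\.(.*\..*)")

def first_level_domain(host):
    """Strip any subdomains, keeping only the trailing name.tld pair.
    Hosts that are already first-level pass through unchanged."""
    match = FIRST_LEVEL.match(host)
    return match.group(1) if match else host
```

Note one limitation inherited from the sed expression: for multi-label suffixes such as example.co.uk, it would keep only "co.uk", so those cases need manual review.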

Whois based on DNS: creating my own whois search

One of the things I miss in Splunk is the ability to perform whois searches based on domains, the same way there are apps to perform searches based on IPs. Maybe it does exist, but I did not find it, so I created my own domain search.

To create a custom search command in Splunk, you need two things: 1) declare the new search command, 2) implement the script.

The new search command is declared in '/opt/splunk/etc/system/local/commands.conf'. For this analysis I created the content below:

[whoisdns] # name of the search command
FILENAME = # Python script that performs the search
supports_rawargs = true
required_fields = dns # field used for the search
streaming = true

The script must be placed in /opt/splunk/etc/searchscripts/ under the filename declared above. The script I have created runs the 'whois' command in order to extract two main items: the domain name and the domain's creation date.


import os
import splunk.Intersplunk

# Read the events passed in by Splunk (each event carries a 'dns' field).
results, unused1, settings = splunk.Intersplunk.getOrganizedResults()

# Start a fresh output file with the header row expected by Splunk.
os.system('rm -f /tmp/file.txt')
os.system('echo resolution > /tmp/file.txt')

for r in results:
    v = r['dns']
    # Run whois and keep only the lines carrying the domain name and
    # creation date, collapsed onto a single line.
    cmdstring = 'whois "%s" | egrep -e "Domain Name:|Creation Date:|Domain Registration Date:|Created On|Domain Create Date" | tr "\n" " " >> /tmp/file.txt' % (v)
    os.system('echo >> /tmp/file.txt')
    os.system(cmdstring)

# Emit the collected results back to Splunk.
os.system('cat /tmp/file.txt')

Checking DNS domains with 'whoisdns'

Now I can pipe all the first-level domains obtained in the previous query into the whoisdns search command.

Finally, I format the output and create a table with two columns: the domain name (domain) and the creation date (creationdate). This is the Splunk query executed:

index=networktraffic GET OR POST |rename http_host as dns| rex field=dns mode=sed "s/.*\.(.*\..*)/\1/g" | table dns | sort dns| uniq | whoisdns | rex field=resolution  ".*:\s*(?<domain>\w*\.\w*\s+)[\w|\s]+:\s*(?<creationdate>.*)" | table domain,creationdate | dedup domain | sort domain | table domain,creationdate
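The rex extraction in this query can be checked in Python. Splunk's (?&lt;name&gt;...) capture syntax becomes (?P&lt;name&gt;...) in Python; the '|' inside [\w|\s] is a literal pipe in both engines and is kept here for fidelity. The sample whois line is a hypothetical one in the shape the whoisdns script produces:

```python
import re

# Python translation of the Splunk rex applied to the 'resolution' field.
REX = re.compile(r".*:\s*(?P<domain>\w*\.\w*\s+)[\w|\s]+:\s*(?P<creationdate>.*)")

# Hypothetical collapsed whois output line for one domain.
line = "Domain Name: evil.top Creation Date: 2016-02-25T10:00:00Z"
m = REX.match(line)
```

With this sample line, the match yields "evil.top" (plus trailing whitespace, hence the table field is worth trimming) in the domain group and the timestamp in creationdate.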

And the results (138 items):

To be able to use the data obtained in other searches, I keep it all in a lookup table. This way, I can match any search against the data in this table.

The command executed is as follows:

index=networktraffic GET OR POST |rename http_host as dns| rex field=dns mode=sed "s/.*\.(.*\..*)/\1/g" | table dns | sort dns| uniq | whoisdns | rex field=resolution  ".*:\s*(?<domain>\w*\.\w*\s+)[\w|\s]+:\s*(?<creationdate>.*)" | table domain,creationdate | dedup domain | sort domain | table domain,creationdate | outputlookup alldomains.csv

Searching the domains recently created

The file "alldomains.csv" contains the full list of domains with their registration dates.
I can now search for the ones created in 2016, which are the ones I am interested in. The Splunk query to search a lookup table is very simple:

|inputlookup alldomains.csv | search creationdate=*2016*

From the output, I see there are domains created a few days ago. Some of these domains have very similar names.

Once I have the list of suspicious domains, I can start checking the traffic generated towards those domains with Splunk.  

In the screenshot above I see in some of the URIs the pattern "search/?keyword=", which matches the Angler EK landing page as described in the Talos blog.

Happy hunting!