Friday, December 30, 2016

Updating Device Guard Code Integrity Policies

In previous posts about Device Guard, I spent a lot of time talking about initial code integrity (CI) configurations and bypasses. What I haven't covered until now, however, is an extremely important topic: how does one effectively install software and update CI policies accordingly? In this post, I will walk you through how I got Chrome installed on my Surface Book running with an enforced Device Guard code integrity policy.

The first questions I posed to myself were:
  1. Should I place my system into audit mode, install the software, and base an updated policy on CodeIntegrity event log entries?
  2. Or should I install the software on a separate, non-Device Guard-protected system, analyze the file footprint, develop a policy based on the installed files, deploy, and test?
My preference is option #2, as I would prefer not to place a system back into audit mode if I can avoid it. That said, audit mode would yield the most accurate results, as it would tell you exactly which binaries would have been blocked - precisely the binaries you would want to base whitelist rules on. In this case, there's no right or wrong answer. My decision to go with option #2 was to base my rules solely on binaries that execute post-installation, not during installation. My mantra with whitelisting is to be as restrictive as is reasonable.

So how did I go about beginning to enumerate the file footprint of Chrome?
  1. I opened Chrome, ran it as I usually would, and used PowerShell to enumerate loaded modules.
  2. I also happened to know that the Google updater runs as a scheduled task so I wanted to obtain the binaries executed via scheduled tasks as well.
I executed the following to get a rough sense of where Chrome files were installed:

(Get-Process -Name *Chrome*).Modules.FileName | Sort-Object -Unique
(Get-ScheduledTask -TaskName *Google*).Actions.Execute | Sort-Object -Unique

To my surprise and satisfaction, Google manages to house nearly all of its binaries in C:\Program Files (x86)\Google. This provides a great starting point for building Chrome whitelist rules.

Next, I had to ask myself the following:
  1. Am I okay with whitelisting anything signed by Google?
  2. Do I only want to whitelist Chrome? i.e. all Chrome-related EXEs and all DLLs they rely upon.
  3. I will probably want Chrome to be able to update itself without Device Guard getting in the way, right?
While I like the idea of whitelisting just Chrome, there are some potential pitfalls. By whitelisting just Chrome, I would need to be aware of every EXE and DLL that Chrome requires to function. I could certainly do that, but it would be a relatively work-intensive effort. With that list, I would then create whitelist rules using the FilePublisher file rule level. This would be great initially, and it would potentially be the most restrictive strategy while still allowing Chrome to update itself. The issue is: what happens when Google decides to include one or more additional DLLs in the software installation? Device Guard will block them and I will be forced to update my policy yet again. I'm all about applying a paranoid mindset to my policy, but at the end of the day, I need to get work done beyond constantly updating CI policies.

So the whitelisting strategy I chose in this instance is to allow code signed by Google and to allow Chrome to update itself. This strategy equates to using the "Publisher" file rule level - "a combination of the PcaCertificate level (typically one certificate below the root) and the common name (CN) of the leaf certificate. This rule level allows organizations to trust a certificate from a major CA (such as Symantec), but only if the leaf certificate is from a specific company (such as Intel, for device drivers)."

I like the "Publisher" file rule level because it offers the most flexibility, longevity for a specific vendor's code signing certificate. If you look at the certificate chain for chrome.exe, you will see that the issuing PCA (i.e. the issuer above the leaf certificate) is Symantec. Obviously, we wouldn't want to whitelist all code signed by certs issued by Symantec but I'm okay allowing code signed by Google who received their certificate from Symantec.

Certificate chain for chrome.exe
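
If you want to inspect that chain yourself from PowerShell, here is a minimal sketch (the chrome.exe path assumes a default installation location):

$ChromePath = 'C:\Program Files (x86)\Google\Chrome\Application\chrome.exe'
$Signature = Get-AuthenticodeSignature -FilePath $ChromePath
$Chain = New-Object Security.Cryptography.X509Certificates.X509Chain
$null = $Chain.Build($Signature.SignerCertificate)
# Elements are ordered leaf to root; the certificate one below the root is the issuing PCA
# that the PcaCertificate/Publisher levels key off of.
$Chain.ChainElements.Certificate | Select-Object Subject, Issuer
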
So now I'm ready to create the first draft of my code integrity rules for Chrome.

I always start by creating a FilePublisher rule set for the binaries I want to whitelist because it lets me see which binaries are tied to which certificates.

$GooglePEs = Get-SystemDriver -ScanPath 'C:\Program Files (x86)\Google' -UserPEs
New-CIPolicy -FilePath Google_FilePub.xml -DriverFiles $GooglePEs -Level FilePublisher -UserPEs

What resulted was the following ruleset. Everything looked fine except for a single Microsoft rule that was generated for d3dcompiler_47.dll. I looked in my master rule policy and I already had this rule. Being obsessive compulsive, I wanted a pristine ruleset containing only Google rules. This is good practice anyway once you get into the habit of managing large whitelist rulesets: keep separate policy XMLs for each whitelisting scenario you run into and then merge accordingly. After removing the MS binary from the list, what resulted was a much cleaner ruleset (Publisher level applied this time) consisting of only two signer rules.

$OnlyGooglePEs = $GooglePEs | ? { -not $_.FriendlyName.EndsWith('d3dcompiler_47.dll') }
New-CIPolicy -FilePath Google_Publisher.xml -DriverFiles $OnlyGooglePEs -Level Publisher -UserPEs

So now, all I should need to do is merge the new rules into my master ruleset, redeploy, reboot, and if all works well, Chrome should install and execute without issue.

$MasterRuleXml = 'FinalPolicy.xml'
$ChromeRules = New-CIPolicyRule -DriverFiles $OnlyGooglePEs -Level Publisher
Merge-CIPolicy -OutputFilePath FinalPolicy_Merged.xml -PolicyPaths $MasterRuleXml -Rules $ChromeRules
ConvertFrom-CIPolicy -XmlFilePath .\FinalPolicy_Merged.xml -BinaryFilePath SIPolicy.p7b
# Finally, on the Device Guard system, replace the existing
# SIPolicy.p7b with the one that was just generated and reboot.
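
On the Device Guard-protected system itself, that last step might look something like the following (run from an elevated prompt; the destination is the standard code integrity policy location):

Copy-Item -Path .\SIPolicy.p7b -Destination C:\Windows\System32\CodeIntegrity\SIPolicy.p7b -Force
Restart-Computer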

One thing I neglected to account for was the initial Chrome installer binary. I could have incorporated that binary into this process, but I wanted to try my luck and see whether Google used the same certificates to sign the installer. Luckily, they did, and everything installed and executed perfectly. I would consider myself fortunate in this case because I selected a software publisher (Google) that employs decent code signing practices.
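
Had I wanted to verify rather than gamble, a quick signer check on the downloaded installer (the file name below is a placeholder for wherever the installer landed) would have confirmed whether its certificate matched the Publisher rules already in the policy:

Get-AuthenticodeSignature -FilePath .\ChromeSetup.exe |
    Select-Object -ExpandProperty SignerCertificate |
    Select-Object Subject, Issuer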

Conclusion

In future blog posts, I will document my experiences deploying software that doesn't adhere to proper signing practices or doesn't even sign their code. Hopefully, the Google Chrome case study will, at a minimum, ease you into the process of updating code integrity policies for new software deployments.

The bottom line is that this isn't an easy process. Are there ways in which Microsoft could improve the code integrity policy generation/update/deployment/auditing experience? Absolutely! Even if they did though, the responsibility ultimately lies with you to make informed decisions about what software you trust and how you choose to enforce that trust!

Monday, November 28, 2016

Code Integrity on Nano Server: Tips/Gotchas

Although it's not explicitly called out as being supported in Microsoft documentation, it turns out that you can deploy a code integrity policy to Nano Server, enabling enforcement of user and kernel-mode code integrity. It is refreshing that code integrity is now supported across all modern Windows operating systems (Win 10 Enterprise, Win 10 IoT, and Server 2016 including Nano Server), even though Microsoft doesn't make that fact well known. Now, while it is possible to enforce code integrity on Nano Server, you should be aware of some caveats, which I intend to enumerate in this post.

Code Integrity != Device Guard

Do note that until now, there has been no mention of Device Guard. This was intentional. Nano Server does not support Device Guard - only code integrity (CI), a subset of the supported Device Guard features. So what's the difference you ask?

  • There are no ConfigCI cmdlets. These cmdlets are what allow you to build code integrity policies. I'm not going to speculate about the rationale for not including them in Nano Server, but I doubt you will ever see them. In order to build a policy, you will need to do so from a system that does have the ConfigCI cmdlets.
  • Because there are no ConfigCI cmdlets, you cannot use the -Audit parameter of Get-SystemDriver and New-CIPolicy to build a policy based on blocked binaries in the Microsoft-Windows-CodeIntegrity/Operational event log. If you want to do this (an extremely realistic scenario), you have to get comfortable pulling out and parsing blocked binary paths yourself using Get-WinEvent. When calling Get-WinEvent, you'll want to do so from an interactive PSSession rather than calling it from Invoke-Command. By default, event log properties don't get serialized and you need to access the properties to pull out file paths.
  • In order to scan files and parse Authenticode and catalog signatures, you will need to either copy the target files from a PSSession (i.e. Copy-Item -FromSession) or mount Nano Server partitions as a file share. You will need to do the same thing with the CatRoot directory - C:\Windows\System32\CatRoot. Fortunately, Get-SystemDriver and New-CIPolicy support explicit paths via the -ScanPath and -PathToCatroot parameters (see the sketch after this list). It may not be obvious, but you have to build your rules from the Nano Server catalog signers, not those of some other system, because another system is unlikely to contain catalog hashes for the binaries present on Nano Server.
  • There is no Device Guard WMI provider (ROOT\Microsoft\Windows\DeviceGuard). Without this WMI class, it is difficult to audit code integrity enforcement status at scale remotely.
  • There is no Microsoft-Windows-DeviceGuard/Operational event log so there is no log to indicate when a new CI policy was deployed. This event log is useful for alerting a defender to code integrity policy and virtualization-based security (VBS) configuration tampering.
  • Since Nano Server does not have Group Policy, there is no way to configure a centralized CI policy path, VBS settings, or Credential Guard settings. I still need to dig in further to see if any of these features are even supported in Nano Server. For example, I would really want Nano Server to support UEFI CI policy protection.
  • PowerShell is not placed into constrained language mode even with user-mode code integrity (UMCI) enforcement enabled. Despite PowerShell running on .NET Core, you still have a rich reflection API to interface with Win32 - i.e. gain arbitrary unsigned code execution. With PowerShell not in constrained language mode (it's in FullLanguage mode), this means that signature validation won't be enforced on your scripts. I tried turning on constrained language mode by setting the __PSLockdownPolicy system environment variable, but PowerShell Core doesn't seem to acknowledge it. Signature enforcement of scripts/modules in PowerShell is independent of Just Enough Administration (JEA) but you should also definitely consider using JEA in Nano Server to enforce locked down remote session configurations.
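
Here is a rough sketch of that remote scan workflow (the computer name, staging paths, and the decision to pull all of System32 are assumptions; scope the copy to whatever you actually need to scan):

$NanoSession = New-PSSession -ComputerName NanoServer -Credential (Get-Credential)
# Stage the Nano Server binaries and its catalog store on the local, full-featured system:
New-Item -ItemType Directory -Path C:\NanoScan -Force | Out-Null
Copy-Item -FromSession $NanoSession -Path C:\Windows\System32\CatRoot -Destination C:\NanoScan\CatRoot -Recurse
Copy-Item -FromSession $NanoSession -Path C:\Windows\System32 -Destination C:\NanoScan\System32 -Recurse
# Scan locally but resolve signatures against the Nano Server catalogs:
$NanoPEs = Get-SystemDriver -ScanPath C:\NanoScan\System32 -PathToCatroot C:\NanoScan\CatRoot -UserPEs
New-CIPolicy -FilePath NanoServer.xml -DriverFiles $NanoPEs -Level FilePublisher -UserPEs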

Well then what is supported on Nano Server? Not all is lost. You still get the following:

  • The Microsoft-Windows-CodeIntegrity/Operational event log, so you can view which binaries were blocked per your code integrity policy.
  • You still deploy SIPolicy.p7b to C:\Windows\System32\CodeIntegrity. When SIPolicy.p7b is present in that directory, Nano Server will begin enforcing the rules after a reboot.
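
Since Nano Server is headless, the simplest deployment is probably a copy over a PSSession (the computer name below is a placeholder):

$NanoSession = New-PSSession -ComputerName NanoServer -Credential (Get-Credential)
Copy-Item -Path .\SIPolicy.p7b -ToSession $NanoSession -Destination C:\Windows\System32\CodeIntegrity\SIPolicy.p7b
Invoke-Command -Session $NanoSession -ScriptBlock { Restart-Computer -Force }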

Configuration/deployment/debugging tips/tricks

I wanted to share with you the way in which I dealt with some of the headaches involved in configuring, deploying, and debugging issues associated with code integrity on Nano Server.

Event log parsing

Since you don't get the -Audit parameter in the Get-SystemDriver and New-CIPolicy cmdlets, if you choose to base your policy off audit logs, you will need to pull out blocked binary paths yourself. When in audit mode, binaries that would have been blocked generate EID 3076 events. The path of the binary is populated via the second event parameter. The paths need to be normalized and converted to a proper file path from the raw device path. Here is some sample code that I used to obtain the paths of blocked binaries from the event log:

$BlockedBinaries = Get-WinEvent -LogName 'Microsoft-Windows-CodeIntegrity/Operational' -FilterXPath '*[System[EventID=3076]]' | ForEach-Object {
    $UnnormalizedPath = $_.Properties[1].Value.ToLower()

    $NormalizedPath = $UnnormalizedPath

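    # Note: the \device\harddiskvolume3 prefix below reflects this particular system's
    # volume layout; check your own volume-to-drive-letter mapping before reusing this.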
    if ($UnnormalizedPath.StartsWith('\device\harddiskvolume3')) {
        $NormalizedPath = $UnnormalizedPath.Replace('\device\harddiskvolume3', 'C:')
    } elseif ($UnnormalizedPath.StartsWith('system32')) {
        $NormalizedPath = $UnnormalizedPath.Replace('system32', 'C:\windows\system32')
    }

    $NormalizedPath
} | Sort-Object -Unique

Working through boot failures

There were times when the system wouldn't boot because my kernel-mode rules were too strict in enforcement mode. For example, when I neglected to add hal.dll to the whitelist, obviously, the OS wouldn't boot. While I worked through these problems, I would boot into the advanced boot options menu (by pressing F8) and disable driver signature enforcement for that session. This was an easy workaround to gain access to the system without having to boot from external WinPE media to redeploy a better, bootable CI policy. Note that the advanced boot menu is only made available to you if the "Enabled:Advanced Boot Options Menu" policy rule option is present in your CI policy. Obviously, disabling driver signature enforcement is a way to completely circumvent kernel-mode code integrity enforcement.
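
For reference, that rule option has to be present in the policy XML before you convert and deploy it. A one-liner sketch (the policy file name is a placeholder; to my knowledge, option index 9 maps to this rule in Set-RuleOption):

Set-RuleOption -FilePath .\NanoPolicy.xml -Option 9   # Enabled:Advanced Boot Options Menu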

Completed code integrity policy

After going through many of the phases of an initial deny-all approach as described in my previous post on code integrity policy development, this is the relatively locked-down CI policy that I got working on my Nano Server bare metal install (an Intel NUC):

<?xml version="1.0" encoding="utf-8"?>
<SiPolicy xmlns="urn:schemas-microsoft-com:sipolicy">
  <VersionEx>1.0.0.0</VersionEx>
  <PolicyTypeID>{A244370E-44C9-4C06-B551-F6016E563076}</PolicyTypeID>
  <PlatformID>{2E07F7E4-194C-4D20-B7C9-6F44A6C5A234}</PlatformID>
  <Rules>
    <Rule>
      <Option>Enabled:Unsigned System Integrity Policy</Option>
    </Rule>
    <Rule>
      <Option>Enabled:Advanced Boot Options Menu</Option>
    </Rule>
    <Rule>
      <Option>Enabled:UMCI</Option>
    </Rule>
    <Rule>
      <Option>Disabled:Flight Signing</Option>
    </Rule>
  </Rules>
  <!--EKUS-->
  <EKUs />
  <!--File Rules-->
  <FileRules>
    <!--This is the only non-OEM, 3rd party driver I needed for my Intel NUC-->
    <!--I was very specific with this driver rule but flexible with all other MS drivers.-->
    <FileAttrib ID="ID_FILEATTRIB_F_1" FriendlyName="e1d64x64.sys FileAttribute" FileName="e1d64x64.sys" MinimumFileVersion="12.15.22.3" />
  </FileRules>
  <!--Signers-->
  <Signers>
    <Signer ID="ID_SIGNER_F_1" Name="Intel External Basic Policy CA">
      <CertRoot Type="TBS" Value="53B052BA209C525233293274854B264BC0F68B73" />
      <CertPublisher Value="Intel(R) INTELNPG1" />
      <FileAttribRef RuleID="ID_FILEATTRIB_F_1" />
    </Signer>
    <Signer ID="ID_SIGNER_F_2" Name="Microsoft Windows Third Party Component CA 2012">
      <CertRoot Type="TBS" Value="CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46" />
      <CertPublisher Value="Microsoft Windows Hardware Compatibility Publisher" />
      <FileAttribRef RuleID="ID_FILEATTRIB_F_1" />
    </Signer>
    <Signer ID="ID_SIGNER_S_3" Name="Microsoft Windows Production PCA 2011">
      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146" />
      <CertPublisher Value="Microsoft Windows" />
    </Signer>
    <Signer ID="ID_SIGNER_S_4" Name="Microsoft Code Signing PCA">
      <CertRoot Type="TBS" Value="27543A3F7612DE2261C7228321722402F63A07DE" />
      <CertPublisher Value="Microsoft Corporation" />
    </Signer>
    <Signer ID="ID_SIGNER_S_5" Name="Microsoft Code Signing PCA 2011">
      <CertRoot Type="TBS" Value="F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E" />
      <CertPublisher Value="Microsoft Corporation" />
    </Signer>
    <Signer ID="ID_SIGNER_S_6" Name="Microsoft Windows Production PCA 2011">
      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146" />
      <CertPublisher Value="Microsoft Windows Publisher" />
    </Signer>
    <Signer ID="ID_SIGNER_S_2" Name="Microsoft Windows Production PCA 2011">
      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146" />
      <CertPublisher Value="Microsoft Windows" />
    </Signer>
    <Signer ID="ID_SIGNER_S_1" Name="Microsoft Code Signing PCA 2010">
      <CertRoot Type="TBS" Value="121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195" />
    </Signer>
  </Signers>
  <!--Driver Signing Scenarios-->
  <SigningScenarios>
    <SigningScenario Value="131" ID="ID_SIGNINGSCENARIO_DRIVERS_1" FriendlyName="Kernel-mode rules">
      <ProductSigners>
        <AllowedSigners>
          <AllowedSigner SignerId="ID_SIGNER_S_1" />
          <AllowedSigner SignerId="ID_SIGNER_S_2" />
          <AllowedSigner SignerId="ID_SIGNER_F_1" />
          <AllowedSigner SignerId="ID_SIGNER_F_2" />
        </AllowedSigners>
      </ProductSigners>
    </SigningScenario>
    <SigningScenario Value="12" ID="ID_SIGNINGSCENARIO_WINDOWS" FriendlyName="User-mode rules">
      <ProductSigners>
        <AllowedSigners>
          <AllowedSigner SignerId="ID_SIGNER_S_3" />
          <AllowedSigner SignerId="ID_SIGNER_S_4" />
          <AllowedSigner SignerId="ID_SIGNER_S_5" />
          <AllowedSigner SignerId="ID_SIGNER_S_6" />
        </AllowedSigners>
      </ProductSigners>
    </SigningScenario>
  </SigningScenarios>
  <UpdatePolicySigners />
  <CiSigners>
    <CiSigner SignerId="ID_SIGNER_S_3" />
    <CiSigner SignerId="ID_SIGNER_S_4" />
    <CiSigner SignerId="ID_SIGNER_S_5" />
    <CiSigner SignerId="ID_SIGNER_S_6" />
  </CiSigners>
  <HvciOptions>0</HvciOptions>
</SiPolicy>

I conducted the following phases to generate this policy:
  1. Generate a default, deny-all policy by calling New-CIPolicy on an empty directory (a sketch of this step appears right after this list). I also increased the size of the Microsoft-Windows-CodeIntegrity/Operational event log to 20 MB to account for the large number of 3076 events I expected while deploying the policy in audit mode. I focused only on drivers for this phase, so I didn't initially include the "Enabled:UMCI" option. My approach moving forward will be to focus on drivers first and then user-mode rules so as to minimize unnecessary cross-pollination between rule sets.
  2. Reboot and start pulling out blocked driver paths from the event log. I wanted to use the WHQLFilePublisher rule for the drivers but apparently, none of them were WHQL signed despite some of them certainly appearing to be WHQL signed. I didn't spend too much time diagnosing this issue since I have never been able to successfully get the WHQLFilePublisher rule to work. Instead, I resorted to the FilePublisher rule.
  3. After I felt confident that I had a good driver whitelist, I placed the policy into enforcement mode and rebooted. What resulted was nonstop boot failures. It turns out that if you're whitelisting individual drivers, critical drivers like ntoskrnl.exe and hal.dll won't show up in the event log in audit mode. So I explicitly added rules for them, and Nano Server still wouldn't boot. What made things worse was that even when I placed the policy back into audit mode, there were no new blocked driver entries, yet the system still refused to boot. I rolled the dice and posited that there might be an issue with certificate chain validation at boot time, so I created a PCACertificate rule for ntoskrnl.exe (the "Microsoft Code Signing PCA 2010" rule). This miraculously did the trick at the expense of creating a more permissive policy. In the end, I ended up with roughly the equivalent of a Publisher ruleset for my drivers, with the exception of my Intel NIC driver.
  4. I explicitly made a FilePublisher rule for my Intel NIC driver, as it was the only 3rd-party, non-OEM driver I had to add when creating my Nano Server image. I don't need to allow any other code signed by Intel, so I explicitly allow only that one driver.
  5. After I got Nano Server to boot, I started working on user-mode rules. This process was relatively straightforward and I used the Publisher rule for user-mode code.
  6. After using Nano Server in audit mode with my new rule set and not seeing any legitimate binaries that would have been blocked, I felt confident in the policy and placed it into enforcement mode. I haven't run into any issues since, and I'm using Nano Server as a Hyper-V server (i.e. with the "Compute" package).
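
To make phase #1 concrete, here is a rough sketch of that deny-all starting point (the empty directory path is a placeholder, and to my knowledge option index 3 corresponds to Enabled:Audit Mode):

New-Item -ItemType Directory -Path C:\EmptyDir -Force | Out-Null
New-CIPolicy -FilePath .\DenyAll.xml -Level FilePublisher -ScanPath C:\EmptyDir -UserPEs
Set-RuleOption -FilePath .\DenyAll.xml -Option 3    # Enabled:Audit Mode
ConvertFrom-CIPolicy -XmlFilePath .\DenyAll.xml -BinaryFilePath SIPolicy.p7b
# Grow the CodeIntegrity operational log to 20 MB so audit events aren't dropped:
wevtutil sl Microsoft-Windows-CodeIntegrity/Operational /ms:20971520
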
I still need to get around to adding my code-signing certificate as an authorized policy signer, sign the policy, and remove "Enabled:Unsigned System Integrity Policy". Overall though, despite the driver issues, I'm fairly content with how well locked down my policy is. It essentially only allows a subset of critical Microsoft code to execute with the exception of the Intel driver which has a very specific file/signature-based rule.

Conclusion

I'm not sure if we'll see improved code integrity or Device Guard support for Nano Server in the future, but something is better than nothing. As it stands, if you are worried about the execution of untrusted PowerShell code, UMCI unfortunately does nothing to protect you on Nano Server. Code integrity still does a great job of blocking untrusted compiled binaries, though - a hallmark of the vast majority of malware campaigns. Nano Server opens up a whole new world of possibilities from both a management and a malware perspective. I'm personally very interested to see how attackers will try to evolve and support their operations in a Nano Server environment. Fortunately, the combination of Windows Defender and code integrity support offers a solid security baseline.