In previous posts about Device Guard, I spent a lot of time talking about initial code integrity (CI) configurations and bypasses. What I haven't covered until now, however, is an extremely important topic: how does one effectively install software and update CI policies accordingly? In this post, I will walk you through how I got Chrome installed on my Surface Book running an enforced Device Guard code integrity policy.
The first questions I posed to myself were:
- Should I place my system into audit mode, install the software, and base an updated policy on CodeIntegrity event log entries?
- Or should I install the software on a separate, non-Device Guard-protected system, analyze the file footprint, develop a policy based on the installed files, deploy, and test?
My preference is option #2, as I would prefer not to place a system back into audit mode if I can avoid it. That said, audit mode would yield the most accurate results, as it would tell you exactly which binaries would have been blocked, which are precisely the binaries you would want to base whitelist rules on. In this case, there's no right or wrong answer. My decision to go with option #2 was to base my rules solely on binaries that execute post-installation, not during installation. My mantra with whitelisting is to be as restrictive as is reasonable.
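For reference, if you do opt for audit mode, the would-have-been-blocked binaries are recorded in the CodeIntegrity operational event log. The following sketch pulls those audit entries; the event ID filter (3076 is the audit-mode "would have been blocked" event, to my recollection) is an assumption you should verify against your own log:

```powershell
# Sketch: enumerate audit-mode CI events on the audited system (run elevated).
# Event ID 3076 is assumed to be the "would have been blocked" audit event.
Get-WinEvent -LogName 'Microsoft-Windows-CodeIntegrity/Operational' |
    Where-Object { $_.Id -eq 3076 } |
    Select-Object -ExpandProperty Message
```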
So how did I go about beginning to enumerate the file footprint of Chrome?
- I opened Chrome, ran it as I usually would, and used PowerShell to enumerate loaded modules.
- I also happened to know that the Google updater runs as a scheduled task so I wanted to obtain the binaries executed via scheduled tasks as well.
I executed the following to get a rough sense of where Chrome files were installed:
(Get-Process -Name *Chrome*).Modules.FileName | Sort-Object -Unique
(Get-ScheduledTask -TaskName *Google*).Actions.Execute | Sort-Object -Unique
To my surprise and satisfaction, Google manages to house nearly all of its binaries in C:\Program Files (x86)\Google. This allows for a great starting point for building Chrome whitelist rules.
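With everything housed under one directory, a simple recursive listing gives a reasonable first approximation of the PE footprint. This is just a sketch to complement the loaded-module enumeration above; it will also catch files that weren't loaded during my test run:

```powershell
# Sketch: list all EXEs and DLLs under the Google program directory.
Get-ChildItem -Path 'C:\Program Files (x86)\Google' -Recurse -Include '*.exe','*.dll' |
    Select-Object -ExpandProperty FullName |
    Sort-Object -Unique
```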
Next, I had to ask myself the following:
- Am I okay with whitelisting anything signed by Google?
- Do I only want to whitelist Chrome, i.e. all Chrome-related EXEs and all of the DLLs they rely upon?
- I will probably want Chrome to be able to update itself without Device Guard getting in the way, right?
While I like the idea of whitelisting just Chrome, there are some potential pitfalls. By whitelisting just Chrome, I would need to be aware of every EXE and DLL that Chrome requires to function. I can certainly do that, but it would be a relatively work-intensive effort. With that list, I would then create whitelist rules using the FilePublisher file rule level. This would be great initially, and it would potentially be the most restrictive strategy while still allowing Chrome to update itself. The issue is this: what happens when Google decides to include one or more additional DLLs in the software installation? Device Guard will block them, and I will be forced to update my policy again. I'm all about applying a paranoid mindset to my policy, but at the end of the day, I need to get work done beyond constantly updating CI policies.
So the whitelist strategy I choose in this instance is to allow code signed by Google and to allow Chrome to update itself. This strategy equates to using the "Publisher" file rule level - "a combination of the PcaCertificate level (typically one certificate below the root) and the common name (CN) of the leaf certificate. This rule level allows organizations to trust a certificate from a major CA (such as Symantec), but only if the leaf certificate is from a specific company (such as Intel, for device drivers)."
I like the "Publisher" file rule level because it offers the most flexibility and longevity for a specific vendor's code-signing certificate. If you look at the certificate chain for chrome.exe, you will see that the issuing PCA (i.e. the issuer above the leaf certificate) is Symantec. Obviously, we wouldn't want to whitelist all code signed by certs issued by Symantec, but I'm okay allowing code signed by Google, who received their certificate from Symantec.
Certificate chain for chrome.exe
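You can inspect that chain yourself from PowerShell rather than clicking through the certificate dialog. A minimal sketch, assuming a default Chrome install path:

```powershell
# Sketch: view chrome.exe's Authenticode leaf subject and its issuing PCA.
$sig = Get-AuthenticodeSignature 'C:\Program Files (x86)\Google\Chrome\Application\chrome.exe'
$sig.SignerCertificate.Subject   # leaf certificate CN - should name Google Inc
$sig.SignerCertificate.Issuer    # issuing PCA - Symantec, in this case
```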
So now I'm ready to create the first draft of my code integrity rules for Chrome.
I always start by creating a FilePublisher rule set for the binaries I want to whitelist because it allows me to associate what binaries are tied to their respective certificates.
$GooglePEs = Get-SystemDriver -ScanPath 'C:\Program Files (x86)\Google' -UserPEs
New-CIPolicy -FilePath Google_FilePub.xml -DriverFiles $GooglePEs -Level FilePublisher -UserPEs
What resulted was the following ruleset. Everything looked fine except for a single Microsoft rule that was generated for d3dcompiler_47.dll. I looked in my master rule policy and found that I already had this rule. My obsessive-compulsive side wanted a pristine ruleset containing only Google rules. This is good practice anyway once you get into the habit of managing large whitelist rulesets: keep separate policy XMLs for each whitelisting scenario you run into, then merge accordingly. After removing the Microsoft binary from the list, what resulted was a much cleaner ruleset (with the Publisher level applied this time) consisting of only two signer rules.
$OnlyGooglePEs = $GooglePEs | ? { -not $_.FriendlyName.EndsWith('d3dcompiler_47.dll') }
New-CIPolicy -FilePath Google_Publisher.xml -DriverFiles $OnlyGooglePEs -Level Publisher -UserPEs
So now, all I should need to do is merge the new rules into my master ruleset, redeploy, reboot, and if all works well, Chrome should install and execute without issue.
$MasterRuleXml = 'FinalPolicy.xml'
$ChromeRules = New-CIPolicyRule -DriverFiles $OnlyGooglePEs -Level Publisher
Merge-CIPolicy -OutputFilePath FinalPolicy_Merged.xml -PolicyPaths $MasterRuleXml -Rules $ChromeRules
ConvertFrom-CIPolicy -XmlFilePath .\FinalPolicy_Merged.xml -BinaryFilePath SIPolicy.p7b

# Finally, on the Device Guard system, replace the existing
# SIPolicy.p7b with the one that was just generated and reboot.
One thing I neglected to account for was the initial Chrome installer binary. I could have incorporated it into this process, but I took a chance that Google had used the same certificates to sign the installer. Luckily, they did, and everything installed and executed perfectly. I consider myself lucky in this case because I selected a software publisher (Google) that employs decent code-signing practices.
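Rather than trusting to luck, you can sanity-check the installer's signer against the binaries you already whitelisted before running it. A rough sketch; the installer filename is hypothetical, and note that a Publisher rule actually matches on the issuing PCA plus the leaf CN, so comparing leaf subjects is only an approximation of what the policy will enforce:

```powershell
# Sketch: compare the installer's leaf signer subject against chrome.exe's.
# '.\ChromeSetup.exe' is a hypothetical path to the downloaded installer.
$installerSig = Get-AuthenticodeSignature '.\ChromeSetup.exe'
$installedSig = Get-AuthenticodeSignature 'C:\Program Files (x86)\Google\Chrome\Application\chrome.exe'
$installerSig.SignerCertificate.Subject -eq $installedSig.SignerCertificate.Subject
```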
Conclusion
In future blog posts, I will document my experiences deploying software that doesn't adhere to proper signing practices or doesn't even sign their code. Hopefully, the Google Chrome case study will, at a minimum, ease you into the process of updating code integrity policies for new software deployments.
The bottom line is that this isn't an easy process. Are there ways in which Microsoft could improve the code integrity policy generation/update/deployment/auditing experience? Absolutely! Even if they did though, the responsibility ultimately lies on you to make informed decisions about what software you trust and how you choose to enforce that trust!