[GIT PULL] Kernel lockdown for secure boot

Andy Lutomirski luto at kernel.org
Thu Apr 5 18:47:52 UTC 2018


On Wed, Apr 4, 2018 at 11:42 AM, Peter Jones <pjones at redhat.com> wrote:
> On Tue, Apr 03, 2018 at 02:51:23PM -0700, Andy Lutomirski wrote:
>> On Tue, Apr 3, 2018 at 12:29 PM, Matthew Garrett <mjg59 at google.com> wrote:
>> Can someone please explain why the UEFI crowd cares so much about "as
>> a bootloader"?  Once I'm able to install an OS (Linux kernel +
>> bootloader, Windows embedded doodad, OpenBSD, whatever) on your
>> machine, I can use your peripherals, read your data, write your data,
>> see your keystrokes, use your network connection, re-flash your BIOS
>> (at least as well as any OS can), run VMs, and generally own your
>> system.  Somehow you all seem fine with all of this, except that the
>> fact that I can chainload something else gives UEFI people the
>> willies.
>>
>> Can someone explain why?
>
> There's no inherent difference, in terms of the trust chain, between
> compromising it to use the machine as a toaster or to run a botnet - the
> trust chain is compromised either way.  But you're much more likely to
> notice if your desktop starts producing bread products than if it hides
> some malware and keeps on booting, and the second one is much more
> attractive to attackers anyway.
>
> The reason we talk about it as a bootloader is because of the model
> employed by malware.  I'm sure you know that one kind of malware that
> exists in the wild, a so-called "boot kit", operates by modifying a
> kernel during load (or on disk before loading) so that it has some
> malicious payload, like exfiltrating user data or allowing a way to
> install software that the kernel hides or *whatever*, and incorporating
> some way to achieve relative persistence on the system - for example
> hiding the real boot settings and loading a kernel with a different
> initramfs than normal, one that loads an exploit before continuing
> with a normal-looking boot.

This is a fair point, but I wonder how much it matters in practice.
If I'm writing a bootkit, I can think of at least four ways to do it.

1. The easy way.  Write a malicious bootloader that modifies the
kernel image to insert malicious code.  Stock secure boot makes this
awkward because you need a signed bootloader.  It's worth noting that
a non-locked-down signed Linux kernel is actually a rather awkward way
to do this because it will add several seconds to the boot and may
show a splash screen unless you're rather careful.

2. The CPL3 way.  Write a malicious initramfs that inserts the
malicious code into PID 1 instead.  This might be easier to get
working across a variety of Linux kernels, but it's more awkward to
hide well from userspace (see the sketch after this list).
Conventional secure boot (with the stock MS keys) doesn't help at all.

3. The nasty way.  Find a known exploitable kernel or bootloader, and
use it to do your evil deeds.  This is very, very hard to protect
against with normal secure boot.

4. The VM-kit way.  Use a signed, locked down, perfectly secure kernel
and run your pwned system as a VM guest.  Secure boot doesn't help one
whit.
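
As an aside on variant 2's "more awkward to hide well from userspace":
once the malicious code lives at CPL3, every userspace introspection
interface becomes a tripwire.  A toy sketch in Python of the kind of
check any process can run against PID 1 (the expected init path is an
assumption and distro-specific; real integrity checkers and real
rootkits are both far more elaborate):

  #!/usr/bin/env python3
  # Toy check: ask /proc what PID 1 actually is and compare it against
  # what the distro is supposed to have installed.  EXPECTED_INIT is
  # an assumption and varies by distro.
  import os

  EXPECTED_INIT = "/usr/lib/systemd/systemd"   # assumption

  def inspect_pid1():
      # /proc/1/cmdline is world-readable; resolving /proc/1/exe needs
      # privilege (root or CAP_SYS_PTRACE over init).
      with open("/proc/1/cmdline", "rb") as f:
          argv0 = f.read().split(b"\0")[0].decode()
      try:
          exe = os.readlink("/proc/1/exe")
      except PermissionError:
          exe = None
      print("PID 1 argv[0]:", argv0)
      print("PID 1 exe:    ", exe or "(need root to resolve)")
      if exe is not None and not exe.startswith(EXPECTED_INIT):
          print("PID 1 is not the expected init binary")

  if __name__ == "__main__":
      inspect_pid1()

A real rootkit would of course try to lie to exactly this kind of
check, but that's the point of the distinction: at CPL3 it has to
fight userspace's own interfaces rather than sit underneath them.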

*All* of these variants are defeated by a real, working verified boot
approach that chains all the way down to the running system image, and
*that* solution doesn't need CPL0 and CPL3 to be separated.
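
For concreteness, the "chains all the way down to the running system
image" part is roughly the dm-verity model: hash the read-only image
in fixed-size blocks, bind the block hashes to one root hash, and
anchor that root hash in something that is already verified (e.g. the
signed kernel command line).  A toy sketch of just the hashing
arithmetic - the real thing is a Merkle tree verified block by block,
in the kernel, at read time - assuming a hypothetical verify_image.py
invoked with an image path and a trusted root hash:

  #!/usr/bin/env python3
  # Toy sketch of pinning a whole system image to one trusted hash, in
  # the spirit of dm-verity.  Real dm-verity builds a Merkle tree and
  # verifies blocks lazily at read time, in the kernel; this only
  # shows how a single trusted hash can cover the entire image.
  import hashlib
  import sys

  BLOCK_SIZE = 4096

  def root_hash(image_path):
      fold = hashlib.sha256()
      with open(image_path, "rb") as img:
          while True:
              block = img.read(BLOCK_SIZE)
              if not block:
                  break
              # Pad a short final block so the layout is deterministic.
              block = block.ljust(BLOCK_SIZE, b"\0")
              fold.update(hashlib.sha256(block).digest())
      return fold.hexdigest()

  if __name__ == "__main__":
      # usage: verify_image.py <image> <trusted-root-hash>
      computed = root_hash(sys.argv[1])
      trusted = sys.argv[2].lower()
      print("computed:", computed)
      print("OK" if computed == trusted else "MISMATCH: image modified")

The design point is just that the one trusted hash is anchored in
something already covered by the signature chain, so the whole running
image, not merely the bootloader and kernel, is pinned.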

So I find myself wondering whether the bootkit argument is actually
very compelling.


