[PATCH v6 0/5] Allow guest access to EFI confidential computing secret area

Dov Murik dovmurik at linux.ibm.com
Mon Jan 10 11:14:16 UTC 2022



On 07/01/2022 21:16, Peter Gonda wrote:
> On Fri, Jan 7, 2022 at 4:59 AM Borislav Petkov <bp at suse.de> wrote:
>>
>> On Wed, Jan 05, 2022 at 08:07:04PM +0000, Dr. David Alan Gilbert wrote:
>>> I thought I saw something in their patch series where they also had a
>>> secret that got passed down from EFI?
>>
>> Probably. I've seen so many TDX patchsets that I'm completely
>> confused about what is what.
>>
>>> As I remember, they had it with an ioctl and something; but it felt
>>> to me that it would be great if it were shared.
>>
>> I guess we could try to share
>>
>> https://lore.kernel.org/r/20211210154332.11526-28-brijesh.singh@amd.com
>>
>> for SNP and TDX.
>>
>>> I'd love to hear from those other cloud vendors; I've not been able to
>>> find any detail on how their SEV(-ES) systems actually work.
>>
>> Same here.
>>
>>> However, this aims to be just a comms mechanism to pass that secret;
>>> so it's pretty low down in the stack and is there for them to use -
>>> hopefully it's general enough.
>>
>> Exactly!
>>
>>> (An interesting question is what exactly gets passed in this key and
>>> what it means).
>>>
>>> All the contentious stuff I've seen seems to be further up the stack - like
>>> who does the attestation and where they get the secrets and how they
>>> know what a valid measurement looks like.
>>
>> It would be much much better if all the parties involved would sit
>> down and decide on a common scheme so that the implementation can be
>> shared, but getting everybody to agree is likely hard...
> 
> I saw a request for other cloud provider input here. A little
> background: for our SEV VMs in GCE we rely on our vTPM for
> attestation. We do this because of SEV's security properties, which,
> quoting from AMD, are meant to protect guests from a benign but
> vulnerable hypervisor; a benign/compliant hypervisor's vTPM wouldn't
> lie to the guest. So we added a few bits in the PCRs to allow users
> to see their SEV status in vTPM quotes.

Thanks, Peter, for explaining the GCE approach.  If I understand
correctly, if the hypervisor is malicious, it could lie to the vTPM and
set those "few bits" so that it looks like the VM is running with SEV
enabled (whereas in fact it is running without memory encryption).  But
that's outside the scope of that threat model, right?


> 
> It would be very interesting to offer an attestation solution that
> doesn't rely on our virtual TPM. But after reading through this cover
> letter and the linked OVMF patches I am confused about the high-level
> flow you are working towards. Are you loading in some OVMF using
> LAUNCH_UPDATE_DATA, getting the measurement with LAUNCH_MEASURE, then
> sending that to the customer, who can then craft a "secret" (maybe,
> say, an SSH key) for injection with LAUNCH_SECRET? That sounds good,
> but there are a lot of details left unattested there: how do you know
> you will boot from the image loaded with the PSP into a known state?
> Do you have some documentation I could read through to try and
> understand a little more? Apologies if I missed it.
> 

We rely on the OvmfPkg/AmdSev build of OVMF, which is a stripped-down
firmware that can boot in only one of two ways:

a. direct boot to kernel/initrd/cmdline whose hashes are included in
   the LAUNCH_MEASURE data (see [1], [2], [3], and the hash sketch
   right after this list);
b. boot to the embedded grub (in OVMF), which decrypts an encrypted
   boot drive using a secret from LAUNCH_SECRET (see [4] and [5]).
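
To illustrate what goes into those hashes for method (a), here is a
minimal Guest-Owner-side sketch.  It assumes OpenSSL and SHA-256 (the
digest used by the QEMU hashes-table patches); the file names are
placeholders, and the trailing-NUL note for the cmdline should be
double-checked against the QEMU patches:

/*
 * Sketch: compute the SHA-256 hashes of kernel/initrd/cmdline that the
 * Guest Owner expects to find covered by LAUNCH_MEASURE in method (a).
 * File names are placeholders; build with: cc sev-hashes.c -lcrypto
 */
#include <openssl/sha.h>
#include <stdio.h>
#include <stdlib.h>

static void sha256_file(const char *path)
{
        SHA256_CTX ctx;
        unsigned char digest[SHA256_DIGEST_LENGTH];
        unsigned char buf[4096];
        size_t n;
        FILE *f = fopen(path, "rb");

        if (!f) {
                perror(path);
                exit(1);
        }
        SHA256_Init(&ctx);
        while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
                SHA256_Update(&ctx, buf, n);
        fclose(f);
        SHA256_Final(digest, &ctx);

        printf("%s: ", path);
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
                printf("%02x", digest[i]);
        printf("\n");
}

int main(void)
{
        sha256_file("vmlinuz");      /* the -kernel image */
        sha256_file("initrd.img");   /* the -initrd image */
        /* the -append cmdline is hashed too; note that the QEMU
         * patches include the trailing NUL byte in the hashed blob */
        sha256_file("cmdline.txt");
        return 0;
}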

For the current series (the efi_secret kernel module), method (a) is
the relevant one.  The Cloud Provider boots a guest VM with this OVMF
image and -kernel/-initrd/-append.  The content of OVMF and the hashes
of kernel/initrd/cmdline are measured in LAUNCH_MEASURE and sent to the
Guest Owner (= Customer).  If the measurement matches the value the
Guest Owner computed independently, the Guest Owner performs a
LAUNCH_SECRET to inject the secret into a designated SEV launch secrets
page.
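
For completeness, verifying the measurement on the Guest Owner side
boils down to recomputing the HMAC that the SEV API spec defines for
LAUNCH_MEASURE and comparing it with what the provider sent.  A rough
sketch, assuming OpenSSL; the field sizes follow my reading of the SEV
API spec, and all inputs are obtained out of band:

/*
 * Sketch: recompute the LAUNCH_MEASURE value per the SEV API spec:
 *   HMAC-SHA256(0x04 || API_MAJOR || API_MINOR || BUILD ||
 *               GCTX.POLICY || GCTX.LD || MNONCE, key = TIK)
 * GCTX.LD is the launch digest the Guest Owner computes independently
 * from the expected OVMF/kernel/initrd/cmdline contents.
 */
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <string.h>

int measurement_ok(const unsigned char tik[16],
                   unsigned char api_major, unsigned char api_minor,
                   unsigned char build,
                   const unsigned char policy[4], /* little-endian u32 */
                   const unsigned char ld[32],    /* expected GCTX.LD */
                   const unsigned char mnonce[16],
                   const unsigned char measurement[32])
{
        unsigned char msg[1 + 1 + 1 + 1 + 4 + 32 + 16];
        unsigned char mac[32];
        unsigned int mac_len = sizeof(mac);
        unsigned char *p = msg;

        *p++ = 0x04;            /* LAUNCH_MEASURE context byte */
        *p++ = api_major;
        *p++ = api_minor;
        *p++ = build;
        memcpy(p, policy, 4);  p += 4;
        memcpy(p, ld, 32);     p += 32;
        memcpy(p, mnonce, 16); p += 16;

        HMAC(EVP_sha256(), tik, 16, msg, sizeof(msg), mac, &mac_len);
        return memcmp(mac, measurement, sizeof(mac)) == 0;
}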

Then the guest starts running; when the kernel starts, the EFI driver
saves the address of this page (patch 1) and later exposes the secrets
in this page as files in securityfs (patch 3).  The secrets can be used
by the guest workload in whatever way it needs (examples discussed
above: an SSH private key; an API key used to obtain further secrets; a
decryption key for some encrypted files; ...).
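
From the guest workload's point of view, consuming a secret is then
just reading a file.  A minimal sketch, assuming securityfs is mounted
at /sys/kernel/security and assuming the secrets/coco/<GUID> layout
from patch 3; the GUID below is a made-up example:

/*
 * Sketch: read one injected secret from securityfs.  The mount point
 * and the GUID file name are placeholders; the actual directory
 * layout is defined by the efi_secret module (patch 3).
 */
#include <stdio.h>

int main(void)
{
        const char *path =
            "/sys/kernel/security/secrets/coco/"
            "736869e5-84f0-4973-92ec-06879ce3da0b"; /* example GUID */
        char buf[4096];
        size_t n;
        FILE *f = fopen(path, "rb");

        if (!f) {
                perror(path);
                return 1;
        }
        n = fread(buf, 1, sizeof(buf), f);
        fclose(f);

        /* Use the secret, e.g. as an SSH key or an API token. */
        fwrite(buf, 1, n, stdout);
        return 0;
}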

Any attempt by a malicious cloud provider to modify OVMF, the kernel,
the initrd, or the cmdline will cause the SEV measurement to be wrong.

Note again that the proposed kernel patch series has nothing to do with
the measurement sequence -- it assumes the secrets page is already
populated, and simply exposes its contents to userspace as files.
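
For reference, the area the module walks is a simple GUIDed table.
Roughly, in C (field names here are mine; the authoritative
definitions are in the series itself):

/*
 * Sketch of the secret area layout the module parses: a header
 * followed by a sequence of GUIDed entries.
 */
#include <stdint.h>

typedef struct {
        uint8_t b[16];
} efi_guid_t;

struct secret_header {
        efi_guid_t guid;   /* identifies the table format */
        uint32_t len;      /* total length of the area, incl. header */
};

struct secret_entry {
        efi_guid_t guid;   /* becomes the file name in securityfs */
        uint32_t len;      /* length of this entry, incl. header */
        uint8_t data[];    /* the secret bytes exposed to userspace */
};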

Hope this explains our approach.


[1] https://www.youtube.com/watch?v=jzP8RlTRErk
[2] https://lore.kernel.org/qemu-devel/20210930054915.13252-1-dovmurik@linux.ibm.com/
[3] https://lore.kernel.org/qemu-devel/20211111100048.3299424-1-dovmurik@linux.ibm.com/

[4] https://www.youtube.com/watch?v=rCsIxzM6C_I
[5] https://blog.hansenpartnership.com/deploying-encrypted-images-for-confidential-computing/



-Dov


