[RFC PATCH v2 1/3] x86/sgx: Add SGX specific LSM hooks

Casey Schaufler casey at schaufler-ca.com
Mon Jul 1 23:11:52 UTC 2019


On 7/1/2019 2:45 PM, Xing, Cedric wrote:
>> From: linux-sgx-owner at vger.kernel.org [mailto:linux-sgx-
>> owner at vger.kernel.org] On Behalf Of Casey Schaufler
>> Sent: Monday, July 01, 2019 12:54 PM
>>> If you understand the purpose,
>> The purpose is to support the SGX hardware, is it not?
>> If you don't have SGX hardware (e.g. MIPS, ARM, s390) you don't need
>> this code.
> No, it is NOT to support SGX

Then what *is* it for?

>  - i.e. SGX doesn't require this piece of code to work.
>
> And as Dr. Greg pointed out, it can be used for other TEEs than SGX.

That sure makes it sound like it's for SGX to me.

>  It doesn't contain SGX h/w specifics.

I never said it did. But no one ever suggested doing anything
here before SGX, and your subject line:

	"x86/sgx: Add SGX specific LSM hooks"

says it does.

> It is compiled out because there's no module calling it on other architectures today. But it doesn't conflict with any h/w and may be useful (for other TEEs) on other architectures in future.
>
>>> then why are you objecting to the lsm_ prefix, as they are APIs to be
>>> used by all LSM modules?
>>
>> We name interfaces based on what they provide, not who consumes them.
>> As your code provides enclave services, that is how they should be named.
>>
>>>  Or what should be the prefix in your mind?
>> I'm pretty sure that I've consistently suggested "enclave".
>>
>>> Or what kind of APIs do you think can qualify for the lsm_ prefix?
>> Code that implements the LSM infrastructure. There is one LSM blob
>> allocation interface, lsm_inode_alloc(), that is used in early set-up
>> that is exported. As I've mentioned more than once, enclave/page
>> management is not an LSM infrastructure function, it's a memory
>> management function.
> It doesn't manage anything.

Sorry, "memory management" as in all that stuff around pages and
TLBs and who gets what pages, as opposed to keeping track of anything
on its own.

> The reason it appears in the infrastructure is because the decision of inserting an EMA depends on the decisions from *all* active LSMs.

You have not been listening. Most LSMs use VFS. We haven't rolled VFS
functions into the LSM infrastructure.

> That is NOT new either, as you can see it in security_file_permission() and security_vm_enough_memory_mm(), both do something after all LSM modules make their decisions.

Did you look to see what it is they're doing? If you had,
you would see that it is nothing like what you're proposing.


> Would you please explain why you don't see those as problems but calling EMA functions in security_enclave_load() is a problem?

The enclave code should be calling security_enclave_load(),
not the other way around. That assumes you're using the naming
convention correctly.

security_vm_enough_memory_mm() was discussed at length and there
wasn't a clean way to get the logic right without putting the code
here. security_file_permission() has the fsnotify_perm() call for
similar reasons. Neither is anything like what you're suggesting.


>>> And I'd like to clarify that it doesn't break anything, but is just a
>> bit different, in that security_enclave_load() and security_file_free()
>> call into those APIs.
>>
>> There should be nothing in security_enclave_load() except calls to the
>> enclave_load() hooks (e.g. selinux_enclave_load()). There should be
>> nothing in security_file_free() except file blob management calls to the
>> file_free() hooks (e.g. apparmor_file_free()).
> As above, there are examples in security/security.c where the hook does more than just calling registered hooks from LSMs.

And as I've said, that doesn't matter. You're still going about
using the LSM infrastructure backwards.

>>> But there's a need for them because otherwise code/data would have to
>>> be duplicated among LSMs
>> There's plenty of code duplication among the LSMs, because a lot of what
>> they do is the same thing. Someday there may be an effort to address
>> some of that, but I don't think it's on anybody's radar.
>> As for data duplication, there's a reason we use lots of pointers.
> As stated above, security_enclave_load() needs to do something extra after all LSMs make their decisions. How can pointers help here?

I can explain it, but you clearly have no interest in doing
anything to make your code fit into the system. I have a lot
of other things to be doing.

>>> and the logic would be harder to comprehend.
>> Keeping the layering clean is critical to comprehension.
>> There's a lot of VFS code that could have been implemented within the
>> LSM infrastructure, but I don't think that anyone would argue that it
>> should have been.
>>
>>> So that's a trade-off.
>> I remain completely unconvinced that your proposal represents a good way
>> to implement your scheme.
>>
>>> Then what's the practical drawback of doing that?
>> Name space pollution.
> Alright, I can fix the names.

Good!


>> Layering violation.
> Not sure what you are referring to.

The only place where the blob freed by security_file_free()
may be allocated is security_file_alloc(). The security modules
are welcome to do anything they like in addition, provided
they clean up after themselves in their file_free() hooks.

If SELinux wants to support controls on enclave information,
and that requires additional data, SELinux should include
space in its file blob for that information, or a pointer to
the place where the enclave code is maintaining it.

That's the way audit works.

> If you are referring to buffers allocated in one layer and freed elsewhere, you have got the code wrong. Buffers allocated in security_enclave_load() are freed in security_file_free().

It's up to the security module's file_free() to clean up anything that
wasn't allocated in security_file_alloc(). Interested security modules
should call enclave_load(), and put the information into their portion
of the security blob. The module specific code can call enclave_file_free(),
or whatever interface you want to provide, to clean up. That might take
place in file_free(), but it also might be elsewhere.
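To be concrete, here is a rough userspace model of that ownership
pattern. All of the names (ema_state, sel_file_blob, sel_enclave_load,
sel_file_free) are invented for illustration; they are not from the
patch or from SELinux:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical per-module EMA bookkeeping. */
struct ema_state {
	int page_count;
};

/* The module's slice of the file security blob.  The infrastructure
 * allocates the blob, sized by what each module declared it needs;
 * the module owns whatever it stores in its slice. */
struct sel_file_blob {
	struct ema_state *ema;
};

/* The module's enclave_load() hook: obtain EMA state and stash the
 * pointer in the module's own portion of the blob. */
static int sel_enclave_load(struct sel_file_blob *blob, int page_count)
{
	blob->ema = malloc(sizeof(*blob->ema));
	if (!blob->ema)
		return -1;
	blob->ema->page_count = page_count;
	return 0;
}

/* The module's file_free() hook: clean up what the module allocated.
 * The LSM infrastructure never sees or manages this memory. */
static void sel_file_free(struct sel_file_blob *blob)
{
	free(blob->ema);
	blob->ema = NULL;
}
```

With that split, security_file_free() itself does nothing but run the
registered file_free() hooks and free the blob the infrastructure
allocated; EMA never appears in it.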


> Whatever else is allocated in LSMs is not seen or taken care of by the infrastructure. The purpose of allocating EMAs in enclave_load() is to minimize overhead for non-enclave files, otherwise it could be done in file_alloc() to be more "paired" with file_free(). But I don't see that as necessary.

Try looking at maintaining what you've put into the LSM code as
a separate entity. It makes it simpler. Really.

>> Architecture specific implementation detail in a general infrastructure.
> Stated earlier, it doesn't contain any h/w specifics but just a TEE abstraction.

Then put it in the TEE system.

> It could be left on all the time or controlled by a different config macro.

True in any case.

> It is contingent on CONFIG_INTEL_SGX just for convenience, as SGX is the first (and so far only) TEE that needs attention from LSM, but there could be more in the future.

All the more reason to keep it separate. These things never get simpler
when they get more generalized.

>>> If no, why would we want to pay for the cost for not doing that?
>> Modularity and maintainability come directly to mind.
> Putting it elsewhere will incur more maintenance cost.

I don't believe that for a second. 40 years of C programming
have taught me that trying to do multiple things in one place
is always a bad idea.


>>>> ... and the notion that you allocate data for one blob that gets
>>>> freed relative to another breaks the data management model.
>>> What do you mean here?
>> You're freeing the EMA data from security_file_free().
>> If selinux wants to free EMA data it has allocated in
>> selinux_enclave_load() in selinux_file_free() that's fine, but the LSM
>> infrastructure has no need to know about it.
>> EMA needs to manage its own data, just like VFS does.
>> The LSM infrastructure provides blob management so that the security
>> modules can extend data if they want to.
> You've got the code wrong. selinux_enclave_load() doesn't allocate any memory.  selinux_file_mprotect() may, due to EMA split. But that's transparent to all LSMs.

... and the LSM infrastructure doesn't care and
must not be made to care. It's all up to SELinux.

> The LSM infrastructure doesn't know anything about what LSM modules do, nor does it manage any buffers allocated by any LSM modules.

Right, which is why putting your lsm_ema_blob there is wrong, and why
forcing it into the file blob is wrong.

> EMA is currently managing its own data. What's needed is the trigger - to let EMA know when to update its states. The trigger could be placed in LSM infrastructure or inside individual LSMs.

Yes. The latter.

> The reason to put it in the infrastructure, is that it depends on the decision of *all* LSMs whether to insert a new EMA.

That's basic stacking behavior. "Bail on fail", which says that once
denial is detected, you're done.
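In code, "bail on fail" is just this. A self-contained model of the
hook walk, with made-up module hooks; the real dispatcher in
security/security.c goes through call_int_hook(), but the shape is
the same:

```c
#include <errno.h>

/* Made-up module hooks for illustration: each returns 0 to allow or
 * a -errno to deny, the same convention real LSM hooks use. */
typedef int (*enclave_load_hook)(unsigned long prot);

static int module_a_enclave_load(unsigned long prot)
{
	return 0;				/* module A always allows */
}

static int module_b_enclave_load(unsigned long prot)
{
	return (prot & 0x4) ? -EACCES : 0;	/* module B denies EXEC */
}

static enclave_load_hook hooks[] = {
	module_a_enclave_load,
	module_b_enclave_load,
};

/* The dispatcher: nothing but the hook walk.  The first denial wins,
 * and no EMA (or any other) bookkeeping lives here. */
static int security_enclave_load_model(unsigned long prot)
{
	unsigned int i;
	int rc;

	for (i = 0; i < sizeof(hooks) / sizeof(hooks[0]); i++) {
		rc = hooks[i](prot);
		if (rc)			/* bail on fail */
			return rc;
	}
	return 0;
}
```

Once the walk returns, the caller (the enclave code) acts on the
verdict; the dispatcher itself has no reason to touch EMA state.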

> That's similar to vm_enough_memory() where the final __vm_enough_memory() call is made by the infrastructure but not individual LSMs.

Do you really understand the painful reasons that case is required?
And if so, why aren't you taking steps to avoid them?


>>> EMA blobs are allocated/freed *not* relative to any other blobs.
>> In the code you proposed they are freed in security_file_free().
>> That is for file blob management.
> Yes. EMA contributes to the file blob. But it only frees memory allocated by the infrastructure itself, not anything from any LSM modules.

That's not the way it's supposed to be done. The module tells
the infrastructure what it needs, which may include space for
EMA data. The module asks EMA for the data it needs and stuffs
it somewhere, and the file blob is a fine choice. The module
cleans up in file_free, or at any time before that. If no module
uses EMA, nothing goes in the blob. If two modules use EMA each
is responsible for the data it uses, which may be the same or
may be different.

I've looked at your code. Making it work the way it should would
not be difficult and would likely simplify a bunch of it. 




