[PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
KP Singh
kpsingh at kernel.org
Mon Feb 6 13:04:16 UTC 2023
On Fri, Jan 20, 2023 at 5:36 AM Kees Cook <keescook at chromium.org> wrote:
>
> On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
> > The indirect calls are not really needed as one knows the addresses of
[...]
> > A static key guards whether an LSM static call is enabled or not;
> > without this static key, for LSM hooks that return an int, the presence
> > of the hook that returns a default value can create side-effects which
> > have resulted in bugs [1].
>
> I think this patch has too many logic changes in it. There are basically
> two things going on here:
>
> - replace list with unrolled calls
> - replace calls with static calls
>
> I see why it was merged, since some of the logic that would be added for
> step 1 would be immediately replaced, but I think it might make things a
> bit more clear.
>
> There is likely a few intermediate steps here too, to rename things,
> etc.
I can try to split this patch, but I have not been able to find a decent
slice without duplicating a lot of work that would just get squashed in
the end anyway.
>
> > There are some hooks that don't use the call_int_hook and
> > call_void_hook. These hooks are updated to use a new macro called
> > security_for_each_hook where the lsm_callback is directly invoked as an
[...]
> > + * Call the macro M for each LSM hook MAX_LSM_COUNT times.
> > + */
> > +#define LSM_UNROLL(M, ...) UNROLL(MAX_LSM_COUNT, M, __VA_ARGS__)
>
> I think this should be:
>
> #define LSM_UNROLL(M, ...) do { \
> UNROLL(MAX_LSM_COUNT, M, __VA_ARGS__); \
> } while (0)
>
> or maybe UNROLL needs the do/while.
UNROLL is used both for declaring struct members and for loop unrolling,
so I have split the LSM macro into LSM_LOOP_UNROLL (which is wrapped in
a do/while) and LSM_DEFINE_UNROLL (which is not).
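Roughly, the split looks like this (just a sketch of the two variants,
using the UNROLL and MAX_LSM_COUNT helpers from this series):

#define LSM_LOOP_UNROLL(M, ...)				\
	do {						\
		UNROLL(MAX_LSM_COUNT, M, __VA_ARGS__)	\
	} while (0)

#define LSM_DEFINE_UNROLL(M, ...) UNROLL(MAX_LSM_COUNT, M, __VA_ARGS__)

The first one is usable as a single statement (the macro M is expected
to expand to a statement on its own); the second adds no wrapping, so it
can also expand struct members and initializers.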
>
> >
> > /**
> > * union security_list_options - Linux Security Module hook function list
> > @@ -1657,21 +1677,48 @@ union security_list_options {
> > #define LSM_HOOK(RET, DEFAULT, NAME, ...) RET (*NAME)(__VA_ARGS__);
> > #include "lsm_hook_defs.h"
> > #undef LSM_HOOK
> > + void *lsm_callback;
> > };
> >
> > -struct security_hook_heads {
> > - #define LSM_HOOK(RET, DEFAULT, NAME, ...) struct hlist_head NAME;
> > - #include "lsm_hook_defs.h"
> > +/*
> > + * @key: static call key as defined by STATIC_CALL_KEY
> > + * @trampoline: static call trampoline as defined by STATIC_CALL_TRAMP
> > + * @hl: The security_hook_list as initialized by the owning LSM.
[...]
> > + struct lsm_static_call NAME[MAX_LSM_COUNT];
> > + #include <linux/lsm_hook_defs.h>
> > #undef LSM_HOOK
> > } __randomize_layout;
> >
> > /*
> > * Security module hook list structure.
> > * For use with generic list macros for common operations.
> > + *
> > + * struct security_hook_list - Contents of a cacheable, mappable object.
[...]
> > -#define LSM_HOOK_INIT(HEAD, HOOK) \
> > - { .head = &security_hook_heads.HEAD, .hook = { .HEAD = HOOK } }
> > +#define LSM_HOOK_INIT(NAME, CALLBACK) \
> > + { \
> > + .scalls = static_calls_table.NAME, \
> > + .hook = { .NAME = CALLBACK } \
> > + }
> >
> > -extern struct security_hook_heads security_hook_heads;
> > extern char *lsm_names;
> >
> > extern void security_add_hooks(struct security_hook_list *hooks, int count,
> > @@ -1756,10 +1805,21 @@ extern struct lsm_info __start_early_lsm_info[], __end_early_lsm_info[];
> > static inline void security_delete_hooks(struct security_hook_list *hooks,
> > int count)
>
> Hey Paul, can we get rid of CONFIG_SECURITY_SELINUX_DISABLE yet? It's
> been deprecated for years....
>
> > {
> > - int i;
> > + struct lsm_static_call *scalls;
> > + int i, j;
> > +
> > + for (i = 0; i < count; i++) {
> > + scalls = hooks[i].scalls;
> > + for (j = 0; j < MAX_LSM_COUNT; j++) {
> > + if (scalls[j].hl != &hooks[i])
> > + continue;
> >
> > - for (i = 0; i < count; i++)
> > - hlist_del_rcu(&hooks[i].list);
> > + static_key_disable(scalls[j].enabled_key);
> > + __static_call_update(scalls[j].key,
> > + scalls[j].trampoline, NULL);
> > + scalls[j].hl = NULL;
> > + }
> > + }
> > }
> > #endif /* CONFIG_SECURITY_SELINUX_DISABLE */
> >
> > @@ -1771,5 +1831,6 @@ static inline void security_delete_hooks(struct security_hook_list *hooks,
> > #endif /* CONFIG_SECURITY_WRITABLE_HOOKS */
> >
> > extern int lsm_inode_alloc(struct inode *inode);
> > +extern struct lsm_static_calls_table static_calls_table __ro_after_init;
[...]
> >
> > +/*
> > + * Define static calls and static keys for each LSM hook.
> > + */
> > +
> > +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...) \
> > + DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM), \
> > + *((RET(*)(__VA_ARGS__))NULL)); \
> > + DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));
>
> Hm, another place where we would benefit from having separated logic for
> "is it built?" and "is it enabled by default?" and we could use
> DEFINE_STATIC_KEY_MAYBE(). But, since we don't, I think we need to use
> DEFINE_STATIC_KEY_TRUE() here or else won't all the calls be
> out-of-line? (i.e. the default compiled state will be NOPs?) If we're
> trying to optimize for having LSMs, I think we should default to inline
> calls. (The machine code in the commit log seems to indicate that they
> are out of line -- it uses jumps.)
>
I should have explained this in the commit description: we are actually
optimizing for "hot paths are less likely to have LSM hooks enabled"
(e.g. socket_sendmsg). But I do see that there are LSMs that have these
hooks enabled. Maybe we can put this behind a config option, possibly
depending on CONFIG_EXPERT?
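For example, building on the DEFINE_STATIC_KEY_MAYBE() idea (a sketch;
CONFIG_SECURITY_HOOK_LIKELY is a made-up option name here):

#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)			\
	DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),		\
				*((RET(*)(__VA_ARGS__))NULL));		\
	DEFINE_STATIC_KEY_MAYBE(CONFIG_SECURITY_HOOK_LIKELY,		\
				SECURITY_HOOK_ENABLED_KEY(NAME, NUM));

#define __CALL_STATIC_VOID(NUM, HOOK, ...)				\
	if (static_branch_maybe(CONFIG_SECURITY_HOOK_LIKELY,		\
				&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \
		static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__);	\
	}

The config option would only pick the compiled-in default (i.e. whether
the un-toggled path is inline or out of line); the keys would still be
enabled/disabled per slot when LSMs register, as in this patch.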
> > [...]
> > +#define __CALL_STATIC_VOID(NUM, HOOK, ...) \
> > + if (static_branch_unlikely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \
> > + static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__); \
> > + }
>
> Same here -- I would expect this to be static_branch_likely() or we'll
> get out-of-line branches. Also, IMO, this should be:
>
> do {
> if (...)
> static_call(...);
> } while (0)
>
Note that we cannot really omit the semicolon here: with a do/while,
each unrolled expansion would have to be terminated by a semicolon, and
since we also use the UNROLL macro for declaring struct members, UNROLL
itself cannot assume that the macro it unrolls wants one.
>
> > +#define call_void_hook(FUNC, ...) \
> > + do { \
> > + LSM_UNROLL(__CALL_STATIC_VOID, FUNC, __VA_ARGS__) \
> > } while (0)
>
> With the do/while in LSM_UNROLL, this is more readable.
Agreed, done.
>
> >
> > -#define call_int_hook(FUNC, IRC, ...) ({ \
> > - int RC = IRC; \
> > - do { \
> > - struct security_hook_list *P; \
> > - \
> > - hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
> > - RC = P->hook.FUNC(__VA_ARGS__); \
> > - if (RC != 0) \
> > - break; \
> > - } \
> > - } while (0); \
> > - RC; \
> > -})
> > +#define __CALL_STATIC_INT(NUM, R, HOOK, ...) \
> > + if (static_branch_unlikely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \
> > + R = static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__); \
> > + if (R != 0) \
> > + goto out; \
> > + }
>
> I would expect the label to be a passed argument, but maybe since it
> never changes, it's fine as-is?
I think passing the label as an argument is cleaner, so I went ahead
and did that.
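i.e. roughly (a sketch, reusing the LSM_LOOP_UNROLL spelling from
above):

#define __CALL_STATIC_INT(NUM, R, HOOK, LABEL, ...)			\
	if (static_branch_unlikely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \
		R = static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__); \
		if (R != 0)						\
			goto LABEL;					\
	}

#define call_int_hook(FUNC, IRC, ...)					\
	({								\
		__label__ out;						\
		int RC = IRC;						\
		LSM_LOOP_UNROLL(__CALL_STATIC_INT, RC, FUNC, out, __VA_ARGS__); \
	out:								\
		RC;							\
	})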
>
> And again, I'd expect a do/while wrapper, and for it to be s_b_likely.
The problem with the do/while wrapper here, again, is that the unrolled
macros end up terminated by semicolons, which does not work for
unrolling struct member initialization, in particular for
static_calls_table where UNROLL is used to initialize the array.
Now we could do what I suggested with LSM_LOOP_UNROLL and
LSM_DEFINE_UNROLL and push the logic up into UNROLL itself:
* UNROLL_LOOP: expects a macro that will be surrounded by a do/while
  and terminated by a semicolon in the unroll body
* UNROLL_DEFINE (or UNROLL_RAW): where you can pass anything.
What do you think?
>
> > +
> > +#define call_int_hook(FUNC, IRC, ...) \
> > + ({ \
> > + __label__ out; \
> > + int RC = IRC; \
> > + do { \
> > + LSM_UNROLL(__CALL_STATIC_INT, RC, FUNC, __VA_ARGS__) \
> > + \
> > + } while (0); \
>
> Then this becomes:
>
> ({
> int RC = IRC;
> LSM_UNROLL(__CALL_STATIC_INT, RC, FUNC, __VA_ARGS__);
> out:
> RC;
> })
>
> > +#define security_for_each_hook(scall, NAME, expression) \
> > + for (scall = static_calls_table.NAME; \
> > + scall - static_calls_table.NAME < MAX_LSM_COUNT; scall++) { \
> > + if (!static_key_enabled(scall->enabled_key)) \
> > + continue; \
> > + (expression); \
> > + }
>
> Why isn't this using static_branch_enabled()? I would expect this to be:
I am guessing you mean static_branch_likely(). I tried using
static_branch_likely here and it does not work: the key is now reached
through a loop counter, and static_branch_likely needs it at compile
time, so one has to resort to the underlying static_key implementation.
Using the static_branch variant here fails with:
./arch/x86/include/asm/jump_label.h:27:20: error: invalid operand for
inline asm constraint 'i'
The operand needs to be an immediate (i.e. known at build time) so that
the branch can be patched at runtime. I think that's okay; we are
already not doing any optimization here, as we still have the indirect
call (see the sketch below).
TL;DR: static_branch_likely() cannot depend on runtime computations.
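Roughly how the two cases look (a sketch, paraphrasing the hunks
above):

	/*
	 * Fine: HOOK and NUM are expanded by the unroll macros at compile
	 * time, so the key is a specific symbol the jump label can patch.
	 */
	if (static_branch_unlikely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM)))
		static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__);

	/*
	 * Not possible as a static branch: scall->enabled_key is a pointer
	 * that changes every loop iteration, so there is no immediate
	 * operand for the asm goto to patch; a plain runtime check is the
	 * fallback.
	 */
	if (static_key_enabled(scall->enabled_key))
		rc = scall->hl->hook.vm_enough_memory(mm, pages);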
>
> #define security_for_each_hook(scall, NAME) \
> for (scall = static_calls_table.NAME; \
> scall - static_calls_table.NAME < MAX_LSM_COUNT; scall++) \
> if (static_branch_likely(scall->enabled_key))
I like this macro; it does make the code easier to read, thanks.
>
> >
> > /* Security operations */
> >
> > @@ -859,7 +924,7 @@ int security_settime64(const struct timespec64 *ts, const struct timezone *tz)
> >
> > int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
> > {
> > - struct security_hook_list *hp;
> > + struct lsm_static_call *scall;
> > int cap_sys_admin = 1;
> > int rc;
> >
> > @@ -870,13 +935,13 @@ int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
> > * agree that it should be set it will. If any module
> > * thinks it should not be set it won't.
> > */
> > - hlist_for_each_entry(hp, &security_hook_heads.vm_enough_memory, list) {
> > - rc = hp->hook.vm_enough_memory(mm, pages);
> > + security_for_each_hook(scall, vm_enough_memory, ({
> > + rc = scall->hl->hook.vm_enough_memory(mm, pages);
> > if (rc <= 0) {
> > cap_sys_admin = 0;
> > break;
> > }
> > - }
> > + }));
>
> Then these replacements don't look weird. This would just be:
>
> security_for_each_hook(scall, vm_enough_memory) {
> rc = scall->hl->hook.vm_enough_memory(mm, pages);
> if (rc <= 0) {
> cap_sys_admin = 0;
> break;
> }
> }
Agreed, thanks!
>
> I'm excited to have this. The speed improvements are pretty nice.
Yay!
>
> --
> Kees Cook