[PATCH v6 5/5] kernfs: initialize security of newly created nodes
Casey Schaufler
casey at schaufler-ca.com
Tue Feb 19 16:43:05 UTC 2019
On 2/19/2019 6:10 AM, Ondrej Mosnacek wrote:
> On Tue, Feb 19, 2019 at 1:28 AM Casey Schaufler <casey at schaufler-ca.com> wrote:
>> On 2/18/2019 2:03 AM, Ondrej Mosnacek wrote:
>>> On Fri, Feb 15, 2019 at 4:50 PM Tejun Heo <tj at kernel.org> wrote:
>>>> On Fri, Feb 15, 2019 at 04:45:44PM +0100, Ondrej Mosnacek wrote:
>>>>> On Thu, Feb 14, 2019 at 4:49 PM Tejun Heo <tj at kernel.org> wrote:
>>>>>> On Thu, Feb 14, 2019 at 10:50:15AM +0100, Ondrej Mosnacek wrote:
>>>>>>> +static int kernfs_node_init_security(struct kernfs_node *parent,
>>>>>>> +				     struct kernfs_node *kn)
>>>>>> Can we skip the whole thing if security is not enabled?
>>>>> Do you mean just skipping the whole part when CONFIG_SECURITY=n? That
>>>>> is easy to do and I can add it in the next respin (although the
>>>>> compiler should be able to optimize most of it out in that case).
>>>> So the goal is allowing folks who don't use this to not pay. It'd be
>>>> better if the evaluation can happen as late as possible, but obviously
>>>> there's a point where that'd be too complicated. Maybe "ever enabled in
>>>> this boot" is good and simple enough at the same time?
>>> I don't think there is currently a way to check whether some LSM has
>>> been enabled at boot or not. I suppose we could add such a function for
>>> this kind of heuristic, but I'm not sure how it would interplay with
>>> the plans to allow multiple LSMs to be enabled simultaneously...
>>> Perhaps it would be better/easier to just add a
>>> security_kernfs_needs_init() function, which would simply check whether
>>> the list of registered kernfs_init_security hooks is empty.
>>>
>>> I propose something like the patch below (the whitespace is mangled -
>>> intended just for visual review). I plan to fold it into the next
>>> respin if there are no objections to this approach.
>>>
>>> diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
>>> index 735a6d382d9d..5b99205da919 100644
>>> --- a/fs/kernfs/dir.c
>>> +++ b/fs/kernfs/dir.c
>>> @@ -625,6 +625,9 @@ static int kernfs_node_init_security(struct kernfs_node *parent,
>>>  	struct qstr q;
>>>  	int ret;
>>>
>>> +	if (!security_kernfs_needs_init() || !parent)
>>> +		return 0;
>>> +
>>>  	if (!parent->iattr) {
>>>  		kernfs_iattr_init(&iattr_parent, parent);
>>>  		simple_xattrs_init(&xattr_parent);
>>> @@ -720,11 +723,9 @@ static struct kernfs_node *__kernfs_new_node(struct kernfs_root *root,
>>>  		goto err_out3;
>>>  	}
>>>
>>> -	if (parent) {
>>> -		ret = kernfs_node_init_security(parent, kn);
>>> -		if (ret)
>>> -			goto err_out3;
>>> -	}
>>> +	ret = kernfs_node_init_security(parent, kn);
>>> +	if (ret)
>>> +		goto err_out3;
>>>
>>>  	return kn;
>>>
>>> diff --git a/include/linux/security.h b/include/linux/security.h
>>> index 581944d1e61e..49a083dbc464 100644
>>> --- a/include/linux/security.h
>>> +++ b/include/linux/security.h
>>> @@ -292,6 +292,7 @@ int security_inode_listsecurity(struct inode *inode, char *buffer, size_t buffer
>>>  void security_inode_getsecid(struct inode *inode, u32 *secid);
>>>  int security_inode_copy_up(struct dentry *src, struct cred **new);
>>>  int security_inode_copy_up_xattr(const char *name);
>>> +int security_kernfs_needs_init(void);
>>>  int security_kernfs_init_security(const struct qstr *qstr,
>>>  				  const struct iattr *dir_iattr,
>>>  				  struct simple_xattrs *dir_secattr,
>>> @@ -789,6 +790,11 @@ static inline int security_inode_copy_up(struct dentry *src, struct cred **new)
>>>  	return 0;
>>>  }
>>>
>>> +static inline int security_kernfs_needs_init(void)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>>  static inline int security_kernfs_init_security(
>>>  	const struct qstr *qstr, const struct iattr *dir_iattr,
>>>  	struct simple_xattrs *dir_secattr, const struct iattr *iattr,
>>> diff --git a/security/security.c b/security/security.c
>>> index 836e0822874a..3c8b9b5baabc 100644
>>> --- a/security/security.c
>>> +++ b/security/security.c
>>> @@ -892,6 +892,11 @@ int security_inode_copy_up_xattr(const char *name)
>>> }
>>> EXPORT_SYMBOL(security_inode_copy_up_xattr);
>>>
>>> +int security_kernfs_needs_init(void)
>>> +{
>>> +	return !hlist_empty(&security_hook_heads.kernfs_init_security);
>>> +}
>>> +
>> Yuck. That's an awful lot of infrastructure just to track
>> that state. May I suggest that, instead, you have the
>> security_kernfs_init_security() hook return -EOPNOTSUPP
>> in the no-LSM case (the 2nd argument to call_int_hook)? You could
>> then have a state flag in kernfs that you set to indicate
>> you don't need to call security_kernfs_init_security() again.
> Well, maintaining a global variable sounds even more yucky to me...
The state you're maintaining is kernfs state, not LSM
infrastructure state. The state should be maintained in
kernfs, not in the LSM infrastructure.
> And I don't understand why you'd consider a simple one-line function
> to be "an awful lot of infrastructure" :)
That is because you haven't been working with the LSM
infrastructure very long. If you'll recall, I have already
objected to kernfs specific LSM hooks. Now, when you find
that your approach for using a hook has issues, you want
to add another function that does nothing but tell you
that there's nothing to do. You can get that by calling
the security_kernfs_init_security() hook. The LSM infrastructure
is set up to be as painless as possible when you don't
want to use it.
> But at the end of the day it
> is up to the maintainers - Greg/Tejun and James/Serge (who I forgot to
> Cc on these patches, sorry)
Yes, James and Serge are the maintainers for the
security sub-system, but I have done the transition
from hook vector to hook lists and am currently doing
the transition from module to infrastructure blob management.
I know it better, and have more invested in it, than anyone
else just now.
> - what works better for them.
Kernfs is an important component of the kernel. So is
the security infrastructure. I would hope you don't want
to turn this into a contest to see which maintainer has
the biggest clout.