LSM stacking in next for 6.1?

Casey Schaufler casey at schaufler-ca.com
Wed Sep 7 23:26:38 UTC 2022


On 9/7/2022 4:04 PM, Paul Moore wrote:
> On Wed, Sep 7, 2022 at 1:08 PM Casey Schaufler <casey at schaufler-ca.com> wrote:
>> On 9/7/2022 8:13 AM, Paul Moore wrote:
>>> On Tue, Sep 6, 2022 at 8:31 PM Casey Schaufler <casey at schaufler-ca.com> wrote:
>>>> On 9/6/2022 4:24 PM, Paul Moore wrote:
>>>>> On Fri, Sep 2, 2022 at 7:14 PM Casey Schaufler <casey at schaufler-ca.com> wrote:
>>>>>> On 9/2/2022 2:30 PM, Paul Moore wrote:
>>>>>>> On Tue, Aug 2, 2022 at 8:56 PM Paul Moore <paul at paul-moore.com> wrote:
>>>>>>>> On Tue, Aug 2, 2022 at 8:01 PM Casey Schaufler <casey at schaufler-ca.com> wrote:
>>>>>>>>> I would like very much to get v38 or v39 of the LSM stacking for Apparmor
>>>>>>>>> patch set in the LSM next branch for 6.1. The audit changes have polished
>>>>>>>>> up nicely and I believe that all comments on the integrity code have been
>>>>>>>>> addressed. The interface_lsm mechanism has been beaten to a frothy peak.
>>>>>>>>> There are serious binder changes, but I think they address issues beyond
>>>>>>>>> the needs of stacking. Changes outside these areas are pretty well limited
>>>>>>>>> to LSM interface improvements.
>>>>>>>> The LSM stacking patches are near the very top of my list to review
>>>>>>>> once the merge window clears, the io_uring fixes are in (bug fix), and
>>>>>>>> SCTP is somewhat sane again (bug fix).  I'm hopeful that the io_uring
>>>>>>>> and SCTP stuff can be finished up in the next week or two.
>>>>>>>>
>>>>>>>> Since I'm the designated first stuckee now for the stacking stuff I
>>>>>>>> want to go back through everything with fresh eyes, which probably
>>>>>>>> isn't a bad idea since it has been a while since I looked at the full
>>>>>>>> patchset from bottom to top.  I can tell you that I've never been
>>>>>>>> really excited about the /proc changes, and believe it or not I've
>>>>>>>> been thinking about those a fair amount since James asked me to start
>>>>>>>> maintaining the LSM.  I don't want to get into any detail until I've
>>>>>>>> had a chance to look over everything again, but just a heads-up that
>>>>>>>> I'm not too excited about those bits.
>>>>>>> As I mentioned above, I don't really like the stuff that one has to do
>>>>>>> to support LSM stacking on the existing /proc interfaces; the
>>>>>>> "label1\0label2\0labelN\0" hack is probably the best (only?) option we
>>>>>>> have for retrofitting multiple LSMs into those interfaces, and I think
>>>>>>> we can all agree it's not a great API.  Considering that applications
>>>>>>> that wish to become simultaneously multi-LSM aware are going to need
>>>>>>> modification anyway, let's take a step back and see if we can do this
>>>>>>> with a more sensible API.
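For illustration only, here is a rough userspace sketch of how an
application might consume a NUL-separated multi-label value of that
form when reading /proc/self/attr/current. The encoding, the ordering
of labels, and even whether this file is the right place for it are
exactly what is being debated here, so treat every detail below as an
assumption rather than a proposed API:

/*
 * Rough sketch only: split a "label1\0label2\0labelN\0" style value
 * read from /proc/self/attr/current into individual labels.  The
 * encoding shown is the retrofit format under discussion, not a
 * settled interface.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t len;
	int fd;

	fd = open("/proc/self/attr/current", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	len = read(fd, buf, sizeof(buf) - 1);
	close(fd);
	if (len <= 0) {
		perror("read");
		return 1;
	}
	buf[len] = '\0';

	/* Each NUL-terminated chunk within the buffer is one LSM's label. */
	for (char *p = buf; p < buf + len; p += strlen(p) + 1)
		printf("label: %s\n", p);

	return 0;
}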
>>>>>> This is a compound problem. Some applications, including systemd and dbus,
>>>>>> will require modification to completely support multiple concurrent LSMs
>>>>>> in the long term. This will certainly be the case should someone be wild
>>>>>> and crazy enough to use Smack and SELinux together. Even in the (Smack or
>>>>>> SELinux) and AppArmor case, the ps(1) command should be educated about the
>>>>>> possibility of multiple "current" values. However, in a container world,
>>>>>> where an Android container can run on an Ubuntu system, the presence of
>>>>>> AppArmor on the base system is completely uninteresting to the SELinux
>>>>>> aware applications in the container. This is a real use case.
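As an aside, a tool like ps(1) that wants to handle multiple "current"
values first has to know which LSMs are active on the running kernel.
A minimal sketch of that discovery step, assuming securityfs is mounted
at its conventional location, might look like:

/*
 * Sketch: read the comma-separated list of active LSMs from
 * /sys/kernel/security/lsm before deciding how to interpret per-task
 * attributes.  Assumes securityfs is mounted at /sys/kernel/security.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/sys/kernel/security/lsm", "r");

	if (!f) {
		perror("/sys/kernel/security/lsm");
		return 1;
	}
	if (!fgets(line, sizeof(line), f)) {
		fclose(f);
		return 1;
	}
	fclose(f);
	line[strcspn(line, "\n")] = '\0';

	/* The file holds a comma-separated list, e.g. "capability,apparmor". */
	for (char *tok = strtok(line, ","); tok; tok = strtok(NULL, ","))
		printf("active LSM: %s\n", tok);

	return 0;
}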
>>>>> If you are running AppArmor on the host system and SELinux in a
>>>>> container you are likely going to have some *very* bizarre behavior as
>>>>> the SELinux policy you load in the container will apply to the entire
>>>>> system, including processes which started *before* the SELinux policy
>>>>> was loaded.  While I understand the point you are trying to make, I
>>>>> don't believe the example you chose is going to work without a lot of
>>>>> other changes.
>>>> I don't use it myself, but I know it's frighteningly popular.
>>> All right, I'm going to call your bluff here - who are these people
>>> running AppArmor on the host and SELinux in a container?  What policy
>>> are they using, it's surely not an unmodified Fedora/RHEL or upstream
>>> refpol policy?  Do they run in enforcing mode without massive
>>> permissions granted to kernel_t (I'm guessing all of the host
>>> applications would appear as kernel_t)?  How do you handle multiple
>>> SELinux containers?
>> Beats me. All that SELinux policy stuff is over my head. ;)
>>
>> Seriously, once they got the stacking patches applied they thanked
>> me for the help and disappeared until they decided to update the
>> kernel version and asked for help with the next round of patches.
>> They told me what they wanted to do, which was to run Android in
>> a container, but how they accomplished it was a set of details they
>> didn't share. I assume that you are right that they had to do
>> horrible things to either AppArmor or SELinux policy, or maybe both.
>> I also assume they wanted this as an environment to develop Android
>> applications, and may not have cared much about actual enforcement.
>> But they are happy users.
>>
>>> I'm aware of *one* use case where SELinux is run in a container and
>>> that required a number of careful constraints on the use case and a
>>> good deal of hacking to enable.  I'm sure there are definitely people
>>> that *want* this, especially in the context of Ubuntu, but I really
>>> doubt this is in widespread use today.
>> What I know is that there is a community out there using it. I think
>> you're right that the way they're using it would be displeasing to
>> most of us.
> Based on other comments in this thread it doesn't appear that there is
> anyone using it,

Let's just discard that use case then.

>  or at least not a significant percentage of users.

Sure.

>   I
> get that sometimes we need to interpolate/extrapolate a bit to
> understand what users are actually doing, especially with certain
> security-focused users, but I think you extrapolated (or assumed) a
> bit too much in this case.

Let's just assume that.

>   Please be more clear when you are
> speculating in the future; there may be folks reading these mailing
> lists that don't have the background or understanding to tell
> assumptions from actual truth.

I erred in citing an example for which I am not in a position to provide
supporting detail. My bad.



