Perf Data on LSM in v5.3

Stephen Smalley sds at tycho.nsa.gov
Fri Jan 24 14:57:25 UTC 2020


On 1/15/20 11:04 AM, Stephen Smalley wrote:
> On 1/15/20 10:59 AM, Wenhui Zhang wrote:
>>
>>
>> On Wed, Jan 15, 2020 at 10:41 AM Stephen Smalley <sds at tycho.nsa.gov> wrote:
>>
>>     On 1/15/20 10:34 AM, Stephen Smalley wrote:
>>      > On 1/15/20 10:21 AM, Wenhui Zhang wrote:
>>      >>
>>      >> On Wed, Jan 15, 2020 at 9:08 AM Stephen Smalley <sds at tycho.nsa.gov> wrote:
>>      >>
>>      >>     On 1/15/20 8:40 AM, Stephen Smalley wrote:
>>      >>      > On 1/14/20 8:00 PM, Wenhui Zhang wrote:
>>      >>      >> Hi, John:
>>      >>      >>
>>      >>      >> It seems that the MAC hooks default to *return 0 or empty
>>      >>      >> void hooks* if CONFIG_SECURITY, CONFIG_SECURITY_NETWORK,
>>      >>      >> CONFIG_PAGE_TABLE_ISOLATION, CONFIG_SECURITY_INFINIBAND,
>>      >>      >> CONFIG_SECURITY_PATH, CONFIG_INTEL_TXT,
>>      >>      >> CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR,
>>      >>      >> CONFIG_HARDENED_USERCOPY, and CONFIG_HARDENED_USERCOPY_FALLBACK
>>      >>      >> are *NOT set*.
>>      >>      >>
>>      >>      >> If the hooks are just "return 0 or empty void" stubs, MAC is
>>      >>      >> not enabled.  At runtime of the fs benchmarks, if
>>      >>      >> CONFIG_DEFAULT_SECURITY_DAC=y, then only capability checking
>>      >>      >> is enabled.
>>      >>      >>
>>      >>      >> Please correct me if I am wrong.
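
For reference, that default looks roughly like the sketch below; this is a
paraphrase of the pattern used in include/linux/security.h, not the literal
header, and exact hook signatures vary by kernel version:

    /* Illustration only: with CONFIG_SECURITY unset, the security_*
     * wrappers compile down to no-op static inline stubs, leaving only
     * the usual DAC/capability checks done elsewhere in the kernel.
     */
    struct file;    /* opaque stand-in for the kernel's struct file */

    #ifdef CONFIG_SECURITY
    int security_file_permission(struct file *file, int mask);
    #else
    static inline int security_file_permission(struct file *file, int mask)
    {
            return 0;    /* no MAC check is performed */
    }
    #endif
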
>>      >>      >>
>>      >>      >> For the first test, wo-sec is tested with:
>>      >>      >> # CONFIG_SECURITY_DMESG_RESTRICT is not set
>>      >>      >> # CONFIG_SECURITY is not set
>>      >>      >> # CONFIG_SECURITYFS is not set
>>      >>      >> # CONFIG_PAGE_TABLE_ISOLATION is not set
>>      >>      >> # CONFIG_INTEL_TXT is not set
>>      >>      >> CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
>>      >>      >> # CONFIG_HARDENED_USERCOPY is not set
>>      >>      >> CONFIG_FORTIFY_SOURCE=y
>>      >>      >> # CONFIG_STATIC_USERMODEHELPER is not set
>>      >>      >> CONFIG_DEFAULT_SECURITY_DAC=y
>>      >>      >>
>>      >>      >>
>>      >>      >> For the second test, w-sec is tested with:
>>      >>      >> # CONFIG_SECURITY_DMESG_RESTRICT is not set
>>      >>      >> CONFIG_SECURITY=y
>>      >>      >> CONFIG_SECURITYFS=y
>>      >>      >> # CONFIG_SECURITY_NETWORK is not set
>>      >>      >> CONFIG_PAGE_TABLE_ISOLATION=y
>>      >>      >> CONFIG_SECURITY_INFINIBAND=y
>>      >>      >> CONFIG_SECURITY_PATH=y
>>      >>      >> CONFIG_INTEL_TXT=y
>>      >>      >> CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
>>      >>      >> CONFIG_HARDENED_USERCOPY=y
>>      >>      >> CONFIG_HARDENED_USERCOPY_FALLBACK=y
>>      >>      >> # CONFIG_HARDENED_USERCOPY_PAGESPAN is not set
>>      >>      >> CONFIG_FORTIFY_SOURCE=y
>>      >>      >> # CONFIG_STATIC_USERMODEHELPER is not set
>>      >>      >> # CONFIG_SECURITY_SMACK is not set
>>      >>      >> # CONFIG_SECURITY_TOMOYO is not set
>>      >>      >> # CONFIG_SECURITY_APPARMOR is not set
>>      >>      >> # CONFIG_SECURITY_LOADPIN is not set
>>      >>      >> # CONFIG_SECURITY_YAMA is not set
>>      >>      >> # CONFIG_SECURITY_SAFESETID is not set
>>      >>      >> # CONFIG_INTEGRITY is not set
>>      >>      >> CONFIG_DEFAULT_SECURITY_DAC=y
>>      >>      >> #
>>      >>      >> CONFIG_LSM="yama,loadpin,safesetid,integrity,apparmor,selinux,smack,tomoyo"
>>      >>      >>
>>      >>      >
>>      >>      > Your configs should only differ with respect to
>>      >>      > CONFIG_SECURITY* if you want to evaluate LSM, SELinux, etc.
>>      >>      > overheads.  PAGE_TABLE_ISOLATION, INTEL_TXT, and
>>      >>      > HARDENED_USERCOPY are not relevant to LSM itself.
>>      >>      >
>>      >>      > Also, what benchmarks are you using?  Your own home-grown
>>      >>      > ones, or a set of standard open-source benchmarks (if so,
>>      >>      > which ones)?  You should include both micro and macro
>>      >>      > benchmarks in your suite.
>>      >>      >
>>      >>      > How stable are your results?  What kind of variance /
>>      >>      > standard deviation are you seeing?
>>      >>      >
>>      >>      > It is hard to get meaningful, reliable performance
>>      >>      > measurements, so going down this road is not to be done
>>      >>      > lightly.
>>      >>
>>      >>     Also, I note that you don't enable CONFIG_SECURITY_NETWORK
>>      >>     above.  That means you aren't including the base LSM overhead
>>      >>     for the networking security hooks.  So if you then compare
>>      >>     that against SELinux (which requires CONFIG_SECURITY_NETWORK),
>>      >>     you are going to end up attributing the cost of both the LSM
>>      >>     overhead and SELinux overhead all to SELinux.  If you truly
>>      >>     want to isolate the base LSM overhead, you need to enable all
>>      >>     the hooks.
>>      >>
>>      >> I will try enabling CONFIG_SECURITY_NETWORK later this week;
>>      >> however, I wonder whether it would affect the test results that
>>      >> much.  I am testing with LMbench 2.5, focusing on the filesystem
>>      >> unit tests rather than the network stack at this time.
>>      >> My understanding of why this result is so different from the
>>      >> paper 20 years ago is that the bottleneck has changed.
>>      >> Chris was testing with 4 cores at 700 MHz each and 128 MB of
>>      >> memory, with an HDD (latency about 20,000,000 ns for a sequential
>>      >> read).  The bottleneck of accessing files with MAC was mostly I/O.
>>      >> The hardware setup is different now: we have much larger and
>>      >> faster memory (with better prefetching as well) and SSDs (latency
>>      >> about 49,000 ns for a sequential read), while CPU speed has not
>>      >> improved as much as I/O has.  The bottleneck of accessing files
>>      >> with MAC is mostly the CPU now.
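
(For scale, taking those figures at face value: 20,000,000 ns / 49,000 ns
is roughly a 400x reduction in storage latency, a much larger improvement
than single-thread CPU performance has seen over the same period, which
is consistent with the bottleneck shifting from I/O toward the CPU.)
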
>>      > Don't know if lmbench is still a good benchmark, and I recall
>>      > struggling with it even back then to get stable results.
>>      >
>>      > Could be bottleneck changes, could be the fact that your kernel
>>      > config changes aren't limited to CONFIG_SECURITY* (e.g. PTI
>>      > introduces non-trivial overheads), could be changes to LSM since
>>      > that time (e.g. stacking support, moving security_ calls
>>      > out-of-line, more hooks, ...), could be that running SELinux w/o
>>      > policy is flooding the system logs with warnings or other messages
>>      > since it wasn't really designed to be used that way past
>>      > initialization.  Lots of options; can't tell without more details
>>      > on your setup.
>>
>>     I'd think that these days one would leverage perf and/or lkp for
>>     Linux kernel performance measurements, not lmbench.
>>
>>
>> Thanks so much, I will give lkp a try and let you know how it goes.
>> We should have the results this weekend or later next week.
> 
> Ok, please make sure your kernel configs are truly comparable (i.e. no 
> differences other than the right set of CONFIG_SECURITY* options), that 
> all of the same LSM hooks are enabled for comparing LSM-only versus 
> SELinux (i.e. CONFIG_SECURITY and CONFIG_SECURITY_NETWORK both enabled), 
> and consider using a distribution that actually supports SELinux out of 
> the box (e.g. Fedora) so that you can properly test SELinux with a 
> policy loaded in enforcing mode.  Similarly if you want to do the same 
> for AppArmor, except for it you'll need to enable CONFIG_SECURITY_PATH 
> as well for the pathname-based hooks and you'll want to use Ubuntu or 
> latest Debian to get a working policy.
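
(Concretely, and only as a hypothetical illustration of "truly comparable":
the SELinux kernel would add just CONFIG_SECURITY=y,
CONFIG_SECURITY_NETWORK=y, and CONFIG_SECURITY_SELINUX=y on top of the
baseline config, and the AppArmor kernel just CONFIG_SECURITY=y,
CONFIG_SECURITY_NETWORK=y, CONFIG_SECURITY_PATH=y, and
CONFIG_SECURITY_APPARMOR=y, plus whatever CONFIG_LSM /
CONFIG_DEFAULT_SECURITY_* setting selects each module at boot, with
everything else held identical.)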

One last point that I should have mentioned: you should likely run your 
benchmarks both under an unconfined profile/domain and under a confined 
profile/domain (the latter could be one that you define specifically for 
your benchmark that just allows it everything) at least for AppArmor. 
The reason is that AppArmor has an intrinsic concept of unconfined since 
it was designed for targeted enforcement and its hook functions directly 
test for the unconfined label early and skip permission checking in that 
case, so if you only collect data while running benchmarks unconfined, 
you won't see the real overheads imposed on a confined profile/domain. 
In contrast, SELinux has no intrinsic concept of unconfined since it was 
designed for full system confinement (as applied in "strict" 
configurations and in Android); if there is an unconfined domain, it is 
merely defined through policy (i.e. there must be explicit allow rules 
allowing it everything) and the hook functions invoke the same 
permission checking for both unconfined and confined domains alike.
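
To make that contrast concrete, here is a schematic sketch (illustrative C
with made-up helper names; it is not the actual code under
security/apparmor/ or security/selinux/):

    /* Illustration only: "intrinsic unconfined" short-circuit vs. always
     * running the permission computation.  All names here are stand-ins.
     */
    struct cred { int unconfined; };    /* not the kernel's struct cred */

    static int profile_permission_check(const struct cred *c) { (void)c; return 0; }
    static int avc_permission_check(const struct cred *c)     { (void)c; return 0; }

    /* AppArmor-style hook: an unconfined label skips mediation entirely,
     * so unconfined benchmarks never pay the permission-check cost. */
    static int apparmor_style_hook(const struct cred *cred)
    {
            if (cred->unconfined)
                    return 0;
            return profile_permission_check(cred);
    }

    /* SELinux-style hook: confined and "unconfined" domains take the same
     * path; unconfined exists only if policy explicitly allows everything. */
    static int selinux_style_hook(const struct cred *cred)
    {
            return avc_permission_check(cred);
    }

In other words, unconfined-only runs exercise the first pattern's fast
path rather than the cost a confined profile would actually pay.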


