[PATCH v4 2/14] Add TSEM specific documentation.

Casey Schaufler casey at schaufler-ca.com
Thu Feb 27 16:47:43 UTC 2025


On 2/27/2025 4:12 AM, Dr. Greg wrote:
> On Tue, Feb 25, 2025 at 07:48:31AM -0800, Casey Schaufler wrote:
>
> Good morning, I hope this note finds the week going well for everyone.
>
>> On 2/25/2025 4:01 AM, Dr. Greg wrote:
>>> On Tue, Jan 28, 2025 at 05:23:52PM -0500, Paul Moore wrote:
>>>
>>> For the record, further documentation of our replies to TSEM technical
>>> issues.
>>>
>>> ...
>>>
>>> Further, TSEM is formulated on the premise that software teams,
>>> as a by-product of CI/CD automation and testing, can develop precise
>>> descriptions of the security behavior of their workloads.
>> I've said it before, and I'll say it again. This premise is
>> hopelessly naive. If it were workable you'd be able to use SELinux
>> and audit2allow to create perfect security, and it would have been
>> done 15 years ago. The whole idea that you can glean what a
>> software system is *supposed* to do from what it *does* flies
>> completely in the face of basic security principles.
> You view our work as hopelessly naive because you, and perhaps others,
> view it through a 45+ year old lens of classic subject/object
> mandatory controls that possess only limited dimensionality.

I view your work as hopelessly naive because I've seen the basic idea
fail spectacularly so many times. That includes things I have written,
such as the Datastate LSM.

... and don't play the stodgy old fart card on me. I've been working
on making the LSM more available to new security models for years.

> We view it through a lens of 10+ years of developing new multi-scale
> methods for modeling alpha2-adrenergic receptor antagonists... :-)

Which is relevant how?

> We don't offer this observation just in jest.  If people don't
> understand what we mean by this, they should consider the impact that
> Singular Value Decomposition methods had when they were brought over
> from engineering and applied to machine learning and classification.
>
> A quote from John von Neumann, circa 1949, would seem appropriate:
>
> "It would appear that we have reached the limits of what is
>  possible to achieve with computer technology, although one should be
>  careful with such statements, as they tend to sound pretty silly in 5
>  years."

New good ideas can shatter old conceptions. Old bad ideas with a fresh
coat of paint and impressive new terminology fail to impress.


> If anyone spends time understanding the generative functions that we
> are using, particularly the task identity model, they will find that
> the coefficients that define the permitted behaviors have far more
> specificity, with respect to classifying what a system is *supposed*
> to do, than the two, possibly three dimensions of classic
> subject/object controls.

Squirrels are funny rodents. If you model their behavior you will declare
that they are herbivores. In California (where many strange and wonderful
things happen) squirrels have begun to eat voles, a very carnivorous
behavior. If you believe in modeling as a way to identify correct behavior,
you have to say that these furry creatures that eat voles are not squirrels.
If, on the other hand, you look at the environment they live in you can see
that the loss of native habitat has reduced the available fat calories to
the point where survival requires changed behavior. They're still squirrels,
and no amount of modeling is going to change that.


> More specifically to the issues you raise.
>
> Your SELinux/audit2allow analogy is flawed and isn't a relevant
> comparison to what we are implementing.  audit2allow is incapable of
> defining a closed set of allowed security behaviors that are
> *supposed* to be exhibited by a workload.
>
> The use of audit2allow only generates what can be considered as
> possible permitted exceptions to a security model, after the model has
> failed and hopefully before people have simply turned off the
> infrastructure in frustration because they needed a working system.

It's a poor workman who blames his tools. Why haven't audit and audit2allow
been enhanced to provide the information necessary to create your analysis?
I suggest that it's because the value has been recognized as unimportant.

> Unit testing of a workload under TSEM produces a closed set of high
> resolution permitted behaviors generated by the normal functioning of
> that workload, in other words all of the security behaviors that are
> exhibited when the workload is doing what it is *supposed* to do.  TSEM
> operates under default deny criteria, so if workload testing is
> insufficient in coverage, any unexpressed behaviors will be denied,
> thus blocking or alerting on any undesired security behaviors.
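The closed-set, default-deny scheme described above can be sketched in a few lines of Python. This is an illustration only; the event tuples and the `train`/`permitted` helpers are assumptions made for exposition, not TSEM's actual interfaces or its modeling functions:

```python
# Illustrative sketch of a closed-set, default-deny behavior model
# (NOT TSEM's actual API). During unit testing, every security event
# the workload generates is recorded; at enforcement time, any event
# outside that recorded set is denied.

# A "security event" is reduced here to a (hook, subject, object)
# tuple; a real model would carry many more dimensions.
TRAINING_RUN = [
    ("file_open", "webserver", "/var/www/index.html"),
    ("socket_bind", "webserver", "tcp:80"),
]

def train(events):
    """Build the closed set of permitted behaviors from unit-test output."""
    return set(events)

def permitted(model, event):
    """Default deny: only behaviors expressed during testing are allowed."""
    return event in model

model = train(TRAINING_RUN)

# A behavior expressed during testing is allowed:
assert permitted(model, ("socket_bind", "webserver", "tcp:80"))
# An unexpressed behavior, e.g. one produced by an exploit, is denied:
assert not permitted(model, ("file_open", "webserver", "/etc/shadow"))
```

The sketch also makes the coverage caveat concrete: a legitimate behavior that the unit tests never exercised falls outside the set and is denied exactly as an exploit would be.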

And how is that different from running SELinux in permissive mode?

> I believe our team is unique in these conversations in being the only
> group that has ever compiled a kernel with TSEM enabled and actually
> spent time running and testing its performance with the trust
> orchestrators and modeling tools we provide.  That includes unit
> testing of workloads and then running the models developed from those
> tests against kernels and application stacks with documented
> vulnerabilities, to determine whether the models can detect
> deviations, generated by an exploit of those vulnerabilities, from
> what the workload is *supposed* to be doing.
>
> If anyone is interested in building and testing TSEM and can
> demonstrate that security behaviors, undesired from its training set,
> can escape detection we would certainly embrace an example so we can
> review why it is occurring and integrate it into our testing and
> development framework.

Sigh. You keep coming back to a train of logic that is based on a flawed
assumption. If you accept that observed behavior describes intended
behavior the arguments that follow may be convincing. I, for one, do not
accept that assumption.

> FWIW, a final thought for those reading along at home.
>
> TSEM is not as much an LSM as it is a generic framework for driving
> mathematical models over the basis set of information provided by the
> LSM hooks.
>
> All of the above starts the conversation on deterministic models; we
> can begin arguing about the relevancy of probabilistic and
> inferential models at everyone's convenience.  The latter two will
> probably drive how the industry does security for the next 45
> years.
>
> Have a good day.
>
> As always,
> Dr. Greg
>
> The Quixote Project - Flailing at the Travails of Cybersecurity
>               https://github.com/Quixote-Project
