[GIT PULL] lsm/lsm-pr-20240911
Casey Schaufler
casey at schaufler-ca.com
Mon Sep 30 16:50:40 UTC 2024
On 9/30/2024 3:53 AM, Dr. Greg wrote:
> On Fri, Sep 27, 2024 at 09:33:19AM -0700, Casey Schaufler wrote:
>
> Good morning Casey, always good to get your reflections, we hope your
> week is starting well.
>
>> On 9/27/2024 1:58 AM, Dr. Greg wrote:
>>> From a security perspective, Linux will benefit from providing a
>>> better means to serve a middle ground where alternate security models
>>> and architectures can be implemented without building a kernel from
>>> scratch.
>> Ye Gads.
> That certainly dates both of us, the last time I heard that phrase it
> was from Thurston Howell III....
>
>> One can create SELinux policy to support just about any security
>> model you can think of, although I was the first to decry its
>> complexity. Smack access rules can be configured to support a wide
>> variety of models, including Bell & LaPadula, Biba and rings of
>> trust. AppArmor is very useful for targeted application security
>> model enforcement. And then there's BPF.
>>
>> It seems to me that the problem isn't with the facilities provided
>> to support the implementation of new security models, but with the
>> definition of those security models. Or rather, the lack
>> thereof. The ancient Bell & LaPadula sensitivity model can be
>> implemented using Smack rules because it is sufficiently well
>> defined. If the end user can define their policy as clearly as
>> Bell & LaPadula does, it's a slam dunk for any of the aforementioned LSMs.
> We certainly wouldn't choose to argue with any of this, given your
> repertoire in the field of mandatory access controls and security
> models.
>
> But therein lies the rub with respect to the implementation of system
> security.
>
> There are, what, maybe 5-6 people in the world like yourself who have
> the technical chops to translate the theoretical expressiveness you
> describe above into functional, let alone maintainable, security
> implementations?
Flattering, but a touch off the mark. In the 1980's there were at least
a dozen UNIX systems that implemented "B1", "B2" and/or "Compartmented
Mode Workstation" systems. Five of those even got NSA evaluation certificates.
There's plenty of expertise floating around today. The tools for doing
security analysis have moved from "grep" to "AI". The original TCB definition
for one system was done on paper, with a yellow highlighter, in a bar in
Cambridge.
Really, it's not that hard. It's messy and unpleasant and you learn things
you don't want to know. You find a lot of bugs and discover all kinds of
software behaviors that should never have been introduced. You become
quite unpopular with peers who have other interests. We were "lucky" in
the 1980's to have a US government executive order that drove security
into operating systems, giving us the rationale for making changes. What's
difficult today is justifying the effort.
> If there were the ability to practically implement just about any
> security model with SELinux, there would be no need for the LSM,
The SELinux team did in fact propose removing the LSM infrastructure
and making their code the official extended security mechanism.
> yet
> it exists, given the desire to support multiple security
> schemes. That alone would seem to suggest a lack of the
> technical prowess required to translate theoretical
> expressiveness into practical implementations.
Nah, the technical prowess is there. The financial backing isn't.
Besides, it's a *lot* more fun to write a filesystem than an SELinux
policy (or set of Smack rules) for a distribution.
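Not that the rules themselves are hard to write. A toy Bell & LaPadula
skeleton in Smack is roughly this, ignoring the special labels and
using made-up label names; read-down is granted explicitly and
write-down falls to the default deny between unequal labels:

    # hypothetical levels: Public < Secret < TopSecret
    # grant read-down only; Smack allows nothing else across labels
    echo "Secret    Public  r" > /sys/fs/smackfs/load2
    echo "TopSecret Public  r" > /sys/fs/smackfs/load2
    echo "TopSecret Secret  r" > /sys/fs/smackfs/load2

Add "w" rules in the other direction if you want the blind write-up of
the strict *-property. The hard part is deciding what the labels mean
for a real distribution, not typing the rules.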
> A primary challenge to security is scale of skill.
>
> In the face of limited advanced security skills, we have hundreds of
> thousands of people around the world creating and modifying millions
> of workloads, on a daily basis.
Sure. Everyone uses their front door. Very few are locksmiths.
> I mentioned just recently, in a meeting with technical influencers
> here in the Great State of North Dakota, that we are never going to
> train our way out of this security problem.
>
> Cisco recognized this with network security and this fact was central
> to the concept of its Application Centric Infrastructure (ACI). With
> respect to scale, ACI is based on the premise that the manageability
> of network security has to be an artifact of the development process.
>
> One of the motivations behind TSEM is to deliver that same concept to
> system security. The notion is to allow development teams to create a
> customized, bounded and mandatorily enforced security behavior,
> specific to a workload, as an extension of the development process.
SELinux + audit2allow. CONFIG_SECURITY_SMACK_BRINGUP.
These are helpful, but you're not going to get away without applying
some real brain power to your security model.
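For concreteness, the generation loop is roughly (module name invented
for the example):

    # collect recent AVC denials and turn them into an allow module
    ausearch -m avc -ts recent | audit2allow -M mylocal
    # install the generated policy package
    semodule -i mylocal.pp

Exercise the workload, collect the denials, generate, load, repeat
until it stops complaining.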
And that's my point. Generated security models are crap. The best
they can accomplish is to notice when a system changes behavior.
Sometimes that's a security problem, and sometimes it is the system
responding to anticipated changes, such as the phase of the moon.
The NSA once told me that "A system is secure if it does what it is
supposed to do, and nothing else". If you can't say in advance what
the system is supposed to do, you can't determine if it is secure.
> Another tool in the 'Secure By Design' toolbox. A concept that
> entities like NIST, DHS/CISA and particularly the insurance companies
> are going to force the industry to translate into practice,
> especially in critical infrastructure systems.
>
> Have a good week.
>
> As always,
> Dr. Greg
>
> The Quixote Project - Flailing at the Travails of Cybersecurity
> https://github.com/Quixote-Project
>