Unprivileged filesystem mounts

Theodore Ts'o tytso at mit.edu
Tue Mar 18 22:11:28 UTC 2025


On Tue, Mar 11, 2025 at 04:10:42PM -0400, Demi Marie Obenour wrote:
> 
> Why is it not possible to provide that guarantee?  I'm not concerned
> about infinite loops or deadlocks.  Is there a reason it is not possible
> to prevent memory corruption?

Companies and users are willing to pay to improve performance for file
systems.  (For example, we have been working with cloud services that
are interested in improving the performance of their first-party
database products, using the fact that with cloud-emulated block
devices we can guarantee that a 16k write won't be torn, and this can
result in significant database performance improvements.)

However, I have *yet* to see any company willing to invest in
hardening file systems against maliciously modified file system
images.  We can debate how much it might cost to harden a file
system, but given how much companies are willing to pay --- zero ---
it's mostly an academic question.

In addition, if someone made a file system which is guaranteed to be
safe, but it had massive performance regressions relative to other
file systems --- it's unclear how many users or system administrators
would use it.  And we've seen that --- there are known mitigations for
CPU cache attacks which are so expensive that companies or end users
have chosen not to enable them.  Yes, there are some security folks
who believe that security is the most important thing, über alles.
Unfortunately, those people tend not to be the ones writing the checks
or authorizing hiring budgets.

That being said, if someone asked me if it was the best way to invest
software development dollars --- I'd say no.  Don't get me wrong, if
someone were to give me some minions tasked to harden ext4, I know how
I could keep them busy and productive.  But a more cost-effective way
of addressing the "untrusted file system problem" would be:

(a) Run a forced fsck to check the file system for inconsistencies
before letting the file system be mounted.

(b) Mount the file system in a virtual machine, and then make it
available to the host using something like 9pfs.  9pfs is a very
simple file system which is easy to validate, and it's a strategy used
by gVisor's file system gofer.

These two approaches are complementary, with (a) being easier, and
(b) probably a bit more robust from a security perspective, but a bit
more work --- and together they provide a layered defense.  (Rough
sketches of both follow below.)
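As a rough illustration of (a), here is a minimal sketch in Python;
the image path, mount point, and mount options are hypothetical, and a
real implementation would need to pick the right fsck.* tool for the
detected file system type:

    #!/usr/bin/env python3
    # Sketch: only mount an untrusted ext4 image if a forced fsck
    # says it is clean.  Paths here are hypothetical.
    import subprocess
    import sys

    IMAGE = "/var/tmp/untrusted.img"   # hypothetical image path
    MOUNTPOINT = "/mnt/untrusted"      # hypothetical mount point

    # -f forces a full check even if the image is marked clean; -n
    # opens the file system read-only and answers "no" to all repair
    # questions.  Per fsck(8), exit status 0 means no errors were
    # found; treat anything else as untrustworthy.
    check = subprocess.run(["fsck.ext4", "-f", "-n", IMAGE])
    if check.returncode != 0:
        sys.exit("refusing to mount: fsck reported problems")

    # Mount read-only, nosuid, nodev to further limit what a hostile
    # image can do even if fsck missed something.
    subprocess.run(["mount", "-o", "loop,ro,nosuid,nodev",
                    IMAGE, MOUNTPOINT], check=True)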
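For the host side of (b), assuming a sandbox VM at a hypothetical
GUEST_IP has already loop-mounted the untrusted image internally and
is exporting it over 9p (e.g. via qemu's 9p server or diod), the host
then speaks nothing but the comparatively simple 9p protocol:

    #!/usr/bin/env python3
    # Sketch: mount the sandbox VM's 9p export on the host.  The
    # guest address and port are hypothetical.
    import subprocess

    GUEST_IP = "192.168.122.50"        # hypothetical sandbox VM
    MOUNTPOINT = "/mnt/untrusted"

    subprocess.run(["mount", "-t", "9p",
                    "-o", "trans=tcp,port=564,version=9p2000.L,ro",
                    GUEST_IP, MOUNTPOINT], check=True)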

> > In this situation, the choice of what to do *must* fall to the user,
> > but the argument for "filesystem corruption is a CVE-worthy bug" is
> > that the choice has been taken away from the user. That's what I'm
> > saying needs to change - the choice needs to be returned to the
> > user...

Users can always do stupid things.  For example, they could download
a random binary from the web, then execute it.  We've seen very
popular software which is installed via "curl <URL> | bash".  Should
we therefore declare bash to be a CVE-worthy vulnerability?

Realistically, this is probably a far bigger vulnerability if we're
talking about stupid user tricks.  ("But.... but... but... users need
to be able to install software" --- we can't stop them from piping the
output of curl into bash.)  Which is another reason why I don't really
blame the VPs that are making funding decisions; it's not clear that
the ROI of funding file system security hardening is the best way to
spend a company's dollars.  Remember, Zuckerberg has been quoted as
saying that he's laying off engineers so his company can buy more
GPUs --- funding is not infinite.  Every company is making ROI
decisions; you might not agree with the decisions, but trust me,
they're making them.

But if some company would like to invest software engineering effort
in additional features or security hardening --- they should contact
me, and I'd be happy to chat.  We have weekly ext4 video conference
calls, and I'm happy to collaborate with companies that have a
business interest in seeing some feature get pursued.  There *have*
been some that are security related --- fscrypt and fsverity were both
implemented for ext4 first, in support of Android and ChromeOS's
security use cases.  But in practice this has been the exception, and
not the rule.

> Not automounting filesystems on hotplug is a _part_ of the solution.
> It cannot be the _entire_ solution.  Users sometimes need to be able to
> interact with untrusted filesystem images with a reasonable speed.

Running fsck on a file system *before* automounting it (along the
lines of the sketch above) would be a pretty decent start towards a
solution.  Is it perfect?  No.  But it would provide a huge amount of
protection.

Note that this won't help if you have malicious hardware that
*pretends* to be a USB storage device, but which doesn't behave like
an honest storage device --- for example, returning one set of data
for a particular sector at time T, and different data at time T+X,
with no intervening writes.  There is no real defense against this
attack, since there is no way that you can authenticate the external
storage device; you could have a registry of USB vendor and model IDs,
but a device can always lie about its ID numbers.

If you are worried about this kind of attack, the only thing you can
do is to prevent external USB devices from being attached.  This *is*
something that you can do with ChromeOS and Android enterprise
security policies, and I've talked to a bank's senior IT leader who
chose to put epoxy into their desktops' USB ports, to mitigate against
a whole *class* of USB security attacks.

Like everything else, security, usability, performance, and cost are
all engineering tradeoffs.  What works for one use case and threat
model won't be optimal for another, just as fscrypt works well for
Android and ChromeOS but doesn't necessarily work well for other use
cases (where I might recommend dm-crypt instead).

Cheers,

					- Ted



