[PATCH 1/2] apparmor: Use a memory pool instead of per-CPU caches

Sebastian Andrzej Siewior bigeasy at linutronix.de
Thu May 2 10:51:58 UTC 2019


On 2019-05-01 14:29:17 [-0700], John Johansen wrote:
> On 4/30/19 7:47 AM, Sebastian Andrzej Siewior wrote:
> > On 2019-04-28 16:56:59 [-0700], John Johansen wrote:
> >> So, digging into the history of the per-cpu buffers in apparmor:
> >> we used to do buffer allocations via kmalloc(), and there were a few
> >> reasons for the switch
> >>
> >> * speed/lockless: speaks for itself, mediation is already slow enough
> > 
> > It is shared among all CPUs, but adding/returning a buffer is a
> > small/quick operation.
> > 
> I wouldn't exactly call taking a lock speedy. Getting an available buffer
> or returning it is indeed quick; the allocation fallback, not so much.

Based on my testing, the fallback only happens in the beginning. We could
also start with 2, 3, or 4 pre-allocated buffers. My testing was most
likely limited, though, and I never needed more than two.
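
To make that concrete, roughly this is the kind of pool I have in mind
(just a sketch, not the patch itself; pool_get_buffer()/pool_put_buffer(),
POOL_BUF_SIZE and the pre-fill count of two are made-up names/values):
a global free list under a spinlock, a GFP_KERNEL fallback allocation
when the list is empty, and an optional pre-fill at init:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

#define POOL_BUF_SIZE   (8 * 1024)      /* assumed buffer size */

struct pool_buffer {
        struct list_head list;
        char data[];
};

static LIST_HEAD(buf_pool);
static DEFINE_SPINLOCK(buf_pool_lock);

/* Return a buffer to the shared pool. */
static void pool_put_buffer(void *data)
{
        struct pool_buffer *buf = container_of(data, struct pool_buffer, data);

        spin_lock(&buf_pool_lock);
        list_add(&buf->list, &buf_pool);
        spin_unlock(&buf_pool_lock);
}

/* Grab a buffer from the pool, fall back to kmalloc() if it is empty. */
static void *pool_get_buffer(void)
{
        struct pool_buffer *buf = NULL;

        spin_lock(&buf_pool_lock);
        if (!list_empty(&buf_pool)) {
                buf = list_first_entry(&buf_pool, struct pool_buffer, list);
                list_del(&buf->list);
        }
        spin_unlock(&buf_pool_lock);

        if (!buf)
                buf = kmalloc(sizeof(*buf) + POOL_BUF_SIZE, GFP_KERNEL);

        return buf ? buf->data : NULL;
}

/* Optionally pre-fill the pool with a couple of buffers at boot. */
static int __init pool_init(void)
{
        int i;

        for (i = 0; i < 2; i++) {
                struct pool_buffer *buf;

                buf = kmalloc(sizeof(*buf) + POOL_BUF_SIZE, GFP_KERNEL);
                if (!buf)
                        return -ENOMEM;
                pool_put_buffer(buf->data);
        }
        return 0;
}
late_initcall(pool_init);

The lock is only held for the list add/remove, so the critical section
stays tiny; the slow path is the kmalloc() fallback, which in my testing
is rare after the first few requests.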

> >> * some buffer allocations had to be done with GFP_ATOMIC, making them
> >>   more likely to fail. Since we fail closed, that means failure would
> >>   block access. This actually became a serious problem in a couple of
> >>   places. Switching to per-cpu buffers and disabling preemption was
> >>   the solution.
> > 
> > GFP_KERNEL is allowed to use IO/swap and GFP_ATOMIC has emergency pools.
> > The new approach won't return a NULL pointer; it simply spins until it
> > either allocates new memory or gets a buffer which was just returned.
> > 
> 
> Yeah, I am not really a fan of a potential infinite loop trying to allocate
> memory. It may be worth retrying once or twice, but potentially spinning
> forever on a failed allocation really isn't acceptable.

It shouldn't spin infinitely, because even if kmalloc() does not return
any memory, one of the other CPUs should return its buffer at some
point. However, if you don't like it, I could add two retries, return
NULL, and fix up the callers. On the other hand, if the other CPUs BUG()
while holding their buffers, then yes, we may spin.
So limited retries it is?
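
Something like the below is what I mean by limited retries (again only a
sketch; POOL_GET_RETRIES and the function name are made up, and it reuses
the pool bits from the snippet above): after a couple of pool/kmalloc()
attempts we give up and return NULL, and the callers get fixed up to
handle that.

/* Reuses struct pool_buffer, buf_pool, buf_pool_lock and POOL_BUF_SIZE
 * from the earlier sketch.
 */
#define POOL_GET_RETRIES        2

static void *pool_get_buffer_bounded(void)
{
        struct pool_buffer *buf;
        int tries;

        for (tries = 0; tries <= POOL_GET_RETRIES; tries++) {
                spin_lock(&buf_pool_lock);
                if (!list_empty(&buf_pool)) {
                        buf = list_first_entry(&buf_pool, struct pool_buffer, list);
                        list_del(&buf->list);
                        spin_unlock(&buf_pool_lock);
                        return buf->data;
                }
                spin_unlock(&buf_pool_lock);

                /* Pool is empty: try a fresh allocation before retrying. */
                buf = kmalloc(sizeof(*buf) + POOL_BUF_SIZE, GFP_KERNEL);
                if (buf)
                        return buf->data;
        }

        /* Give up instead of spinning; callers now have to handle NULL. */
        return NULL;
}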

> >> * in heavy use cases we would see a lot of buffers being allocated
> >>   and freed, which resulted in locking slowdowns and also buffer
> >>   allocation failures. So having the buffers preallocated allowed us
> >>   to bound this potential problem.
> >>
> >> This was all 6 years ago. Going to a mem pool certainly could help,
> >> reduce the memory footprint, and would definitely help with
> >> preempt/real-time kernels.
> >>
> >> A big concern with this patchset is reverting back to GFP_KERNEL
> >> for everything. We definitely were getting failures due to allocations
> >> in atomic context. There have been lots of changes in the kernel over
> >> the last six years, so it is possible these cases don't exist anymore. I
> >> went through and built some kernels with this patchset and have run
> >> through some testing without tripping that problem, but I don't think
> >> it has seen enough testing yet.
> > 
> > Do you want to apply #1 now and #2 later? I audited the ATOMIC->KERNEL
> > changes manually and I didn't see any atomic context. It looked like
> > the only reason for ATOMIC was the preempt_disable() required by the
> > per-CPU buffers.
> > 
> 
> Indeed, most if not all (I'd have to dig to be sure) of the changes made in
> #2 were originally done because of the move to the per-cpu buffers and
> disabling preemption.
> 
> The problem was that the allocation of the buffer sometimes needed to be
> GFP_ATOMIC.
yup, that is what I saw, too.
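
For reference, the pattern we are talking about looks roughly like this
(simplified and with made-up names, not the actual apparmor code):
get_cpu_ptr() keeps preemption disabled for as long as the per-CPU buffer
is held, so any allocation done in that window has to be GFP_ATOMIC.

#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/string.h>

#define CPU_BUF_SIZE    (8 * 1024)      /* assumed buffer size */

struct cpu_buffers {
        char buf[CPU_BUF_SIZE];
};

static DEFINE_PER_CPU(struct cpu_buffers, mediation_buffers);

static void mediate_something(void)
{
        /* get_cpu_ptr() disables preemption until put_cpu_ptr(). */
        struct cpu_buffers *b = get_cpu_ptr(&mediation_buffers);
        void *tmp;

        /* Sleeping is not allowed here, hence GFP_ATOMIC. */
        tmp = kmalloc(128, GFP_ATOMIC);
        if (tmp) {
                memset(b->buf, 0, CPU_BUF_SIZE);        /* pretend to do mediation work */
                kfree(tmp);
        }

        put_cpu_ptr(&mediation_buffers);
}

With the shared pool, the buffer can be held across sleeping operations,
which is what lets #2 switch those call sites back to GFP_KERNEL.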

Sebastian


