[PATCH 1/2] apparmor: Use a memory pool instead of per-CPU caches

Tetsuo Handa penguin-kernel at I-love.SAKURA.ne.jp
Thu May 2 13:17:35 UTC 2019


On 2019/05/02 19:51, Sebastian Andrzej Siewior wrote:
>>>> * some buffer allocations had to be done with GFP_ATOMIC, making them
>>>>   more likely to fail. Since we fail closed, that means a failure would
>>>>   block access. This actually became a serious problem in a couple of
>>>>   places. Switching to per-CPU buffers and disabling preemption was
>>>>   the solution.
>>>
>>> GFP_KERNEL is allowed to use IO/swap and GFP_ATOMIC has emergency pools.
>>> The new approach won't return a NULL pointer; it will simply spin to
>>> either allocate new memory or grab a buffer which was just returned.
>>>
>>
>> yeah, I am not really a fan of a potential infinite loop trying to allocate
>> memory. It may be worth retrying once or twice, but potentially spinning
>> forever on a failed allocation really isn't acceptable.
> 
> It shouldn't spin infinitely because even if kmalloc() does not return
> any memory, one of the other CPUs should return their buffer at some
> point. However, if you don't like it, I could add two retries and return
> NULL + fix up the callers. On the other hand, if the other CPUs BUG()
> while holding the buffers then yes, we may spin.
> So limited retries it is?

There is 'The "too small to fail" memory-allocation rule'
( https://lwn.net/Articles/627419/ ), where a GFP_KERNEL allocation of
size <= 32768 bytes never fails unless the current thread has been
killed by the OOM killer. This means that the kmalloc() in

+char *aa_get_buffer(void)
+{
+	union aa_buffer *aa_buf;
+
+try_again:
+	spin_lock(&aa_buffers_lock);
+	if (!list_empty(&aa_global_buffers)) {
+		aa_buf = list_first_entry(&aa_global_buffers, union aa_buffer,
+					  list);
+		list_del(&aa_buf->list);
+		spin_unlock(&aa_buffers_lock);
+		return &aa_buf->buffer[0];
+	}
+	spin_unlock(&aa_buffers_lock);
+
+	aa_buf = kmalloc(aa_g_path_max, GFP_KERNEL);
+	if (WARN_ON_ONCE(!aa_buf))
+		goto try_again;
+	return &aa_buf->buffer[0];
+}

can't return NULL (unless the current thread was killed by the OOM
killer) as long as aa_g_path_max <= 32768. On the other hand, if
aa_g_path_max > 32768, this allocation can easily fail, and retrying
forever is very bad.

If the current thread was killed by the OOM killer, it should be able
to bail out without retrying. If the allocation can never succeed (e.g.
aa_g_path_max == 1073741824 was specified), we must bail out as well.
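
As a minimal sketch of what I mean (the fatal_signal_pending() check and
__GFP_RETRY_MAYFAIL are my suggestion, not something in the posted patch;
aa_buffers_lock / aa_global_buffers are as in the patch, and callers would
have to be fixed up to handle NULL):

char *aa_get_buffer(void)
{
	union aa_buffer *aa_buf;

	spin_lock(&aa_buffers_lock);
	if (!list_empty(&aa_global_buffers)) {
		aa_buf = list_first_entry(&aa_global_buffers, union aa_buffer,
					  list);
		list_del(&aa_buf->list);
		spin_unlock(&aa_buffers_lock);
		return &aa_buf->buffer[0];
	}
	spin_unlock(&aa_buffers_lock);

	/* Bail out if the OOM killer has already marked us for death. */
	if (fatal_signal_pending(current))
		return NULL;

	/*
	 * __GFP_RETRY_MAYFAIL retries hard for reasonable sizes but
	 * still returns NULL for hopeless ones, so an absurd
	 * aa_g_path_max cannot make us loop forever here.
	 */
	aa_buf = kmalloc(aa_g_path_max, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
	if (!aa_buf)
		return NULL;
	return &aa_buf->buffer[0];
}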



By the way, did you really test your patch?

> @@ -1399,6 +1404,7 @@ static int param_set_aauint(const char *val, const struct kernel_param *kp)
>  		return -EPERM;
>  
>  	error = param_set_uint(val, kp);
> +	aa_g_path_max = min_t(uint32_t, aa_g_path_max, sizeof(union aa_buffer));

I think that this will guarantee that aa_g_path_max <= sizeof(struct list_head)
(since sizeof(union aa_buffer) is just the size of its largest member),
which is far too small to succeed. :-( See the sketch below the quoted hunk.

>  	pr_info("AppArmor: buffer size set to %d bytes\n", aa_g_path_max);
>  
>  	return error;
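
For illustration, assuming the union in the patch is the obvious overlay
of the free-list linkage and the path buffer (my reconstruction, not
quoted from the patch):

union aa_buffer {
	struct list_head list;	/* free-list linkage while pooled */
	char buffer[1];		/* the actual path buffer */
};

/*
 * sizeof(union aa_buffer) is just sizeof(struct list_head) (16 bytes
 * on 64-bit), so the min_t() above clamps aa_g_path_max *down* to 16.
 * Presumably a lower bound was intended instead:
 */
aa_g_path_max = max_t(uint32_t, aa_g_path_max, sizeof(union aa_buffer));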


