[PATCH v1 1/2] landlock: Fully release unused TSYNC work entries
Günther Noack
gnoack at google.com
Mon Feb 16 15:25:53 UTC 2026
Hello!
On Mon, Feb 16, 2026 at 03:26:38PM +0100, Mickaël Salaün wrote:
> If task_work_add() failed, ctx->task is put but the tsync_works struct
> is not reset to its previous state. The first consequence is that the
> kernel allocates memory for dying threads, which could lead to
> user-accounted memory exhaustion (not very useful nor specific to this
> case). The second consequence is that task_work_cancel(), called by
> cancel_tsync_works(), can dereference a NULL task pointer.
I think it is very difficult to get into this situation, but this is
obviously not an excuse - if we already do the error handling, we
should do it right. 👍
>
> Fix these issues by keeping a consistent works->size wrt the added task
> work. For completeness, clean up ctx->shared_ctx dangling pointer as
> well.
>
> As a safeguard, add a pointer check to cancel_tsync_works() and update
> tsync_works_release() accordingly.
>
> Cc: Günther Noack <gnoack at google.com>
> Cc: Jann Horn <jannh at google.com>
> Signed-off-by: Mickaël Salaün <mic at digikod.net>
> ---
> security/landlock/tsync.c | 14 +++++++++++++-
> 1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/security/landlock/tsync.c b/security/landlock/tsync.c
> index 0d2b9c646030..8e9b8ed7d53c 100644
> --- a/security/landlock/tsync.c
> +++ b/security/landlock/tsync.c
> @@ -276,7 +276,7 @@ static void tsync_works_release(struct tsync_works *s)
> size_t i;
>
> for (i = 0; i < s->size; i++) {
> - if (!s->works[i]->task)
> + if (WARN_ON_ONCE(!s->works[i]->task))
Is this a condition we should warn on? It is very unlikely, but it
can technically happen that a thread exits at the same time as TSYNC
and hits that narrow race window. As long as that happens only
sporadically, I don't think there is anything wrong (and in
particular, it is not actionable for the user - I don't see a way to
fix it if the warning appears).
> continue;
>
> put_task_struct(s->works[i]->task);
> @@ -389,6 +389,15 @@ static bool schedule_task_work(struct tsync_works *works,
> */
> put_task_struct(ctx->task);
> ctx->task = NULL;
> + ctx->shared_ctx = NULL;
> +
> + /*
> + * Cancel the tsync_works_provide() change to recycle the reserved
> + * memory for the next thread, if any. This also ensures that
> + * cancel_tsync_works() and tsync_works_release() do not see any
> + * NULL task pointers.
> + */
> + works->size--;
Looks good.
[Optional code arrangement remarks:
I would recommend putting that logic in a helper function
"tsync_works_return(struct tsync_works *s, struct tsync_work *)", to
be in line with the existing implementation, where the manipulation of
struct tsync_works is encapsulated in the "tsync_*" helper functions.
The scope of that function would be to do the inverse of
"tsync_works_provide()": putting the task_struct, decreasing
works->size, and then, to be safe, also clearing the contents of the
tsync_work struct (although that last step is strictly speaking not
required if we decrease the size, I think).
The only unusual thing about tsync_works_return() would be that it is
only valid to return the very last tsync_work struct that was handed
out by tsync_works_provide().
]
It's an improvement either way, though: if you want to prioritize
fixing this and don't want to extract the extra function now, we can
also look into it in a follow-up. From a functional standpoint, I
think your code works as well.
>
> atomic_dec(&shared_ctx->num_preparing);
> atomic_dec(&shared_ctx->num_unfinished);
> @@ -412,6 +421,9 @@ static void cancel_tsync_works(struct tsync_works *works,
> int i;
>
> for (i = 0; i < works->size; i++) {
> + if (WARN_ON_ONCE(!works->works[i]->task))
> + continue;
> +
Well spotted!
> if (!task_work_cancel(works->works[i]->task,
> &works->works[i]->work))
> continue;
> --
> 2.53.0
>
Reviewed-by: Günther Noack <gnoack at google.com>
Thanks for having another closer look at this!
—Günther