[PATCH v17] exec: Fix dead-lock in de_thread with ptrace_attach

Oleg Nesterov oleg at redhat.com
Sun Nov 23 18:32:21 UTC 2025


Hi Bernd,

sorry for the delay, I am on PTO and didn't read emails this week...

On 11/17, Bernd Edlinger wrote:
>
> On 11/17/25 16:01, Oleg Nesterov wrote:
> > On 11/17, Bernd Edlinger wrote:
> >>
> >> On 11/11/25 10:21, Christian Brauner wrote:
> >>> On Wed, Nov 05, 2025 at 03:32:10PM +0100, Oleg Nesterov wrote:
> >>
> >>>> But this is minor. Why do we need "bool unsafe_execve_in_progress" ?
> >>>> If this patch is correct, de_thread() can drop/reacquire cred_guard_mutex
> >>>> unconditionally.
> >>>>
> >>
> >> I would not like to drop the mutex when not absolutely necessary, for performance reasons.
> >
> > OK, I won't insist... But I don't really understand how this can help to
> > improve the performance. If nothing else, this adds another for_other_threads()
> > loop.
> >
>
> If no dead-lock is possible it is better to complete the de_thread without
> releasing the mutex.  For the debugger it is also the better experience:
> no matter when the ptrace_attach happens, it will succeed rather quickly,
> either before the execve or after the execve.

I still disagree, I still don't understand the "performance reasons", but since I can't
convince you I won't really argue.
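
Anyway, just to make sure we are talking about the same thing, my understanding
of the pattern is roughly this (a sketch pieced together from the quoted
fragments, not your actual code):

        /*
         * Sketch only: before de_thread() starts killing the other
         * threads it checks whether any of them is ptraced; only in
         * that case is cred_guard_mutex dropped later (see the hunk
         * quoted below).  This is the extra for_other_threads() loop
         * I meant.
         */
        for_other_threads(tsk, t) {
                if (unlikely(t->ptrace))
                        unsafe_execve_in_progress = true;
        }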

> >>>>> +	if (unlikely(unsafe_execve_in_progress)) {
> >>>>> +		spin_unlock_irq(lock);
> >>>>> +		sig->exec_bprm = bprm;
> >>>>> +		mutex_unlock(&sig->cred_guard_mutex);
> >>>>> +		spin_lock_irq(lock);
> >>>>
> >>>> I don't think spin_unlock_irq() + spin_lock_irq() makes any sense...
> >>>>
> >>
> >> Since the spin lock was acquired while holding the mutex, both should be
> >> unlocked in reverse sequence and the spin lock re-acquired after releasing
> >> the mutex.
> >
> > Why?
> >
>
> It is generally safer when each thread acquires its locks in order and
> releases them in reverse order.
> Consider this:
> Thread A:
> holds spin_lock_irq(siglock);
> does mutex_unlock(cred_guard_mutex); with irq disabled.
> A task switch happens to Thread B, which has irq enabled
> and is waiting for cred_guard_mutex.
> Thread B:
> does mutex_lock(cred_guard_mutex);
> but is interrupted at this point and the interrupt handler I executes.
> Now interrupt handler I wants to take siglock and is blocked,
> because the system has one single CPU core.

I don't follow. Do you mean PREEMPT_RT ?

If yes. In this case spin_lock_irq() is rt_spin_lock() which doesn't disable irqs,
it does rt_lock_lock() (takes rt_mutex) + migrate_disable().

I do think that spin/mutex/whatever_unlock() is always safe. In any order, and
regardless of RT.
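
To say it with code, what I mean is just this (a sketch of the ordering
question only, reusing lock/sig/bprm from the quoted hunk, not the actual
patch):

        /*
         * Variant in the patch: drop siglock, set exec_bprm, drop the
         * mutex, then retake siglock.
         */
        spin_unlock_irq(lock);
        sig->exec_bprm = bprm;
        mutex_unlock(&sig->cred_guard_mutex);
        spin_lock_irq(lock);

        /*
         * What I think is equivalent: release cred_guard_mutex while
         * still holding siglock.  mutex_unlock() never sleeps, it only
         * wakes up waiters, so no unlock/unlock ordering can introduce
         * a deadlock, with or without PREEMPT_RT.
         */
        sig->exec_bprm = bprm;
        mutex_unlock(&sig->cred_guard_mutex);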

> > And just in case... Lets look at this code
> >
> > 	+                               rcu_assign_pointer(task->real_cred, bprm->cred);
> > 	+                               task->mm = bprm->mm;
> > 	+                               retval = __ptrace_may_access(task, PTRACE_MODE_ATTACH_REALCREDS);
> > 	+                               rcu_assign_pointer(task->real_cred, old_cred);
> > 	+                               task->mm = old_mm;
> >
> > again.
> >
> > This is mostly theoretical, but what if begin_new_exec() fails after de_thread()
> > and before exec_mmap() and/or commit_creds(bprm->cred) ? In this case the execing
> > thread will report SIGSEGV to debugger which can (say) read old_mm.
> >
> > No?
> >
>
> Yes, and that is the reason why the debugger has to prove possession of the
> access rights to the process before the execve which are necessary in case
> execve fails (yes, the debugger may inspect the result).  The debugger's
> access rights must also be sufficient to ptrace the process after execve
> succeeds; moreover, the debugged process shall stop right at the first
> instruction where the new process starts.

Not sure I understand... OK, I see that you sent V18, and in this version ptrace_attach()
calls __ptrace_may_access() twice, so IIUC ptrace_attach() can only succeed if the debugger
has rights to trace the execing thread both before and after exec...
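
IIUC the V18 logic is roughly this (just a sketch of my reading, reusing the
names from the hunk quoted above, not the actual code):

        /* In ptrace_attach(), with cred_guard_mutex held: */
        struct linux_binprm *bprm = task->signal->exec_bprm;

        if (bprm) {
                /* the debugger must be allowed to trace the old image... */
                retval = __ptrace_may_access(task, PTRACE_MODE_ATTACH_REALCREDS);
                if (!retval) {
                        /*
                         * ...and the new one: temporarily swap in the
                         * post-exec creds/mm and check again.
                         */
                        rcu_assign_pointer(task->real_cred, bprm->cred);
                        task->mm = bprm->mm;
                        retval = __ptrace_may_access(task, PTRACE_MODE_ATTACH_REALCREDS);
                        rcu_assign_pointer(task->real_cred, old_cred);
                        task->mm = old_mm;
                }
        }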

Oleg.