
Commit 697f514

Yihan Ding authored and l0kod committed
landlock: Clean up interrupted thread logic in TSYNC
In landlock_restrict_sibling_threads(), when the calling thread is
interrupted while waiting for sibling threads to prepare, it executes a
recovery path. Previously, this path included a wait_for_completion()
call on all_prepared to prevent a use-after-free of the local
shared_ctx. However, this wait is redundant: exiting the main do-while
loop already leads to a bottom cleanup section that unconditionally
waits for all_finished. Replacing the wait with a simple break is
therefore safe, still prevents the UAF, and correctly unblocks the
remaining task_works.

Clean up the error path by breaking the loop, and update the
surrounding comments to accurately reflect the state machine.

Suggested-by: Günther Noack <[email protected]>
Signed-off-by: Yihan Ding <[email protected]>
Tested-by: Günther Noack <[email protected]>
Reviewed-by: Günther Noack <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mickaël Salaün <[email protected]>
1 parent ff88df6 commit 697f514

1 file changed

Lines changed: 13 additions & 7 deletions

File tree

security/landlock/tsync.c

@@ -575,24 +575,30 @@ int landlock_restrict_sibling_threads(const struct cred *old_cred,
 					       -ERESTARTNOINTR);
 
 			/*
-			 * Cancel task works for tasks that did not start running yet,
-			 * and decrement all_prepared and num_unfinished accordingly.
+			 * Opportunistic improvement: try to cancel task
+			 * works for tasks that did not start running
+			 * yet. We do not have a guarantee that it
+			 * cancels any of the enqueued task works
+			 * because task_work_run() might already have
+			 * dequeued them.
 			 */
 			cancel_tsync_works(&works, &shared_ctx);
 
 			/*
-			 * The remaining task works have started running, so waiting for
-			 * their completion will finish.
+			 * Break the loop with error. The cleanup code
+			 * after the loop unblocks the remaining
+			 * task_works.
 			 */
-			wait_for_completion(&shared_ctx.all_prepared);
+			break;
 		}
 	}
 	} while (found_more_threads &&
 		 !atomic_read(&shared_ctx.preparation_error));
 
 	/*
-	 * We now have all sibling threads blocking and in "prepared" state in the
-	 * task work. Ask all threads to commit.
+	 * We now have either (a) all sibling threads blocking and in "prepared"
+	 * state in the task work, or (b) the preparation error is set. Ask all
+	 * threads to commit (or abort).
 	 */
 	complete_all(&shared_ctx.ready_to_commit);
 