Commit 773b24b

melver authored and willdeacon committed
arm64, compiler-context-analysis: Permit alias analysis through __READ_ONCE() with CONFIG_LTO=y
When enabling Clang's Context Analysis (aka. Thread Safety Analysis) on
kernel/futex/core.o (see Peter's changes at [1]), in arm64 LTO builds we
could see:

 | kernel/futex/core.c:982:1: warning: spinlock 'atomic ? __u.__val : q->lock_ptr' is still held at the end of function [-Wthread-safety-analysis]
 |   982 | }
 |       | ^
 | kernel/futex/core.c:976:2: note: spinlock acquired here
 |   976 |         spin_lock(lock_ptr);
 |       |         ^
 | kernel/futex/core.c:982:1: warning: expecting spinlock 'q->lock_ptr' to be held at the end of function [-Wthread-safety-analysis]
 |   982 | }
 |       | ^
 | kernel/futex/core.c:966:6: note: spinlock acquired here
 |   966 | void futex_q_lockptr_lock(struct futex_q *q)
 |       |      ^
 | 2 warnings generated.

Where we have:

	extern void futex_q_lockptr_lock(struct futex_q *q) __acquires(q->lock_ptr);
	...
	void futex_q_lockptr_lock(struct futex_q *q)
	{
		spinlock_t *lock_ptr;

		/*
		 * See futex_unqueue() why lock_ptr can change.
		 */
		guard(rcu)();
	retry:
 >>		lock_ptr = READ_ONCE(q->lock_ptr);
		spin_lock(lock_ptr);
		...
	}

At the time of the above report (prior to removal of the 'atomic' flag),
Clang Thread Safety Analysis's alias analysis resolved 'lock_ptr' to
'atomic ? __u.__val : q->lock_ptr' (now just '__u.__val'), and used this
as the identity of the context lock given it cannot "see through" the
inline assembly; however, we want 'q->lock_ptr' as the canonical context
lock.

While for code generation the compiler simplified to '__u.__val' for
pointers (8 byte case -> 'atomic' was set), TSA's analysis (a) happens
much earlier on the AST, and (b) would be the wrong deduction.

Now that we've gotten rid of the 'atomic' ternary comparison, we can
return '__u.__val' through a pointer that we initialize with '&x', but
then update via a pointer-to-pointer. When READ_ONCE()'ing a context
lock pointer, TSA's alias analysis does not invalidate the initial alias
when updated through the pointer-to-pointer, and we make it effectively
"see through" the __READ_ONCE(). Code generation is unchanged.
Link: https://lkml.kernel.org/r/[email protected] [1]
Reported-by: kernel test robot <[email protected]>
Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
Cc: Peter Zijlstra <[email protected]>
Tested-by: Boqun Feng <[email protected]>
Reviewed-by: David Laight <[email protected]>
Signed-off-by: Marco Elver <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
1 parent abf1be6 commit 773b24b

1 file changed: arch/arm64/include/asm/rwonce.h

Lines changed: 7 additions & 3 deletions
--- a/arch/arm64/include/asm/rwonce.h
+++ b/arch/arm64/include/asm/rwonce.h
@@ -42,8 +42,12 @@
  */
 #define __READ_ONCE(x)							\
 ({									\
-	typeof(&(x)) __x = &(x);					\
-	union { __rwonce_typeof_unqual(*__x) __val; char __c[1]; } __u;	\
+	auto __x = &(x);						\
+	auto __ret = (__rwonce_typeof_unqual(*__x) *)__x;		\
+	/* Hides alias reassignment from Clang's -Wthread-safety. */	\
+	auto __retp = &__ret;						\
+	union { typeof(*__ret) __val; char __c[1]; } __u;		\
+	*__retp = &__u.__val;						\
 	switch (sizeof(x)) {						\
 	case 1:								\
 		asm volatile(__LOAD_RCPC(b, %w0, %1)			\
@@ -68,7 +72,7 @@
 	default:							\
 		__u.__val = *(volatile typeof(*__x) *)__x;		\
 	}								\
-	__u.__val;							\
+	*__ret;								\
 })
 
 #endif /* !BUILD_VDSO */
