Commit 3bb83c9

fthain authored and akpm00 committed
bpf: explicitly align bpf_res_spin_lock
Patch series "Align atomic storage", v7.

This series adds the __aligned attribute to the atomic_t and atomic64_t definitions in include/linux and include/asm-generic (respectively) to get natural alignment of both types on csky, m68k, microblaze, nios2, openrisc and sh. It also adds Kconfig options to enable a new run-time warning that helps reveal misaligned atomic accesses on platforms which don't trap them. The performance impact is expected to vary across platforms and workloads; measurements made on m68k show that some workloads run faster and others slower.

This patch (of 4): Align bpf_res_spin_lock to avoid a BUILD_BUG_ON() when the alignment changes, as it will on m68k when, in a subsequent patch, the minimum alignment of the atomic_t member of struct rqspinlock gets increased from 2 to 4. Drop the BUILD_BUG_ON() as it becomes redundant.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/8a83876b07d1feacc024521e44059ae89abbb1ea.1768281748.git.fthain@linux-m68k.org
Signed-off-by: Finn Thain <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Reviewed-by: Arnd Bergmann <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Andrii Nakryiko <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: "Borislav Petkov (AMD)" <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Cc: Eduard Zingerman <[email protected]>
Cc: Gary Guo <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Hao Luo <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: John Fastabend <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: KP Singh <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Martin KaFai Lau <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Sasha Levin (Microsoft) <[email protected]>
Cc: Song Liu <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: Stanislav Fomichev <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yonghong Song <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Dave Hansen <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent 499f86d commit 3bb83c9

2 files changed: 1 addition & 2 deletions

include/asm-generic/rqspinlock.h (1 addition & 1 deletion)

@@ -28,7 +28,7 @@ struct rqspinlock {
  */
 struct bpf_res_spin_lock {
 	u32 val;
-};
+} __aligned(__alignof__(struct rqspinlock));
 
 struct qspinlock;
 #ifdef CONFIG_QUEUED_SPINLOCKS

kernel/bpf/rqspinlock.c (0 additions & 1 deletion)

@@ -694,7 +694,6 @@ __bpf_kfunc int bpf_res_spin_lock(struct bpf_res_spin_lock *lock)
 	int ret;
 
 	BUILD_BUG_ON(sizeof(rqspinlock_t) != sizeof(struct bpf_res_spin_lock));
-	BUILD_BUG_ON(__alignof__(rqspinlock_t) != __alignof__(struct bpf_res_spin_lock));
 
 	preempt_disable();
 	ret = res_spin_lock((rqspinlock_t *)lock);
