
Commit 2bbb320

rostedt authored and gregkh committed
Revert: "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"
commit adab66b upstream.

It was believed that metag was the only architecture that required the ring
buffer to keep 8 byte words aligned on 8 byte architectures, and with its
removal, it was assumed that the ring buffer code did not need to handle this
case. It appears that sparc64 also requires this.

The following was reported on a sparc64 boot up:

 kernel: futex hash table entries: 65536 (order: 9, 4194304 bytes, linear)
 kernel: Running postponed tracer tests:
 kernel: Testing tracer function:
 kernel: Kernel unaligned access at TPC[552a20] trace_function+0x40/0x140
 kernel: Kernel unaligned access at TPC[552a24] trace_function+0x44/0x140
 kernel: Kernel unaligned access at TPC[552a20] trace_function+0x40/0x140
 kernel: Kernel unaligned access at TPC[552a24] trace_function+0x44/0x140
 kernel: Kernel unaligned access at TPC[552a20] trace_function+0x40/0x140
 kernel: PASSED

Need to put back the 64BIT aligned code for the ring buffer.

Link: https://lore.kernel.org/r/CADxRZqzXQRYgKc=y-KV=S_yHL+Y8Ay2mh5ezeZUnpRvg+syWKw@mail.gmail.com
Cc: [email protected]
Fixes: 86b3de6 ("ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS")
Reported-by: Anatoly Pugachev <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
1 parent 783c5d4 commit 2bbb320

2 files changed

Lines changed: 29 additions & 4 deletions


arch/Kconfig

Lines changed: 16 additions & 0 deletions
@@ -143,6 +143,22 @@ config UPROBES
 	  managed by the kernel and kept transparent to the probed
 	  application. )
 
+config HAVE_64BIT_ALIGNED_ACCESS
+	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
+	help
+	  Some architectures require 64 bit accesses to be 64 bit
+	  aligned, which also requires structs containing 64 bit values
+	  to be 64 bit aligned too. This includes some 32 bit
+	  architectures which can do 64 bit accesses, as well as 64 bit
+	  architectures without unaligned access.
+
+	  This symbol should be selected by an architecture if 64 bit
+	  accesses are required to be 64 bit aligned in this way even
+	  though it is not a 64 bit architecture.
+
+	  See Documentation/unaligned-memory-access.txt for more
+	  information on the topic of unaligned memory accesses.
+
 config HAVE_EFFICIENT_UNALIGNED_ACCESS
 	bool
 	help

kernel/trace/ring_buffer.c

Lines changed: 13 additions & 4 deletions
@@ -129,7 +129,16 @@ int ring_buffer_print_entry_header(struct trace_seq *s)
 #define RB_ALIGNMENT		4U
 #define RB_MAX_SMALL_DATA	(RB_ALIGNMENT * RINGBUF_TYPE_DATA_TYPE_LEN_MAX)
 #define RB_EVNT_MIN_SIZE	8U	/* two 32bit words */
-#define RB_ALIGN_DATA		__aligned(RB_ALIGNMENT)
+
+#ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
+# define RB_FORCE_8BYTE_ALIGNMENT	0
+# define RB_ARCH_ALIGNMENT		RB_ALIGNMENT
+#else
+# define RB_FORCE_8BYTE_ALIGNMENT	1
+# define RB_ARCH_ALIGNMENT		8U
+#endif
+
+#define RB_ALIGN_DATA		__aligned(RB_ARCH_ALIGNMENT)
 
 /* define RINGBUF_TYPE_DATA for 'case RINGBUF_TYPE_DATA:' */
 #define RINGBUF_TYPE_DATA	0 ... RINGBUF_TYPE_DATA_TYPE_LEN_MAX
@@ -2719,7 +2728,7 @@ rb_update_event(struct ring_buffer_per_cpu *cpu_buffer,
 
 	event->time_delta = delta;
 	length -= RB_EVNT_HDR_SIZE;
-	if (length > RB_MAX_SMALL_DATA) {
+	if (length > RB_MAX_SMALL_DATA || RB_FORCE_8BYTE_ALIGNMENT) {
 		event->type_len = 0;
 		event->array[0] = length;
 	} else
@@ -2734,11 +2743,11 @@ static unsigned rb_calculate_event_length(unsigned length)
 	if (!length)
 		length++;
 
-	if (length > RB_MAX_SMALL_DATA)
+	if (length > RB_MAX_SMALL_DATA || RB_FORCE_8BYTE_ALIGNMENT)
 		length += sizeof(event.array[0]);
 
 	length += RB_EVNT_HDR_SIZE;
-	length = ALIGN(length, RB_ALIGNMENT);
+	length = ALIGN(length, RB_ARCH_ALIGNMENT);
 
 	/*
 	 * In case the time delta is larger than the 27 bits for it
