Perf: cache newline offsets in Source for O(log n) loc conversion
hbsPosFor() and charPosFor() were doing a fresh indexOf('\n') scan of
the source on every call, making each lookup O(lines_until_offset).
These are called once per AST node loc by the ASTv2 normalize pass, so
total cost was effectively O(n²) in template size.
CPU profile of a full precompile() showed hbsPosFor dominating at ~28%
of self-time, scattered across many call sites.
Fix: precompute an array of newline offsets on first use, binary-search
it for conversions. O(log n) per call, O(n) to build once per source.
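The approach can be sketched as follows. This is a minimal illustration, not the actual `Source` implementation; the class shape, the `positionFor` name, and the lazy `offsets()` helper are assumptions standing in for the real `hbsPosFor()`/`charPosFor()` code paths:

```typescript
// Sketch: cache line-start offsets once, then binary-search per lookup.
class Source {
  private lineOffsets: number[] | null = null;

  constructor(readonly source: string) {}

  // Lazily build the array of offsets at which each line starts: O(n), once.
  private offsets(): number[] {
    if (this.lineOffsets === null) {
      const offsets = [0];
      for (let i = 0; i < this.source.length; i++) {
        if (this.source.charCodeAt(i) === 10 /* '\n' */) offsets.push(i + 1);
      }
      this.lineOffsets = offsets;
    }
    return this.lineOffsets;
  }

  // Convert an absolute char offset to a 1-based line / 0-based column
  // by binary-searching for the last line start <= offset: O(log n).
  positionFor(offset: number): { line: number; column: number } {
    const offsets = this.offsets();
    let lo = 0;
    let hi = offsets.length - 1;
    while (lo < hi) {
      const mid = (lo + hi + 1) >> 1;
      if (offsets[mid] <= offset) {
        lo = mid;
      } else {
        hi = mid - 1;
      }
    }
    return { line: lo + 1, column: offset - offsets[lo] };
  }
}

// e.g. for the source "a\nbc\ndef", offset 3 is the 'c' on line 2, column 1
const pos = new Source('a\nbc\ndef').positionFor(3);
```

Building the array lazily keeps the one-time O(n) scan off the hot path until the first location lookup is actually requested.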
Impact (Node 24, warmed JIT, full precompile()):
real-world template (1494 chars): 1.34ms -> 1.24ms
large template (3520 chars): 4.53ms -> 3.06ms (32% faster)
The normalize phase specifically (ASTv1 -> ASTv2) drops from ~1.72ms
to ~0.43ms on the large template — a 4x speedup in that phase.
All tests pass.