⚡ Bolt: Unroll literal writing loop for performance #238
Conversation
Unrolled the literal-writing loops in `compress_greedy_block` and `write_dynamic_block_with_sequences` to process 4 literals per iteration instead of 2. This reduces loop overhead and improves performance for incompressible data.

- Modified `src/compress/mod.rs` to add `while lit_remain >= 4` loops.
- Uses `write_literals_2` twice within the unrolled loop.
- Relies on existing buffer space checks (which cover the worst-case expansion).

Performance:

- Throughput for "Compress Parallel Incompressible" improved by ~0.6% (234.2 MiB/s -> 235.7 MiB/s).
- Verified correctness with `cargo test`.

Co-authored-by: 404Setup <[email protected]>
This PR optimizes the literal writing path in the Deflate compressor.
💡 What
Unrolled the loop responsible for writing literal bytes to the bitstream in `compress_greedy_block` and `write_dynamic_block_with_sequences`. Instead of processing literals in chunks of 2, it now processes them in chunks of 4 (where possible).

🎯 Why
Writing literals is a hot path for incompressible data or data with few matches. The previous implementation handled 2 literals per iteration. Increasing this to 4 reduces loop overhead (branching, index updates) and allows better instruction pipelining.
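In outline, the change has the following shape. This is a minimal, self-contained sketch rather than the actual code from `src/compress/mod.rs`: the `BitWriter` type, the `write_literals_2` signature, and the code tables are simplified stand-ins, and only the `while lit_remain >= 4` structure with two `write_literals_2` calls per iteration comes from this PR.

```rust
/// Simplified stand-in for the compressor's bit writer; the real type is
/// assumed to accumulate bits LSB-first and flush whole bytes, as Deflate
/// requires. (Final partial-byte flush omitted for brevity.)
struct BitWriter {
    buf: Vec<u8>,
    acc: u64,
    nbits: u32,
}

impl BitWriter {
    fn new() -> Self {
        Self { buf: Vec::new(), acc: 0, nbits: 0 }
    }

    /// Append the low `len` bits of `code` to the stream.
    fn write_bits(&mut self, code: u64, len: u32) {
        self.acc |= code << self.nbits;
        self.nbits += len;
        while self.nbits >= 8 {
            self.buf.push(self.acc as u8);
            self.acc >>= 8;
            self.nbits -= 8;
        }
    }
}

/// Hypothetical stand-in for the `write_literals_2` helper the PR reuses:
/// emits the codes for two literal bytes back to back.
fn write_literals_2(w: &mut BitWriter, a: u8, b: u8, codes: &[u16; 256], lens: &[u8; 256]) {
    w.write_bits(codes[a as usize] as u64, lens[a as usize] as u32);
    w.write_bits(codes[b as usize] as u64, lens[b as usize] as u32);
}

/// Shape of the unrolled literal loop: 4 literals per iteration via two
/// `write_literals_2` calls, then a scalar tail for the remaining 0..=3.
/// Per the PR, output-buffer space is checked up front for the worst-case
/// expansion, so no per-iteration capacity check is needed here.
fn write_literals(w: &mut BitWriter, lits: &[u8], codes: &[u16; 256], lens: &[u8; 256]) {
    let mut i = 0;
    let mut lit_remain = lits.len();
    while lit_remain >= 4 {
        write_literals_2(w, lits[i], lits[i + 1], codes, lens);
        write_literals_2(w, lits[i + 2], lits[i + 3], codes, lens);
        i += 4;
        lit_remain -= 4;
    }
    while lit_remain > 0 {
        w.write_bits(codes[lits[i] as usize] as u64, lens[lits[i] as usize] as u32);
        i += 1;
        lit_remain -= 1;
    }
}

fn main() {
    // Toy code table: each byte maps to itself as a raw 8-bit code (not a
    // real Huffman table, just enough to exercise the loop).
    let mut codes = [0u16; 256];
    let lens = [8u8; 256];
    for b in 0..256usize {
        codes[b] = b as u16;
    }
    let mut w = BitWriter::new();
    write_literals(&mut w, b"incompressible-ish input", &codes, &lens);
    println!("wrote {} bytes", w.buf.len());
}
```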
📊 Impact
The `Compress Parallel Incompressible` benchmark (16 MB random data) showed a throughput improvement of approximately 0.6%. While small, this optimization applies to the core compression loop and is purely beneficial (no downsides observed).
🔬 Measurement
Run `cargo bench --bench bench_main "Compress Parallel Incompressible"`. Note: you may need to ensure `bench_data/data_random_L.bin` exists (generated by `scripts/gen_bench_files.py`).

I also explored optimizing the `adler32` and `crc32` tail handling, but found that simple scalar loops or the existing implementations were already optimal for the target microarchitectures (AVX2/AVX512), or that regressions were observed with more complex unrolling. I reverted those changes to focus on this positive result.

PR created automatically by Jules for task 4217107458636207726 started by @404Setup