perf(tokenizer): retain only decode lookback tokens in DecodeStream#9204

Open
tangcy98 wants to merge 1 commit into ai-dynamo:main from tangcy98:opt-decodestream-copy

Conversation

Contributor

@tangcy98 tangcy98 commented May 6, 2026

Overview:

This PR reduces DecodeStream initialization overhead for long prompts by retaining only the prompt-token lookback window needed for incremental detokenization.

Previously, DecodeStream::new copied the full prompt token list into its internal buffer, even though subsequent decoding only reads tokens starting at index prompt_len - INITIAL_INCREMENTAL_DETOKENIZATION_OFFSET. For long-ISL requests, this left the detokenizer holding an unnecessary O(ISL) copy of the prompt tokens.

This change keeps only the retained decode window, reducing the initial DecodeStream prompt-token copy from O(ISL) to a fixed-size lookback window (five tokens), while preserving the exact token slices used by DecodeStream::step.

Details:

  • Compute the retained prompt start from INITIAL_INCREMENTAL_DETOKENIZATION_OFFSET.
  • Initialize DecodeStream::all_token_ids with only prompt_token_ids[retained_start..].
  • Convert prefix_offset and read_offset to local offsets within the retained buffer.
  • Preserve existing incremental detokenization behavior:
    • prompt tokens are still used as decode context
    • returned text remains continuation-only
    • UTF-8 partial-token buffering behavior is unchanged
    • skip_special_tokens behavior is unchanged

This does not change the PreprocessedRequest.token_ids path used by the engine/router. It only avoids an extra full-prompt copy inside the detokenizer state.
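The steps in the Details list can be sketched as follows. This is a minimal illustration, not the PR's actual code: the field and constant names (all_token_ids, prefix_offset, read_offset, INITIAL_INCREMENTAL_DETOKENIZATION_OFFSET) follow the description above, and the real DecodeStream in lib/tokenizers/src/lib.rs carries additional state (tokenizer handle, UTF-8 partial-token buffering, skip_special_tokens flag) that is omitted here.

```rust
// Assumed value; the PR description implies a five-token lookback.
const INITIAL_INCREMENTAL_DETOKENIZATION_OFFSET: usize = 5;

// Simplified stand-in for the real DecodeStream state.
struct DecodeStream {
    all_token_ids: Vec<u32>,
    prefix_offset: usize,
    read_offset: usize,
}

impl DecodeStream {
    fn new(prompt_token_ids: &[u32]) -> Self {
        // Compute the retained prompt start; saturating_sub handles
        // prompts shorter than the lookback window.
        let retained_start = prompt_token_ids
            .len()
            .saturating_sub(INITIAL_INCREMENTAL_DETOKENIZATION_OFFSET);

        // Keep only the lookback window instead of the full prompt.
        let all_token_ids = prompt_token_ids[retained_start..].to_vec();
        let read_offset = all_token_ids.len();

        Self {
            all_token_ids,
            // Offsets are now local to the retained buffer.
            prefix_offset: 0,
            read_offset,
        }
    }
}

fn main() {
    let prompt: Vec<u32> = (0..100).collect();
    let s = DecodeStream::new(&prompt);
    // Only the last five prompt tokens are retained.
    assert_eq!(s.all_token_ids, vec![95, 96, 97, 98, 99]);
    assert_eq!(s.prefix_offset, 0);
    assert_eq!(s.read_offset, 5);
    println!("retained {} of {} tokens", s.all_token_ids.len(), prompt.len());
}
```

Generated tokens appended later by step() would still be pushed onto all_token_ids, so the buffer grows with the output length but no longer with the input length.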

Where should the reviewer start?

Start with:

  • lib/tokenizers/src/lib.rs

The key logic is in DecodeStream::new. The important invariant is that the slices passed to decode() in DecodeStream::step() are content-equivalent to the previous implementation, just indexed relative to the retained lookback window.
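The invariant can be stated as a small property check. This is illustrative only (not code from the PR): it compares the slice the old implementation would pass to decode(), indexed globally into the full prompt, against the slice the new implementation passes, indexed locally into the retained window, with `lookback` standing in for INITIAL_INCREMENTAL_DETOKENIZATION_OFFSET.

```rust
// Property: global-offset slicing of the full buffer and local-offset
// slicing of the retained window yield content-equivalent token slices.
fn slices_equivalent(prompt: &[u32], lookback: usize) -> bool {
    let retained_start = prompt.len().saturating_sub(lookback);

    // Old implementation: full buffer, global offsets.
    let old_prefix_offset = retained_start;
    let old_read_offset = prompt.len();
    let old_slice = &prompt[old_prefix_offset..old_read_offset];

    // New implementation: retained window, local offsets.
    let window = &prompt[retained_start..];
    let (new_prefix_offset, new_read_offset) = (0, window.len());
    let new_slice = &window[new_prefix_offset..new_read_offset];

    old_slice == new_slice
}

fn main() {
    let long_prompt: Vec<u32> = (0..1000).collect();
    assert!(slices_equivalent(&long_prompt, 5));
    // Prompts shorter than the lookback window are retained whole.
    assert!(slices_equivalent(&[1u32, 2], 5));
    println!("slices match");
}
```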

Related Issues:

  • Relates to tokenizer/detokenizer performance for long-context requests

Summary by CodeRabbit

  • Refactor
    • Updated tokenization stream processing with improved initialization patterns.

@tangcy98 tangcy98 requested a review from a team as a code owner May 6, 2026 08:05

copy-pr-bot Bot commented May 6, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@github-actions github-actions Bot added the perf label May 6, 2026
Contributor

github-actions Bot commented May 6, 2026

👋 Hi tangcy98! Thank you for contributing to ai-dynamo/dynamo.

Just a reminder: the NVIDIA Test GitHub Validation CI runs an essential subset of the testing framework to quickly catch errors. Your PR reviewers may elect to test the changes comprehensively before approving them.

🚀

@github-actions github-actions Bot added the external-contribution Pull request is from an external contributor label May 6, 2026
Contributor

coderabbitai Bot commented May 6, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: b8111f02-7039-4b2f-a63b-f98f97501806

📥 Commits

Reviewing files that changed from the base of the PR and between 87c5090 and 8ce0bad.

📒 Files selected for processing (1)
  • lib/tokenizers/src/lib.rs

Walkthrough

DecodeStream::new initialization is refactored to implement incremental detokenization windowing. The method now computes an initial offset, slices tokens to form a sliding window, and initializes internal buffers with the windowed tokens and offsets, replacing the previous approach of retaining the full input token stream.

Changes

Incremental Detokenization Windowing

  • Core Implementation (lib/tokenizers/src/lib.rs): DecodeStream::new computes an initial prefix offset from the input length, slices the tokens to a window, stores the windowed tokens in all_token_ids, and initializes prefix_offset to 0 and read_offset to the window length, instead of retaining the full input with a large initial prefix offset.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Title check ✅ Passed: the title clearly and concisely describes the main optimization, retaining only the necessary decode lookback tokens in DecodeStream to reduce initialization overhead.
  • Description check ✅ Passed: the description follows the required template with all sections completed. Overview explains the performance improvement, Details describes the specific changes, "Where should the reviewer start?" identifies key files, and Related Issues documents the motivation.
  • Linked Issues check ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check ✅ Passed: check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




Labels

external-contribution, perf, size/S
