
fix(opencode): preserve reasoning providerMetadata across model switches#23104

Draft
bainos wants to merge 5 commits intoanomalyco:devfrom
bainos:fix/thinking-block-signature-lost

Conversation

Contributor

@bainos bainos commented Apr 17, 2026

Issue for this PR

Closes #22813

Type of change

  • Bug fix

What does this PR do?

message-v2.ts strips providerMetadata from reasoning parts when the message was produced by a different model. This is correct for text/tool parts but wrong for Anthropic thinking blocks, which carry a cryptographic signature that Anthropic verifies on every subsequent turn. Stripping it causes `thinking blocks cannot be modified` errors mid-conversation.

Fix: always preserve providerMetadata on reasoning parts regardless of the differentModel flag. One-line change.
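A minimal sketch of the intended behavior (the interface and function name here are hypothetical simplifications; the real logic lives in message-v2.ts):

```typescript
// Hypothetical shape of a stored reasoning part (simplified).
interface ReasoningPart {
  type: "reasoning";
  text: string;
  metadata?: Record<string, unknown>;
}

// Sketch: convert a stored part into a model-message part. providerMetadata
// is kept even when the message came from a different model, because the
// Anthropic thinking-block signature lives there and cannot be rebuilt.
function toModelReasoningPart(part: ReasoningPart, differentModel: boolean) {
  return {
    type: "reasoning" as const,
    text: part.text,
    // Previously: ...(differentModel ? {} : { providerMetadata: part.metadata })
    providerMetadata: part.metadata,
  };
}

const part: ReasoningPart = {
  type: "reasoning",
  text: "…",
  metadata: { anthropic: { signature: "sig-abc" } },
};
```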

How did you verify your code works?

Added a regression test, `preserves reasoning providerMetadata even when assistant model differs`, in test/session/message-v2.test.ts. The full test suite passes (pre-existing failures are unrelated).

Checklist

  • I have tested my changes locally (in progress)
  • I have not included unrelated changes in this PR

@github-actions
Contributor

The following comment was made by an LLM, it may be inaccurate:

Based on my search, I found several related PRs that address similar issues with thinking blocks and message preservation:

Related PRs (not exact duplicates):

  1. PR fix: replace empty text in reasoning messages to preserve thinking block positions #21860 - fix: replace empty text in reasoning messages to preserve thinking block positions

    • Also deals with preserving thinking block integrity during message processing
  2. PR fix: preserve thinking block signatures and fix compaction headroom asymmetry #14393 - fix: preserve thinking block signatures and fix compaction headroom asymmetry

    • Related to preserving thinking block signatures (similar concern to providerMetadata preservation)
  3. PR fix(provider): preserve redacted_thinking blocks and fix signature validation #12131 - fix(provider): preserve redacted_thinking blocks and fix signature validation

    • Addresses signature validation for thinking blocks, which is conceptually similar to your providerMetadata preservation fix

These PRs are related in that they all involve preserving the integrity of thinking/reasoning blocks through message transformations, but none appear to be exact duplicates of PR #23104. Your PR specifically targets the differentModel flag stripping behavior in message-v2.ts, which appears to be a distinct issue from these earlier fixes.

No duplicate PRs found

@bainos bainos force-pushed the fix/thinking-block-signature-lost branch from 36c6c03 to f2d8f29 Compare April 17, 2026 20:55
@bainos
Contributor Author

bainos commented Apr 17, 2026

The comment below summarizes a longer follow-up investigation.


Further investigation notes

The fix is correct beyond the stated model-switch scenario.
differentModel can be spuriously true without switching models.

prompt.ts:1415 stores messages with `model.id` (e.g. `global.anthropic.claude-sonnet-4-6`).
provider.ts:1333 sets `model.api.id = model.api.id ?? model.id ?? modelID` — these can diverge.
message-v2.ts:692 compares `model.providerID`/`model.id` vs `msg.info.providerID`/`msg.info.modelID`.

If anything along that chain resolves differently between turns, differentModel flips true on the same model.

compaction.ts:218 is the likely trigger at large context.

When context approaches the model limit, auto-compaction calls MessageV2.toModelMessagesEffect to reconstruct the full history. If differentModel is true at that point, every accumulated thinking block across all turns loses its signature simultaneously — which is why the error surfaces near the context limit rather than on the turn that first stripped a signature.

Regardless of the trigger, the fix is the right one.

For text/tool parts, stripping provider metadata on model change is correct — prevents e.g. OpenAI itemId leaking to Anthropic.
For reasoning parts it is never correct — Anthropic embeds a cryptographic signature in providerMetadata that it verifies on every subsequent turn. Once stripped it cannot be reconstructed.

The affected line: message-v2.ts:794

```ts
// before
...(differentModel ? {} : { providerMetadata: part.metadata }),
// after
providerMetadata: part.metadata,
```

Refers to #1439

@bainos
Contributor Author

bainos commented Apr 21, 2026

This does not actually work yet; it is a "seed" I'm working from.
I will update this PR as soon as I've tested the new patch I'm working on.

@bainos
Contributor Author

bainos commented Apr 23, 2026

Patch revised — fixed a second issue: when part.metadata is undefined, the previous version set providerMetadata: undefined explicitly. The AI SDK treats a present-but-undefined key differently from an absent key, causing Anthropic to still reject the block. Updated to a conditional spread so the key is only included when the value exists.
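The distinction the revised patch relies on can be seen directly (a standalone sketch, not the actual SDK code; the helper names are hypothetical):

```typescript
// Build a part with an explicit (possibly undefined) key...
function withExplicitKey(metadata?: Record<string, unknown>) {
  return { type: "reasoning", providerMetadata: metadata };
}

// ...versus a conditional spread that omits the key when the value is missing.
function withConditionalSpread(metadata?: Record<string, unknown>) {
  return { type: "reasoning", ...(metadata ? { providerMetadata: metadata } : {}) };
}

// An explicit undefined still makes the key present and enumerable, which
// downstream consumers can treat differently from an absent key.
const explicitHasKey = "providerMetadata" in withExplicitKey(undefined);
const spreadHasKey = "providerMetadata" in withConditionalSpread(undefined);
```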

Note: this fix operates at the session reconstruction layer. There are complementary bugs at the provider transform layer — #23755 addresses three cases where thinking blocks get dropped or modified before even reaching this layer. Without those fixes, this one alone may not be sufficient in all scenarios. Related prior attempts: #16750, #21370.

Testing in progress

@bainos bainos marked this pull request as draft April 23, 2026 22:12
@bainos bainos force-pushed the fix/thinking-block-signature-lost branch from 545f6b9 to bae7c48 Compare April 29, 2026 12:25
rekram1-node and others added 5 commits April 30, 2026 16:54
Bedrock rejects compaction requests when thinking/redacted_thinking blocks
are present with missing or corrupted signatures. The compaction model only
needs conversation content to produce a summary — reasoning blocks are not
required and cause API errors at scale. Add stripReasoning option to
toModelMessagesEffect and enable it in the compaction path.
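A sketch of the stripReasoning behavior described in the commit message (the option name follows the commit; the function and types are simplified stand-ins for toModelMessagesEffect):

```typescript
interface MessagePart {
  type: "text" | "reasoning" | "tool";
  text?: string;
}

interface ModelMessage {
  role: "user" | "assistant";
  parts: MessagePart[];
}

// Sketch: when stripReasoning is set (as in the compaction path), drop
// reasoning parts entirely so Bedrock never sees thinking blocks with
// missing or corrupted signatures. The summary only needs the content.
function toModelMessages(
  messages: ModelMessage[],
  opts: { stripReasoning?: boolean } = {},
): ModelMessage[] {
  if (!opts.stripReasoning) return messages;
  return messages.map((m) => ({
    ...m,
    parts: m.parts.filter((p) => p.type !== "reasoning"),
  }));
}
```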


Development

Successfully merging this pull request may close these issues.

fix: thinking block signature lost when model differs, breaking multi-turn with extended thinking
