fix(opencode): preserve reasoning providerMetadata across model switches #23104
bainos wants to merge 5 commits into anomalyco:dev
Conversation
The following comment was made by an LLM and may be inaccurate: Based on my search, I found several related PRs that address similar issues with thinking blocks and message preservation. Related PRs (not exact duplicates): these PRs all involve preserving the integrity of thinking/reasoning blocks through message transformations, but none appear to be exact duplicates of PR #23104. Your PR specifically targets preserving reasoning providerMetadata. No duplicate PRs found.
Force-pushed 36c6c03 to f2d8f29
The comment below summarizes a longer follow-up investigation.

Further investigation notes: the fix is correct beyond the stated model-switch scenario. If anything along that resolution chain differs between turns, the stripping path is triggered even without an explicit model switch. When context approaches the model limit, auto-compaction reconstructs messages through the same path. Regardless of the trigger, the fix is the right one. For text/tool parts, stripping provider metadata on model change is correct, since it prevents e.g. OpenAI-specific metadata from being replayed to a different provider. The affected line:

```ts
// before
...(differentModel ? {} : { providerMetadata: part.metadata }),
// after
providerMetadata: part.metadata,
```

Refers to #1439
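To make the distinction concrete, here is a minimal sketch of the two cases. The part shape and `convertPart` helper are assumptions for illustration, not opencode's actual code:

```typescript
// Sketch only: SessionPart and convertPart are illustrative assumptions.
interface SessionPart {
  type: "reasoning" | "text" | "tool";
  text: string;
  metadata?: Record<string, unknown>;
}

function convertPart(part: SessionPart, differentModel: boolean) {
  if (part.type === "reasoning") {
    // Reasoning parts: always keep metadata. The Anthropic signature
    // inside it is verified on every subsequent turn.
    return { ...part, providerMetadata: part.metadata };
  }
  // Text/tool parts: dropping provider metadata on model change is
  // correct, since it is meaningless to a different provider.
  return {
    ...part,
    ...(differentModel ? {} : { providerMetadata: part.metadata }),
  };
}

const meta = { anthropic: { signature: "sig" } };
const reasoning = convertPart({ type: "reasoning", text: "t", metadata: meta }, true);
const plainText = convertPart({ type: "text", text: "t", metadata: meta }, true);
console.log("providerMetadata" in reasoning); // true
console.log("providerMetadata" in plainText); // false
```

The design point is that the `differentModel` guard stays in place for text/tool parts; only the reasoning branch bypasses it.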
This actually does not work yet, but it is a "seed" I'm working on.
Force-pushed d14ddc1 to 545f6b9
Patch revised; a second issue was fixed as well. Note: this fix operates at the session reconstruction layer. There are complementary bugs at the provider transform layer: #23755 addresses three cases where thinking blocks get dropped or modified before they even reach this layer. Without those fixes, this one alone may not be sufficient in all scenarios. Related prior attempts: #16750, #21370. Testing in progress.
Force-pushed 545f6b9 to bae7c48
Bedrock rejects compaction requests when thinking/redacted_thinking blocks are present with missing or corrupted signatures. The compaction model only needs conversation content to produce a summary; reasoning blocks are not required and cause API errors at scale. Add a `stripReasoning` option to `toModelMessagesEffect` and enable it in the compaction path.
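A minimal sketch of the `stripReasoning` behavior that commit describes, with assumed message and part shapes (the real `toModelMessagesEffect` signature and types differ):

```typescript
// Assumed shapes for illustration; not opencode's actual types.
type Part =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string; providerMetadata?: unknown };

interface Message {
  role: "user" | "assistant";
  parts: Part[];
}

// Drop reasoning parts before compaction: the summarizer only needs
// conversation content, and Bedrock rejects thinking blocks whose
// signatures are missing or corrupted.
function toModelMessages(
  messages: Message[],
  opts: { stripReasoning?: boolean } = {},
): Message[] {
  if (!opts.stripReasoning) return messages;
  return messages.map((m) => ({
    ...m,
    parts: m.parts.filter((p) => p.type !== "reasoning"),
  }));
}

const history: Message[] = [
  {
    role: "assistant",
    parts: [
      { type: "reasoning", text: "chain of thought", providerMetadata: { signature: "sig" } },
      { type: "text", text: "final answer" },
    ],
  },
];

const compacted = toModelMessages(history, { stripReasoning: true });
console.log(compacted[0].parts.map((p) => p.type)); // [ 'text' ]
```

Keeping the option off by default preserves normal-turn behavior; only the compaction call site opts in.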
Force-pushed bae7c48 to d2cf250
Issue for this PR
Closes #22813
What does this PR do?

`message-v2.ts` strips `providerMetadata` from reasoning parts when the message was produced by a different model. This is correct for text/tool parts but wrong for Anthropic thinking blocks: they carry a cryptographic signature that Anthropic verifies on every subsequent turn. Stripping it causes `thinking blocks cannot be modified` errors mid-conversation.

Fix: always preserve `providerMetadata` on reasoning parts regardless of the `differentModel` flag. One-line change.

How did you verify your code works?
Added regression test `preserves reasoning providerMetadata even when assistant model differs` in `test/session/message-v2.test.ts`. Full test suite passes (pre-existing failures unrelated).
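A sketch of the shape of that regression test, using a simplified stand-in for the fixed conversion (the real test exercises `message-v2.ts` directly; the names here are assumptions):

```typescript
// Simplified stand-in for the post-fix conversion; names are assumed.
function toReasoningModelPart(
  part: { text: string; metadata?: Record<string, unknown> },
  differentModel: boolean, // kept in the signature, no longer consulted
) {
  // Post-fix behavior: providerMetadata survives even when the
  // assistant message came from a different model.
  return { type: "reasoning" as const, text: part.text, providerMetadata: part.metadata };
}

// "preserves reasoning providerMetadata even when assistant model differs"
const signature = { anthropic: { signature: "sig-from-prior-turn" } };
const converted = toReasoningModelPart({ text: "thinking", metadata: signature }, true);
if (converted.providerMetadata !== signature) throw new Error("signature was stripped");
console.log("ok");
```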