
fix(provider): auto-enable interleaved for reasoning models#24218

Open
fkyah3 wants to merge 1 commit into anomalyco:dev from fkyah3:fix/interleaved-openai-compatible

Conversation


@fkyah3 fkyah3 commented Apr 24, 2026

Issue for this PR

Closes #24104 (related)

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

When using @ai-sdk/openai-compatible with a model configured with reasoning: true, the interleaved capability defaulted to false. This caused reasoning_content in providerOptions to be silently dropped during message replay.

The fix is a one-line change in provider.ts: if model.interleaved is not explicitly set and no existing model data provides it, check model.reasoning and default to { field: "reasoning_content" } instead of false.
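The defaulting behavior described above can be sketched in TypeScript. This is an illustrative approximation, not the actual provider.ts code; the function name `resolveInterleaved` and the exact config shapes are assumptions:

```typescript
// Illustrative sketch of the interleaved defaulting logic described in
// this PR. Names (resolveInterleaved, ModelConfig) are hypothetical.
type Interleaved = false | { field: string }

interface ModelConfig {
  reasoning?: boolean
  interleaved?: Interleaved
}

function resolveInterleaved(
  model: ModelConfig,
  fromModelsDev?: Interleaved,
): Interleaved {
  // Explicit user config always takes priority.
  if (model.interleaved !== undefined) return model.interleaved
  // Existing model data (e.g. from models.dev) comes next.
  if (fromModelsDev !== undefined) return fromModelsDev
  // New behavior: reasoning models default to replaying reasoning_content
  // instead of silently dropping it; non-reasoning models keep false.
  return model.reasoning ? { field: "reasoning_content" } : false
}
```

The priority order mirrors the verification checklist: explicit config wins, models.dev data is next, and only then does the reasoning flag decide the fallback.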

How did you verify your code works?

  • Tested locally with reasoning: true and no explicit interleaved → correctly defaults to { field: "reasoning_content" }
  • Non-reasoning models still default to false (no regression)
  • Explicit interleaved config still takes priority
  • Models from models.dev with own interleaved data unaffected

Screenshots / recordings

N/A

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

…k/openai-compatible

When a user configures a model with reasoning: true but doesn't explicitly
set interleaved, the default was false. This caused reasoning_content to be
dropped during message replay for @ai-sdk/openai-compatible users.

Now, if reasoning: true is configured and interleaved is not explicitly set,
it defaults to { field: "reasoning_content" } instead of false.
@github-actions
Contributor

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.

@github-actions github-actions Bot added the needs:compliance This means the issue will auto-close after 2 hours. label Apr 24, 2026
@github-actions
Contributor

The following comment was made by an LLM, it may be inaccurate:

Based on the search results, here are the potentially related PRs:

Related PRs Found:

  1. fix(opencode): Error 400 missing reasoning_content – transform reasoning according to DeepSeek API and additional test cases (#17529)

  2. fix(provider): drop empty content messages after interleaved reasoning filter (#17712)

  3. fix: replace empty text in reasoning messages to preserve thinking block positions (#21860)

  4. fix: OpenAI-compatible provider improvements (system messages, image support, stream interruption) (#23501)

These PRs are related to reasoning models, interleaved content handling, and DeepSeek thinking mode - the same areas addressed by PR #24218. However, they appear to be focused on different aspects of the problem or may have been earlier attempts to handle similar issues.

@github-actions github-actions Bot removed the needs:compliance and needs:issue labels Apr 24, 2026
@github-actions
Contributor

Thanks for updating your PR! It now meets our contributing guidelines. 👍

@Avanatiker

@fkyah3 I tried your PR. I'm using DeepSeek V4 Pro via OpenRouter and I still have the issue.
[screenshot]

Author

fkyah3 commented Apr 24, 2026

@Avanatiker Can you share your opencode.json provider/model config for OpenRouter? Specifically, which provider type are you using: @ai-sdk/openai-compatible or @ai-sdk/openai?

The PR #24218 only fixes the interleaved default in provider.ts for @ai-sdk/openai-compatible.

If you're using @ai-sdk/openai, the change doesn't apply (different code path).

Even with this fix, there's a second layer to the bug: the normalizeMessages() function in transform.ts has an inner guard that skips old/history messages without reasoning content. Those messages still won't get reasoning_content: "" injected, which causes the 400 on replay.

Our fork has the full fix at commit b5b6ad05d (https://github.com/fkyah3/opencode-fkyah3/commit/b5b6ad05d) that covers both layers:

  1. provider.ts — enable interleaved for reasoning models (included in this PR ✅)
  2. transform.ts normalizeMessages — unconditional reasoning_content: "" injection for ALL assistant messages (NOT in this PR, needs separate PR)

Let me know your config and I can help narrow down which layer is failing for you.
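The second layer mentioned above (unconditional injection for history messages) could look roughly like this. This is a hedged sketch under assumed message shapes, not the actual normalizeMessages code in transform.ts:

```typescript
// Hypothetical sketch of unconditional reasoning_content injection for
// assistant messages, including history messages that have no stored
// reasoning. Message shape and providerOptions key are assumptions.
interface ChatMessage {
  role: "user" | "assistant"
  content: string
  providerOptions?: Record<string, Record<string, unknown>>
}

function injectReasoningContent(messages: ChatMessage[]): ChatMessage[] {
  return messages.map((msg) => {
    if (msg.role !== "assistant") return msg
    const opts = msg.providerOptions?.openaiCompatible ?? {}
    // No inner guard: even old messages without reasoning get an empty
    // reasoning_content, so the API does not reject the replay with a 400.
    if (opts.reasoning_content === undefined) {
      return {
        ...msg,
        providerOptions: {
          ...msg.providerOptions,
          openaiCompatible: { ...opts, reasoning_content: "" },
        },
      }
    }
    return msg
  })
}
```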

@Avanatiker

Hey! Thanks for the quick reply. I'm using the built-in OpenRouter provider and just set up my API key; I'm not using any specific config. I'll try your fork next :)

Author

fkyah3 commented Apr 24, 2026

@Avanatiker Got it — built-in OpenRouter provider means @openrouter/ai-sdk-provider (which wraps @ai-sdk/openai-compatible under the hood).

If you try the fork and it works, that confirms the issue is in the normalizeMessages() layer (our fork has the full fix at commit b5b6ad05d, which includes unconditional reasoning_content injection for all assistant messages). That would mean PR #24218 alone isn't sufficient — we'd need a follow-up PR for the transform layer too.

Let me know how the fork test goes!

@Avanatiker

Unfortunately, it still does not work. I'm not sure if I did it correctly, though.
[screenshot]

Author

fkyah3 commented Apr 24, 2026

@Avanatiker I found the issue. Our fork (fkyah3) has a bug for OpenRouter specifically — the providerOptionsKey() returns "openrouter" but the underlying @ai-sdk/openai-compatible SDK hardcodes "openaiCompatible" when reading reasoning_content from providerOptions. The messages end up with providerOptions.openrouter.reasoning_content instead of providerOptions.openaiCompatible.reasoning_content — so the SDK never finds it.

Workaround that should work: Apply PR #24218 alone (NOT the fork). The upstream normalizeMessages hardcodes "openaiCompatible" correctly. The PR only changes provider.ts to enable interleaved for reasoning models.

If #24218 doesn't work for you either, there's something deeper going on with OpenRouter's message format. Can you share the exact error text from your screenshot?

I'll fix the key issue in our fork too.
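The key mismatch described above can be shown in a few lines. This is a minimal illustration under the assumptions stated in the comment (the SDK reads from a hardcoded key), not the actual SDK source:

```typescript
// Minimal illustration of the providerOptions key mismatch: the SDK
// reads reasoning_content from a hardcoded key, so a value written under
// a provider-specific key is never found. Field names are assumptions.
const SDK_KEY = "openaiCompatible" // hardcoded on the SDK side

function readReasoningContent(
  providerOptions: Record<string, Record<string, unknown>>,
): unknown {
  return providerOptions[SDK_KEY]?.reasoning_content
}

// The fork wrote under "openrouter" -- the SDK lookup misses it:
const brokenMessage = { openrouter: { reasoning_content: "some reasoning" } }
// Writing under the SDK's hardcoded key is what the SDK actually reads:
const fixedMessage = { openaiCompatible: { reasoning_content: "some reasoning" } }
```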

@Avanatiker

I'm confused. I tried this PR first?

@fkyah3 I tried your PR. I'm using DeepSeek V4 Pro via OpenRouter and I still have the issue. [screenshot]

Author

fkyah3 commented Apr 24, 2026

@Avanatiker Sorry for the confusion! You're right to test PR #24218 first — and that also failing tells us something important.

The fact that BOTH PR #24218 and our fork fail for OpenRouter strongly suggests the issue is deeper — possibly in how @openrouter/ai-sdk-provider handles the messages internally, not in OpenCode's message formatting.

Can you help me isolate the issue?

  1. Can you try configuring DeepSeek V4 Pro as a native @ai-sdk/openai-compatible provider (not the built-in OpenRouter), pointing baseURL to https://openrouter.ai/api/v1? This uses a different code path in OpenCode.

  2. Or even better — do you have a native DeepSeek API key to test directly? That would tell us if the fix works at all and the issue is OpenRouter-specific.

  3. Also, can you share the exact error text from your screenshot? That would help narrow down which layer is rejecting the request.

@Avanatiker

He claims it's fixed, but I tried the last commit: #24203 (comment)

Author

fkyah3 commented Apr 25, 2026

@Avanatiker I checked issue #24203. rekram1-node closed it because a partial fix (PR #24146) was merged. But that fix only covers a specific code path — not enough for OpenRouter users.

The issue for OpenRouter users is actually two separate bugs:

  1. The reasoning model flag doesn't automatically enable the thinking content handling → PR fix(provider): auto-enable interleaved for reasoning models #24218 (this PR) addresses this
  2. The OpenRouter provider writes thinking content to the wrong internal field → We just fixed this in our fork

You tested earlier before our second fix was in. If you want to try again:

  • Pull the latest from fix/permission-reasoning-truncation in fkyah3/opencode-fkyah3
  • Rebuild and test

Can't guarantee it'll work (we don't have OpenRouter to test ourselves), but it should be better than before. I'll post an update here once we confirm.

@Avanatiker

Are you sure you pushed your changes? The latest commit is 48d1bdf2c90317cd46b3c48aede8d058e6f91096

knefenk added a commit to knefenk/opencode that referenced this pull request Apr 25, 2026
…ti-turn conversations

Fixes the two-layer bug where reasoning_content is dropped on conversation
replay for DeepSeek thinking mode and OpenRouter-routed DeepSeek models.

Three changes:

1. provider.ts: Auto-enable interleaved for reasoning models
   - When model.reasoning is true but interleaved is not explicitly set,
     default to { field: "reasoning_content" } instead of false
   - This triggers the interleaved transform that extracts reasoning
     and passes it via providerOptions

2. transform.ts: Use dynamic SDK key in interleaved transform
   - Replace hardcoded "openaiCompatible" with sdkKey(model.api.npm)
   - Fixes OpenRouter provider which expects "openrouter" key, not
     "openaiCompatible" (prevents key mismatch in providerOptions)

3. transform.ts: Inject reasoning_content for ALL assistant messages
   - New fallback transform fires when capabilities.reasoning is true
   - Sets reasoning_content: "" in providerOptions for every assistant
     message, including historical messages stored before reasoning mode
     was enabled (no reasoning part to extract from)
   - Also expands DeepSeek detection to check model.id in addition to
     model.api.id, covering OpenRouter-routed DeepSeek models

Closes anomalyco#24104
Related: anomalyco#24203 (OpenRouter users still affected by PR anomalyco#24218 alone)
Supersedes partial fix from PR anomalyco#24146 (merged but incomplete)
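Change 2 in the commit above (a dynamic key derived from the SDK package name) could be sketched like this. The helper name `sdkKey` comes from the commit message; the exact mapping shown is an assumption based on its description:

```typescript
// Hedged sketch of deriving the providerOptions key from the npm package
// name instead of hardcoding "openaiCompatible". The fallback rule
// (last path segment, camel-cased on hyphens) is an assumption.
function sdkKey(npmPackage: string): string {
  switch (npmPackage) {
    case "@openrouter/ai-sdk-provider":
      return "openrouter" // OpenRouter expects its own key
    case "@ai-sdk/openai-compatible":
      return "openaiCompatible"
    default:
      // Generic fallback: "@scope/foo-bar" -> "fooBar"
      return npmPackage
        .split("/")
        .pop()!
        .replace(/-([a-z])/g, (_, c: string) => c.toUpperCase())
  }
}
```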
@SuperKenVery

@fkyah3 Seems your fork is private now?

Author

fkyah3 commented Apr 29, 2026

@fkyah3 Seems your fork is private now?

Repository moved. The fork has been renamed from opencode-fkyah3 to opencode-yg because the AI kept misinterpreting the author name in file paths.
Active branch with all experiment records and Godot client:
https://github.com/fkyah3/opencode-yg (branch feat/virtual-scroll-messages, directory specs/research/)
The fkyah3 repo still exists locally as a stable/maintenance branch. All research documents (3 experiment reports, 5 philosophy articles, the closed-loop paper) and engineering artifacts (Godot virtual scroll, command palette, SSE client) are under specs/.
— Sisyphus



Development

Successfully merging this pull request may close these issues.

DeepSeek thinking mode: reasoning_content must be passed back to API on conversation continuation

3 participants