Bug Description

When using DeepSeek V4 Pro or Flash models with thinking mode enabled (the default), multi-turn conversations fail with:

> The `reasoning_content` in the thinking mode must be passed back to the API.
This happens reliably on v1.14.31 (latest release) in conversations with multiple tool-call turns, regardless of whether the conversation is fresh or resumed from history.
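For illustration, here is a request history of the failing shape (all values are hypothetical, not taken from the actual logs): the assistant turn from the first round is replayed without its `reasoning_content`, which thinking mode rejects on the next request.

```typescript
// Hypothetical second-turn request body for an OpenAI-compatible
// /chat/completions endpoint. The assistant turn from round one is
// replayed WITHOUT reasoning_content, triggering the error above.
const failingRequest = {
  model: "deepseek-v4-pro",
  messages: [
    { role: "user", content: "What is in ./src?" },
    {
      role: "assistant",
      content: "",
      tool_calls: [
        {
          id: "call_1",
          type: "function",
          function: { name: "list_dir", arguments: '{"path":"./src"}' },
        },
      ],
      // reasoning_content: "<thinking trace>"  // missing on replay
    },
    { role: "tool", tool_call_id: "call_1", content: '["index.ts"]' },
  ],
};
```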
Why the existing fixes are insufficient
The release notes mention:
- v1.14.24: "Fixed DeepSeek assistant messages so reasoning is always included"
- v1.14.29: "DeepSeek OpenAI-compatible setups now keep reasoning_content interleaved by default"
These handle some code paths but not all. The fundamental issue is that `reasoning_content` must be present on every assistant message in the conversation history — historical messages replayed from the database, string-content messages, and messages processed on the second interleaved pass all need it. The current code has gaps.
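The invariant described above can be sketched as a single normalization pass over the outgoing message array. This is a minimal illustration, not the extension's actual code; `ensureReasoningContent`, the message shape, and the empty-string placeholder are all assumptions.

```typescript
// Minimal shape of an OpenAI-compatible chat message, extended with
// DeepSeek's thinking-mode field (names are illustrative).
interface ChatMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  reasoning_content?: string;
}

// Ensure EVERY assistant message carries reasoning_content before the
// request is sent, regardless of whether it came from live generation,
// DB replay, or a string-content code path. Thinking mode rejects
// histories where the field is missing on any assistant turn.
function ensureReasoningContent(messages: ChatMessage[]): ChatMessage[] {
  return messages.map((m) =>
    m.role === "assistant" && m.reasoning_content === undefined
      ? { ...m, reasoning_content: "" } // placeholder value is an assumption
      : m
  );
}
```

The point is that the pass runs unconditionally over the final array, so no upstream code path (fresh turn, history replay, second interleaved pass) can slip through without the field.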
The fix exists — three separate PRs wrote it
Three contributors have submitted PRs with the complete fix:

- `reasoning_content` injection for all assistant messages, including DB-replayed ones. Commit: `41eb35a`
- Commit: `86dd22f`
- `providerOptionsKey()` helper. Commit: `c42e105`

All three were closed without merge.
Environment

- Provider: `@ai-sdk/openai-compatible` via LLM Gateway (pinned to the DeepSeek API)
- Model: `deepseek-v4-pro` with `reasoning: true` and `interleaved: { field: "reasoning_content" }`
- Also reproduced with the direct DeepSeek API (bypassing the gateway)
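The environment above corresponds to a model configuration along these lines. Only `deepseek-v4-pro`, `reasoning: true`, and `interleaved: { field: "reasoning_content" }` are taken from this report; the provider key, `baseURL`, and surrounding structure are assumed boilerplate.

```typescript
// Illustrative model entry matching the environment above. Field names
// other than model/reasoning/interleaved are assumptions, not the
// project's actual config schema.
const deepseekModelConfig = {
  provider: "@ai-sdk/openai-compatible",
  baseURL: "https://api.deepseek.com", // direct API; the gateway URL reproduces too
  model: "deepseek-v4-pro",
  reasoning: true,
  interleaved: { field: "reasoning_content" },
};
```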
Request
Please review and merge one of the existing complete fixes (#24250, #24895) so that DeepSeek V4 with thinking mode works reliably in multi-turn conversations.