
DeepSeek V4: reasoning_content 400 error persists on v1.14.31 — complete fix exists in unmerged PRs #25311

@cameronmpalmer

Description


Bug Description

When using DeepSeek V4 Pro or Flash models with thinking mode enabled (the default), multi-turn conversations fail with:

The `reasoning_content` in the thinking mode must be passed back to the API.

This happens reliably on v1.14.31 (latest release) in conversations with multiple tool-call turns, regardless of whether the conversation is fresh or resumed from history.
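To make the failure mode concrete, here is a minimal sketch of the round-trip the error message demands. The field name `reasoning_content` comes from the error itself; the message shape and function are illustrative assumptions, not OpenCode's actual types:

```typescript
// Assumed shape of an assistant turn in the conversation history.
interface AssistantMessage {
  role: "assistant";
  content: string;
  reasoning_content?: string;
}

// Each assistant turn returned by the API carries reasoning_content; the
// next request must echo it back on that message. Dropping the field is
// what produces the 400 described above.
function echoAssistantTurn(
  response: { content: string; reasoning_content?: string },
): AssistantMessage {
  return {
    role: "assistant",
    content: response.content,
    reasoning_content: response.reasoning_content,
  };
}
```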

Why the existing fixes are insufficient

The release notes mention:

  • v1.14.24: "Fixed DeepSeek assistant messages so reasoning is always included"
  • v1.14.29: "DeepSeek OpenAI-compatible setups now keep reasoning_content interleaved by default"

These handle some code paths but not all. The fundamental issue is that reasoning_content must be present on every assistant message in the conversation history — historical messages from DB replay, string-content messages, and messages processed on the second interleaved pass all need it. The current code has gaps.
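A sketch of what closing those gaps looks like, assuming a single normalization pass over the outgoing history (names and shapes are illustrative, not OpenCode's real internals):

```typescript
// History messages may come from live streaming, DB replay, or plain
// string content; all assistant entries must end up with reasoning_content.
type HistoryMessage = {
  role: "user" | "assistant" | "tool";
  content: string | Array<{ type: string; text: string }>;
  reasoning_content?: string;
};

function ensureReasoningContent(history: HistoryMessage[]): HistoryMessage[] {
  return history.map((msg) => {
    if (msg.role !== "assistant") return msg;
    // Inject unconditionally: an empty string covers assistant messages
    // whose original reasoning was never persisted (e.g. DB replay).
    return { ...msg, reasoning_content: msg.reasoning_content ?? "" };
  });
}
```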

The fix exists — three separate PRs wrote it

Three contributors have submitted PRs with the complete fix:

  1. fix(provider): complete DeepSeek reasoning_content round-trip for multi-turn conversations #24250 by @knefenk — Covers all three layers: auto-enable interleaved, dynamic SDK key, and unconditional reasoning_content injection for all assistant messages including DB-replayed ones. Commit: 41eb35a
  2. fix: preserve existing reasoning_content on second interleaved pass (#24146 follow-up) #24428 by @claudianus — Fixes second-pass regression where empty reasoningText overwrites preserved providerOptions values. Commit: 86dd22f
  3. fix: generalize reasoning_content injection across AI SDK providers #24895 by @Malkovich-666 — Generalizes the fix across all AI SDK providers with a providerOptionsKey() helper. Commit: c42e105

All three were closed without merge.
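For reviewers, the second-pass regression that #24428 targets can be illustrated like this (a hedged sketch; the `providerOptions` key and field names are assumptions based on the PR description):

```typescript
// Per-provider options preserved between passes, keyed by provider id.
type ProviderOptions = Record<string, { reasoning_content?: string }>;

// The bug: a second interleaved pass with empty reasoningText overwrote the
// reasoning_content the first pass had preserved. The guard below only
// overwrites when the new pass actually produced reasoning text.
function mergeReasoning(
  existing: ProviderOptions,
  providerKey: string,
  reasoningText: string,
): ProviderOptions {
  const prior = existing[providerKey]?.reasoning_content;
  const next = reasoningText !== "" ? reasoningText : prior;
  return {
    ...existing,
    [providerKey]: { ...existing[providerKey], reasoning_content: next },
  };
}
```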

Environment

  • OpenCode v1.14.31 (Linux binary)
  • Provider: @ai-sdk/openai-compatible via LLM Gateway (pinned to DeepSeek API)
  • Model: deepseek-v4-pro with reasoning: true and interleaved: { field: "reasoning_content" }
  • Also reproduced with direct DeepSeek API (bypassing gateway)

Request

Please review and merge one of the existing complete fixes (#24250, #24895) so that DeepSeek V4 with thinking mode works reliably in multi-turn conversations.
