.codecompanion/adapters/adapters.md
+19 lines changed: 19 additions & 0 deletions
@@ -108,6 +108,25 @@ This structure provides clear separation of concerns:
- **response**: Pure transformations for parsing responses (chat, inline, tokens)
- **tools**: Tool-specific operations (formatting calls and responses)

### Canonical Tool-Result Shape

Tool result messages produced by `format_response` are stored in `chat.messages` and re-read by every adapter's `build_messages`/`form_messages`. To keep messages portable across adapters, every adapter must write the same canonical shape:

```lua
{
  role = "tool",
  content = output,
  tools = {
    call_id = tool_call.id, -- required: matches the LLM's tool call
    name = tool_call["function"].name, -- required: function name (Gemini uses this)
    is_error = false, -- optional: Anthropic uses this
  },
  opts = { visible = false },
}
```

Adapter-specific extras (e.g. the OpenAI Responses `id`) are allowed but are ignored by other adapters. When reading these messages back, an adapter should treat any message with `role == "tool"` as a tool result; never gate on adapter-specific fields.
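As a sketch of that read-back rule (the helper name `is_tool_result` and the surrounding loop are hypothetical, not part of the CodeCompanion API), an adapter's message-building code might look like:

```lua
-- Hypothetical helper: detect a canonical tool result when rebuilding
-- provider payloads. Gate only on the portable `role` field, never on
-- adapter-specific extras such as the OpenAI Responses `id`.
local function is_tool_result(message)
  return message.role == "tool"
end

-- Hypothetical usage inside a build_messages/form_messages loop:
-- for _, message in ipairs(chat.messages) do
--   if is_tool_result(message) then
--     -- map message.tools.call_id and message.tools.name into the
--     -- provider's own tool-result format here
--   end
-- end
```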
### Calling Handlers

Throughout CodeCompanion, handlers are called using the `adapters.call_handler()` function, which provides backwards compatibility: