go-llm/v2/openai/openai.go
steve cbaf41f50c
feat(v2): add ReasoningLevel option; thinking/reasoning across providers
Introduces an opt-in level-based reasoning toggle (low/medium/high) that
each provider translates to its native parameter (Anthropic's mapping is
sketched after this list):

- Anthropic: thinking.budget_tokens (1024/8000/24000), with temperature
  forced to default and MaxTokens auto-grown above the budget.
- OpenAI/xAI/Groq via openaicompat: reasoning_effort string, gated by a
  new Rules.SupportsReasoning predicate so non-reasoning models don't
  receive the parameter. xAI uses Rules.MapReasoningEffort to remap
  "medium" to "high" since its API only accepts low|high.
- Google: thinking_config.thinking_budget + include_thoughts:true.
- DeepSeek: SupportsReasoning=false (reasoner is always-on; the
  reasoning_content trace was already extracted via openaicompat).
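
A minimal sketch of the Anthropic side of that mapping; the helper name
and the llm import path are assumptions, while the budgets and the
empty-level no-op come from this change:

    import llm "gitea.stevedudenhoeffer.com/steve/go-llm/v2" // path assumed

    // anthropicBudget maps a level to thinking.budget_tokens; MaxTokens
    // is then grown above the budget as described above.
    func anthropicBudget(level llm.ReasoningLevel) int64 {
        switch level {
        case llm.ReasoningLow:
            return 1024
        case llm.ReasoningMedium:
            return 8000
        case llm.ReasoningHigh:
            return 24000
        }
        return 0 // empty level: thinking stays off
    }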

Reasoning content is surfaced as Response.Thinking on Complete and as
StreamEventThinking deltas during streaming. Provider-side: extracted
from Anthropic thinking content blocks, Google's part.Thought=true
parts, and the non-standard reasoning_content field that DeepSeek and
Groq emit (parsed out of raw JSON since openai-go doesn't type it).
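
A sketch of that raw-JSON extraction; the helper name is hypothetical,
and only the reasoning_content field name comes from the providers'
wire format:

    import "encoding/json"

    // reasoningContent pulls the untyped field out of a raw message or
    // stream delta. Complete stores the result in Response.Thinking;
    // Stream emits it as a StreamEventThinking delta.
    func reasoningContent(raw []byte) (string, bool) {
        var extra struct {
            ReasoningContent string `json:"reasoning_content"`
        }
        if err := json.Unmarshal(raw, &extra); err != nil || extra.ReasoningContent == "" {
            return "", false
        }
        return extra.ReasoningContent, true
    }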

Public API (usage sketched below):
  - llm.ReasoningLevel + ReasoningLow/Medium/High constants
  - llm.WithReasoning(level) request option
  - Model.WithReasoning(level) for baked-in defaults
  - provider.Request.Reasoning, provider.Response.Thinking
  - provider.StreamEventThinking
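
Hypothetical caller-side usage; Complete's exact signature is an
assumption, while the option, constant, and field names are the ones
listed above:

    // Per-request; Model.WithReasoning(level) bakes in the same default.
    resp, err := model.Complete(ctx, req, llm.WithReasoning(llm.ReasoningHigh))
    if err != nil {
        return err
    }
    fmt.Println(resp.Thinking) // populated when the provider returns a trace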

Tests cover Rules-based gating, MapReasoningEffort, reasoning_content
extraction (Complete + Stream), Anthropic budget mapping, and
temperature suppression when thinking is enabled. Existing behavior is
unchanged when Reasoning is the empty string.
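
One of the gating tests might look roughly like this; the table is
illustrative, and the prefix rule is the one in openai.go below:

    func TestIsReasoningModel(t *testing.T) {
        for _, c := range []struct {
            model string
            want  bool
        }{
            {"o3-mini", true},
            {"gpt-5", true},
            {"gpt-4o-mini", false},
        } {
            if got := isReasoningModel(c.model); got != c.want {
                t.Errorf("isReasoningModel(%q) = %v, want %v", c.model, got, c.want)
            }
        }
    }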

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-25 03:58:42 +00:00


// Package openai implements the go-llm v2 provider interface for OpenAI.
//
// The actual wire-protocol logic lives in the shared openaicompat package;
// this file encodes OpenAI-specific Rules (temperature is rejected on o-series
// and gpt-5* models) and supplies the default base URL.
package openai

import (
	"strings"

	"gitea.stevedudenhoeffer.com/steve/go-llm/v2/openaicompat"
)

// DefaultBaseURL is the public OpenAI Chat Completions endpoint.
const DefaultBaseURL = "https://api.openai.com/v1"

// Provider is the OpenAI chat-completion provider. It's a type alias over
// openaicompat.Provider so existing callers using openai.Provider keep compiling.
type Provider = openaicompat.Provider

// New creates a new OpenAI provider. An empty baseURL uses DefaultBaseURL.
func New(apiKey string, baseURL string) *Provider {
	if baseURL == "" {
		baseURL = DefaultBaseURL
	}
	return openaicompat.New(apiKey, baseURL, openaicompat.Rules{
		RestrictTemperature: isReasoningModel,
		SupportsReasoning:   isReasoningModel,
	})
}

// isReasoningModel reports whether the named OpenAI model is a reasoning
// model (o-series or gpt-5*). Reasoning models reject a user-supplied
// temperature and accept a reasoning_effort parameter; everything else
// rejects reasoning_effort.
func isReasoningModel(model string) bool {
	return strings.HasPrefix(model, "o") || strings.HasPrefix(model, "gpt-5")
}
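
// For illustration only (not part of this file): per the commit message, the
// shared openaicompat layer consults these Rules before emitting the
// reasoning_effort parameter. A rough, assumed shape of that gate:
//
//	if req.Reasoning != "" && rules.SupportsReasoning != nil && rules.SupportsReasoning(model) {
//		effort := string(req.Reasoning)
//		if rules.MapReasoningEffort != nil {
//			effort = rules.MapReasoningEffort(effort) // e.g. xAI remaps "medium" to "high"
//		}
//		body.ReasoningEffort = effort // the request-body field name is an assumption
//	}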