Add go-llm v2: redesigned API for simpler LLM abstraction

v2 is a new Go module (v2/) with a dramatically simpler API:
- Unified Message type (no more Input marker interface)
- Define[T] for ergonomic tool creation with standard context.Context
- Chat session with automatic tool-call loop (agent loop)
- Streaming via pull-based StreamReader
- MCP one-call connect (MCPStdioServer, MCPHTTPServer, MCPSSEServer)
- Middleware support (logging, retry, timeout, usage tracking)
- Decoupled JSON Schema (map[string]any, no provider coupling)
- Sample tools: WebSearch, Browser, Exec, ReadFile, WriteFile, HTTP
- Providers: OpenAI, Anthropic, Google (all with streaming)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 20:00:08 -05:00
parent 85a848d96e
commit a4cb4baab5
28 changed files with 3598 additions and 0 deletions

v2/request.go

@@ -0,0 +1,37 @@
package llm

// RequestOption configures a single completion request.
type RequestOption func(*requestConfig)

type requestConfig struct {
	tools       *ToolBox
	temperature *float64
	maxTokens   *int
	topP        *float64
	stop        []string
}

// WithTools attaches a toolbox to the request.
func WithTools(tb *ToolBox) RequestOption {
	return func(c *requestConfig) { c.tools = tb }
}

// WithTemperature sets the sampling temperature.
func WithTemperature(t float64) RequestOption {
	return func(c *requestConfig) { c.temperature = &t }
}

// WithMaxTokens sets the maximum number of tokens to generate.
func WithMaxTokens(n int) RequestOption {
	return func(c *requestConfig) { c.maxTokens = &n }
}

// WithTopP sets the nucleus sampling parameter.
func WithTopP(p float64) RequestOption {
	return func(c *requestConfig) { c.topP = &p }
}

// WithStop sets stop sequences.
func WithStop(sequences ...string) RequestOption {
	return func(c *requestConfig) { c.stop = sequences }
}