# CLAUDE.md for go-llm v2

## Build and Test Commands

- Build project: `cd v2 && go build ./...`
- Run all tests: `cd v2 && go test ./...`
- Run a specific test: `cd v2 && go test -v -run <TestName> ./...`
- Tidy dependencies: `cd v2 && go mod tidy`
- Vet: `cd v2 && go vet ./...`

## Code Style Guidelines

- Indentation: Standard Go tabs
- Naming: camelCase for unexported, PascalCase for exported
- Error Handling: Always check and handle errors immediately; wrap with `fmt.Errorf("%w: ...", err)` (see the sketch after this list)
- Imports: Standard library first, then third-party, then internal packages
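
A minimal sketch of the error-handling and import conventions together; `loadTemplate` is a hypothetical helper, not part of the go-llm API:

```go
package example

import (
	// Standard library imports come first; third-party modules and
	// internal packages would follow in their own groups.
	"fmt"
	"os"
)

// loadTemplate checks its error immediately and wraps it in the
// fmt.Errorf("%w: ...", err) style used across this repo.
func loadTemplate(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("%w: reading template %s", err, path)
	}
	return string(data), nil
}
```

Wrapping with `%w` keeps the original error visible to `errors.Is` and `errors.As` further up the stack.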

## Package Structure

- Root package `llm` — public API (`Client`, `Model`, `Chat`, `ToolBox`, `Message` types)
- `provider/` — `Provider` interface that backends implement
- `openai/`, `anthropic/`, `google/` — Provider implementations
- `ollama/` — Native `/api/chat` provider, used by both `llm.Ollama()` (local) and `llm.OllamaCloud(apiKey)` (cloud)
- `tools/` — Ready-to-use sample tools (WebSearch, Browser, Exec, ReadFile, WriteFile, HTTP)
- `sandbox/` — Isolated Linux container environments via Proxmox LXC + SSH
- `internal/schema/` — JSON Schema generation from Go structs (sketched below)
- `internal/imageutil/` — Image compression utilities
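
A minimal sketch of how `internal/schema`, the `map[string]any` schema format, and a tool function relate; every name here (`SearchArgs`, `searchArgsSchema`, `webSearch`) is a hypothetical illustration, and the exact generator output and tool-registration API may differ:

```go
package example

import (
	"context"
	"fmt"
)

// SearchArgs is a hypothetical argument struct; internal/schema derives
// JSON Schemas from struct shapes like this one.
type SearchArgs struct {
	Query string `json:"query"`
	Limit int    `json:"limit,omitempty"`
}

// searchArgsSchema shows the provider-neutral map[string]any form such a
// struct could map to (see Key Design Decisions below).
var searchArgsSchema = map[string]any{
	"type": "object",
	"properties": map[string]any{
		"query": map[string]any{"type": "string"},
		"limit": map[string]any{"type": "integer"},
	},
	"required": []string{"query"},
}

// webSearch follows the tool-function shape described below: a standard
// context.Context in, a (string, error) pair out. How it is wired into a
// ToolBox is not shown here.
func webSearch(ctx context.Context, args SearchArgs) (string, error) {
	if args.Query == "" {
		return "", fmt.Errorf("empty query")
	}
	return fmt.Sprintf("results for %q (limit %d)", args.Query, args.Limit), nil
}
```

Keeping the schema as a plain `map[string]any` means the same value can be handed to any provider without importing provider-specific schema types.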

## Key Design Decisions

1. Unified `Message` type instead of marker interfaces
2. `map[string]any` JSON Schema (no provider coupling)
3. Tool functions return `(string, error)` and take a standard `context.Context`
4. `Chat.Send()` auto-loops tool calls; `Chat.SendRaw()` for manual control
5. MCP one-call connect: `MCPStdioServer(ctx, cmd, args...)`
6. Streaming via pull-based `StreamReader.Next()`
7. Middleware for logging, retry, timeout, usage tracking
8. Ollama uses the native `/api/chat` API rather than the OpenAI-compatible `/v1` endpoint. The native API supports `think: false` for thinking-capable models, has more reliable tool calling, and is roughly 15-20% lower latency. Local and cloud share the same provider; only the API key and base URL differ: `llm.Ollama()` targets `http://localhost:11434` with no `Authorization` header, while `llm.OllamaCloud(key)` targets `https://ollama.com` with `Authorization: Bearer <key>` (see the sketch after this list).
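
To make decision 8 concrete, here is a minimal sketch of the native endpoint that the `ollama/` provider wraps. It calls Ollama's public HTTP API directly rather than the go-llm types; the model name is an assumption, and `think` needs a thinking-capable model on a recent Ollama:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatResponse captures only the fields read below; the full native
// /api/chat reply carries more metadata (timings, token counts, done flag).
type chatResponse struct {
	Message struct {
		Role    string `json:"role"`
		Content string `json:"content"`
	} `json:"message"`
}

func main() {
	// "think": false disables the thinking pass on thinking-capable models;
	// the OpenAI-compat /v1 endpoint has no equivalent switch. The model
	// name here is an assumption.
	body, err := json.Marshal(map[string]any{
		"model":    "qwen3",
		"messages": []map[string]string{{"role": "user", "content": "Say hello."}},
		"stream":   false,
		"think":    false,
	})
	if err != nil {
		panic(fmt.Errorf("%w: encoding request", err))
	}

	// llm.Ollama() targets this base URL with no Authorization header;
	// llm.OllamaCloud(key) would swap in https://ollama.com plus a Bearer token.
	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(fmt.Errorf("%w: calling /api/chat", err))
	}
	defer resp.Body.Close()

	var out chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(fmt.Errorf("%w: decoding response", err))
	}
	fmt.Println(out.Message.Content)
}
```

Per the notes above, pointing the same request at `https://ollama.com/api/chat` with an `Authorization: Bearer <key>` header is all that separates the cloud path from the local one.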