# CLAUDE.md for go-llm v2
## Build and Test Commands

- Build project: `cd v2 && go build ./...`
- Run all tests: `cd v2 && go test ./...`
- Run specific test: `cd v2 && go test -v -run <TestName> ./...`
- Tidy dependencies: `cd v2 && go mod tidy`
- Vet: `cd v2 && go vet ./...`
## Code Style Guidelines

- Indentation: Standard Go tabs
- Naming: `camelCase` for unexported, `PascalCase` for exported
- Error Handling: Always check and handle errors immediately. Wrap with `fmt.Errorf("%w: ...", err)`
- Imports: Standard library first, then third-party, then internal packages
## Package Structure

- Root package `llm` — public API (Client, Model, Chat, ToolBox, Message types)
- `provider/` — Provider interface that backends implement
- `openai/`, `anthropic/`, `google/` — Provider implementations
- `ollama/` — Native `/api/chat` provider, used by both `llm.Ollama()` (local) and `llm.OllamaCloud(apiKey)` (cloud)
- `tools/` — Ready-to-use sample tools (WebSearch, Browser, Exec, ReadFile, WriteFile, HTTP)
- `sandbox/` — Isolated Linux container environments via Proxmox LXC + SSH
- `internal/schema/` — JSON Schema generation from Go structs
- `internal/imageutil/` — Image compression utilities
## Key Design Decisions

- Unified `Message` type instead of marker interfaces
- `map[string]any` JSON Schema (no provider coupling)
- Tool functions return `(string, error)` and use standard `context.Context`
- `Chat.Send()` auto-loops tool calls; `Chat.SendRaw()` for manual control
- MCP one-call connect: `MCPStdioServer(ctx, cmd, args...)`
- Streaming via pull-based `StreamReader.Next()`
- Middleware for logging, retry, timeout, usage tracking
- Ollama uses the native `/api/chat` API rather than the OpenAI-compatible `/v1` endpoint: the native API supports `think: false` for thinking-capable models, has more reliable tool calling, and is approximately 15-20% lower latency. Local and cloud share the same provider; only the apiKey/baseURL differ. `llm.Ollama()` targets `http://localhost:11434` with no Authorization header; `llm.OllamaCloud(key)` targets `https://ollama.com` with `Authorization: Bearer <key>`.