feat: add DeepSeek, Moonshot, xAI, Groq, Ollama; drop v1; migrate TUI to v2
Five OpenAI-compatible providers join the library as first-class constructors
(llm.DeepSeek, llm.Moonshot, llm.XAI, llm.Groq, llm.Ollama). Their wire-level
implementation is shared via a new v2/openaicompat package, which is the
extracted guts of the old v2/openai provider; each provider supplies its own
Rules value to declare per-model constraints (e.g., DeepSeek Reasoner rejects
tools and temperature, Moonshot/xAI accept images only on *-vision* models,
Groq rejects audio input). v2/openai itself becomes a thin wrapper that sets
RestrictTemperature for o-series and gpt-5 models.

A new provider registry (v2/registry.go) exposes llm.Providers() and drives the
TUI's provider picker, so adding a provider in future is a single-file change.
The TUI at cmd/llm was migrated from v1 to v2 and moved to v2/cmd/llm. With
nothing else depending on v1, the v1 code at the repo root (all .go files,
schema/, internal/, provider/, root go.mod/go.sum) is deleted.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
# CLAUDE.md for go-llm

## Build and Test Commands

- Build project: `go build ./...`
- Run all tests: `go test ./...`
- Run specific test: `go test -v -run <TestName> ./...`
- Tidy dependencies: `go mod tidy`

All Go code now lives under `v2/`. The module path is
`gitea.stevedudenhoeffer.com/steve/go-llm/v2`. There is no module at the
repository root anymore; the v1 code at the root was deleted after all
consumers migrated to v2.
## Code Style Guidelines

- **Indentation**: Use standard Go tabs for indentation.
- **Naming**:
  - Use `camelCase` for internal/private variables and functions.
  - Use `PascalCase` for exported types, functions, and struct fields.
  - Interface names should be concise (e.g., `LLM`, `ChatCompletion`).
- **Error Handling**:
  - Always check and handle errors immediately.
  - Wrap errors with context using `fmt.Errorf("%w: ...", err)`.
  - Use the project's internal `Error` struct in `error.go` when differentiating between error types is needed.
- **Project Structure**:
  - `llm.go`: Contains core interfaces (`LLM`, `ChatCompletion`) and shared types (`Message`, `Role`, `Image`).
  - Provider implementations are in `openai.go`, `anthropic.go`, and `google.go`.
  - Schema definitions for tool calling are in the `schema/` directory.
  - `mcp.go`: MCP (Model Context Protocol) client integration for connecting to MCP servers.
- **Imports**: Organize imports into groups: standard library, then third-party libraries.
- **Documentation**: Use standard Go doc comments for exported symbols.
- **README.md**: The README.md file should always be kept up to date with any significant changes to the project.

See `v2/CLAUDE.md` for build/test commands and per-package guidance.

## CLI Tool

The interactive TUI lives at `v2/cmd/llm`. From the `v2/` directory:

- Build CLI: `go build ./cmd/llm`
- Run CLI: `./llm` (or `llm.exe` on Windows)
- Run without building: `go run ./cmd/llm`

### CLI Features

- Interactive TUI for testing all go-llm features
- Support for all registered providers (OpenAI, Anthropic, Google, DeepSeek, Moonshot, xAI, Groq, Ollama)
- Image input (file path, URL, or base64)
- Tool/function calling with demo tools
- Temperature control and settings

### Key Bindings

- `Enter` - Send message
- `Ctrl+I` - Add image
- `Ctrl+T` - Toggle tools panel
- `Ctrl+P` - Change provider
- `Ctrl+M` - Change model
- `Ctrl+S` - Settings
- `Ctrl+N` - New conversation
- `Esc` - Exit/Cancel
## MCP (Model Context Protocol) Support

The library supports connecting to MCP servers to use their tools. MCP servers can be connected via:

- **stdio**: Run a command as a subprocess
- **sse**: Connect to an SSE endpoint
- **http**: Connect to a streamable HTTP endpoint

### Usage Example

```go
ctx := context.Background()

// Create and connect to an MCP server
server := &llm.MCPServer{
	Name:    "my-server",
	Command: "my-mcp-server",
	Args:    []string{"--some-flag"},
}
if err := server.Connect(ctx); err != nil {
	log.Fatal(err)
}
defer server.Close()

// Add the server to a toolbox
toolbox := llm.NewToolBox().WithMCPServer(server)

// Use the toolbox in requests - MCP tools are automatically available
req := llm.Request{
	Messages: []llm.Message{{Role: llm.RoleUser, Text: "Use the MCP tool"}},
	Toolbox:  toolbox,
}
```

To run the TUI:

```sh
cd v2 && go run ./cmd/llm
```

### MCPServer Options

- `Name`: Friendly name for logging
- `Command`: Command to run (for stdio transport)
- `Args`: Command arguments
- `Env`: Additional environment variables
- `URL`: Endpoint URL (for sse/http transport)
- `Transport`: "stdio" (default), "sse", or "http"
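For the sse/http transports, only `URL` and `Transport` change relative to the stdio example above; a minimal sketch of the options (the server name and endpoint URL are illustrative):

```go
// Connect to a remote MCP server over SSE instead of stdio.
server := &llm.MCPServer{
	Name:      "remote-server",
	URL:       "https://example.com/mcp", // illustrative endpoint
	Transport: "sse",                     // or "http" for streamable HTTP
}
```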

### Provider Picker

The TUI's provider picker iterates `llm.Providers()`, so every registered
provider (OpenAI, Anthropic, Google, DeepSeek, Moonshot, xAI, Groq, Ollama)
appears in the picker automatically. Status is derived from each provider's
env var; Ollama shows as "(local)" because it needs no key.