# CLAUDE.md for go-llm
## Build and Test Commands

- Build project: `go build ./...`
- Run all tests: `go test ./...`
- Run specific test: `go test -v -run <TestName> ./...`
- Tidy dependencies: `go mod tidy`
## Code Style Guidelines

- Indentation: Use standard Go tabs for indentation.
- Naming:
  - Use `camelCase` for internal/private variables and functions.
  - Use `PascalCase` for exported types, functions, and struct fields.
  - Interface names should be concise (e.g., `LLM`, `ChatCompletion`).
- Error Handling:
  - Always check and handle errors immediately.
  - Wrap errors with context using `fmt.Errorf("%w: ...", err)` (see the sketch after this list).
  - Use the project's internal `Error` struct in `error.go` when differentiating between error types is needed.
- Project Structure:
  - `llm.go`: Contains core interfaces (`LLM`, `ChatCompletion`) and shared types (`Message`, `Role`, `Image`).
  - Provider implementations are in `openai.go`, `anthropic.go`, and `google.go`.
  - Schema definitions for tool calling are in the `schema/` directory.
  - `mcp.go`: MCP (Model Context Protocol) client integration for connecting to MCP servers.
- Imports: Organize imports into groups: standard library, then third-party libraries.
- Documentation: Use standard Go doc comments for exported symbols.
- README.md: Keep `README.md` up to date with any significant changes to the project.
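
A minimal sketch of the error-wrapping convention referenced above (the `loadConfig` helper, its file path, and `main` are hypothetical and exist only for illustration; the `fmt.Errorf("%w: ...", err)` pattern is the one recommended in the guidelines):

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// loadConfig is a hypothetical helper used only to demonstrate the convention.
func loadConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		// Wrap immediately with %w so callers can still match the underlying error.
		return nil, fmt.Errorf("loading config %q: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := loadConfig("does-not-exist.json"); err != nil {
		// errors.Is still recognizes the underlying os error through the %w wrapping.
		fmt.Println("wrapped:", err, "| is ErrNotExist:", errors.Is(err, os.ErrNotExist))
	}
}
```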
## CLI Tool

- Build CLI: `go build ./cmd/llm`
- Run CLI: `./llm` (or `llm.exe` on Windows)
- Run without building: `go run ./cmd/llm`
### CLI Features
- Interactive TUI for testing all go-llm features
- Support for OpenAI, Anthropic, and Google providers
- Image input (file path, URL, or base64)
- Tool/function calling with demo tools
- Temperature control and settings
### Key Bindings

- `Enter` - Send message
- `Ctrl+I` - Add image
- `Ctrl+T` - Toggle tools panel
- `Ctrl+P` - Change provider
- `Ctrl+M` - Change model
- `Ctrl+S` - Settings
- `Ctrl+N` - New conversation
- `Esc` - Exit/Cancel
## MCP (Model Context Protocol) Support
The library supports connecting to MCP servers to use their tools. MCP servers can be connected via:
- `stdio`: Run a command as a subprocess
- `sse`: Connect to an SSE endpoint
- `http`: Connect to a streamable HTTP endpoint
### Usage Example
```go
ctx := context.Background()

// Create and connect to an MCP server
server := &llm.MCPServer{
	Name:    "my-server",
	Command: "my-mcp-server",
	Args:    []string{"--some-flag"},
}
if err := server.Connect(ctx); err != nil {
	log.Fatal(err)
}
defer server.Close()

// Add the server to a toolbox
toolbox := llm.NewToolBox().WithMCPServer(server)

// Use the toolbox in requests - MCP tools are automatically available
req := llm.Request{
	Messages: []llm.Message{{Role: llm.RoleUser, Text: "Use the MCP tool"}},
	Toolbox:  toolbox,
}
```
### MCPServer Options

- `Name`: Friendly name for logging
- `Command`: Command to run (for stdio transport)
- `Args`: Command arguments
- `Env`: Additional environment variables
- `URL`: Endpoint URL (for sse/http transport)
- `Transport`: "stdio" (default), "sse", or "http"
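
For a remote server, the same options select the `sse` or `http` transport. A minimal sketch, assuming only the fields listed above and the `Connect`/`Close`/`WithMCPServer` calls shown in the usage example (the server name and endpoint URL are placeholders):

```go
ctx := context.Background()

// Connect to a remote MCP server over SSE instead of spawning a subprocess.
// The URL below is a placeholder for illustration only.
remote := &llm.MCPServer{
	Name:      "remote-server",
	URL:       "https://example.com/mcp/sse",
	Transport: "sse",
}
if err := remote.Connect(ctx); err != nil {
	log.Fatal(err)
}
defer remote.Close()

// Its tools join the toolbox exactly like the stdio example above.
toolbox := llm.NewToolBox().WithMCPServer(remote)
```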