The Ollama provider now targets /api/chat directly via the native provider
introduced in the previous commits. Public API is unchanged for callers
that go through llm.Ollama() (and is extended by Task 5's OllamaCloud()
constructor).
DefaultBaseURL was renamed to DefaultLocalBaseURL (without the trailing
/v1 segment used by the OpenAI-compat path). registry.go is updated
correspondingly; no other callers referenced the old name.
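For concreteness, a sketch of the rename (the package name and literal values
are assumed; only the constant names and the dropped /v1 come from this change):

```go
package ollama // package name assumed

// DefaultLocalBaseURL replaces the old DefaultBaseURL; the native /api/chat
// path does not use the trailing /v1 segment that the OpenAI-compat path did.
const DefaultLocalBaseURL = "http://localhost:11434" // was DefaultBaseURL = "http://localhost:11434/v1"
```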
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Reads Ollama's NDJSON stream (one JSON object per line) and emits
provider.StreamEvent values for text, thinking, tool-call start/delta/end,
and a final Done event carrying assembled Response and Usage. Uses
bufio.Scanner with a 4 MiB max-line buffer so multi-KB tool-call deltas
parse cleanly, and accepts tool-call arguments delivered either as
escaped string fragments (delta-style) or a complete JSON object
(one-shot).
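A rough sketch of that scanning loop, under simplified wire and event shapes;
the real StreamEvent and NDJSON field names live in the provider and ollama
packages and may differ:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// chatLine mirrors one NDJSON object from /api/chat; field names are an
// approximation of Ollama's wire format, not the provider's actual types.
type chatLine struct {
	Message struct {
		Content   string `json:"content"`
		Thinking  string `json:"thinking"`
		ToolCalls []struct {
			Function struct {
				Name      string          `json:"name"`
				Arguments json.RawMessage `json:"arguments"` // object (one-shot) or escaped string fragment (delta)
			} `json:"function"`
		} `json:"tool_calls"`
	} `json:"message"`
	Done bool `json:"done"`
}

// scanStream reads one JSON object per line and prints pseudo-events in
// place of provider.StreamEvent values.
func scanStream(r io.Reader) error {
	sc := bufio.NewScanner(r)
	// 4 MiB max line so large tool-call argument deltas still fit.
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)
	for sc.Scan() {
		var line chatLine
		if err := json.Unmarshal(sc.Bytes(), &line); err != nil {
			return err
		}
		if line.Message.Thinking != "" {
			fmt.Println("thinking:", line.Message.Thinking)
		}
		if line.Message.Content != "" {
			fmt.Println("text:", line.Message.Content)
		}
		for _, tc := range line.Message.ToolCalls {
			// Arguments may arrive as a complete JSON object or as an
			// escaped string fragment; normalize both to a string delta.
			args := string(tc.Function.Arguments)
			var frag string
			if json.Unmarshal(tc.Function.Arguments, &frag) == nil {
				args = frag
			}
			fmt.Println("tool-call delta:", tc.Function.Name, args)
		}
		if line.Done {
			fmt.Println("done") // the real loop emits Done with Response and Usage
		}
	}
	return sc.Err()
}

func main() {
	stream := `{"message":{"content":"Hi"},"done":false}` + "\n" +
		`{"message":{"content":" there"},"done":true}` + "\n"
	_ = scanStream(strings.NewReader(stream))
}
```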
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Non-streaming /api/chat support, including (a wire-level sketch follows the list):
- Vision via images: []base64
- Tool calls on assistant + tool-role response messages
- think field accepting string reasoning levels (or "true"/"false")
- Authorization header when apiKey is non-empty (cloud mode)
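A rough sketch of the request side; field names follow Ollama's documented
/api/chat API as recalled here, not the provider's own types:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatMessage and chatRequest are approximations of the wire shapes.
type chatMessage struct {
	Role      string            `json:"role"`
	Content   string            `json:"content"`
	Images    []string          `json:"images,omitempty"`     // base64-encoded image data
	ToolCalls []json.RawMessage `json:"tool_calls,omitempty"` // echoed back on assistant messages
}

type chatRequest struct {
	Model    string          `json:"model"`
	Messages []chatMessage   `json:"messages"`
	Think    json.RawMessage `json:"think,omitempty"` // a reasoning level string or true/false
	Stream   bool            `json:"stream"`
}

func main() {
	body, _ := json.Marshal(chatRequest{
		Model:    "qwen3", // model name is illustrative
		Messages: []chatMessage{{Role: "user", Content: "hi"}},
		Think:    json.RawMessage(`"low"`),
	})
	req, _ := http.NewRequest("POST", "http://localhost:11434/api/chat", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	apiKey := "" // cloud mode: a non-empty key adds the Authorization header
	if apiKey != "" {
		req.Header.Set("Authorization", "Bearer "+apiKey)
	}
	fmt.Println(req.URL, string(body))
}
```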
Tool-call arguments are passed as JSON objects on the wire and surfaced
as JSON-string Arguments on provider.ToolCall. Tool calls are assigned
synthetic IDs (tc_<index>) when Ollama omits one, so the round trip back
(an assistant tool_calls message followed by a tool-role message) remains
correlated.
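The ID synthesis itself is small; a self-contained sketch, with a local
struct standing in for provider.ToolCall:

```go
package main

import "fmt"

// toolCall is a stand-in for provider.ToolCall; the field names are assumptions.
type toolCall struct {
	ID        string
	Name      string
	Arguments string // JSON-encoded arguments, surfaced as a string
}

// assignIDs gives each tool call a synthetic tc_<index> ID when the wire
// response omits one, so later assistant tool_calls + tool-role messages
// can still be correlated on the round trip.
func assignIDs(calls []toolCall) {
	for i := range calls {
		if calls[i].ID == "" {
			calls[i].ID = fmt.Sprintf("tc_%d", i)
		}
	}
}

func main() {
	calls := []toolCall{{Name: "get_weather", Arguments: `{"city":"Oslo"}`}}
	assignIDs(calls)
	fmt.Println(calls[0].ID) // tc_0
}
```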
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds wire types and a Provider struct that will replace the
openaicompat-based Ollama shim with a native /api/chat implementation.
Complete and Stream methods are stubs; subsequent commits implement them.
Adjusts the existing ollama.go to drop the type alias on
openaicompat.Provider (renaming the legacy shim to a temporary internal
helper) so the new native Provider type does not collide. Public New()
still returns the openaicompat-backed provider until Task 4 swaps it.
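A skeleton of the stubbed type, with placeholder field names and signatures;
the real methods take the provider package's request and response types:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"net/http"
)

// Provider is the native /api/chat implementation. At this commit it only
// carries configuration; fields and signatures here are illustrative.
type Provider struct {
	baseURL string
	apiKey  string
	client  *http.Client
}

// Complete will perform a non-streaming /api/chat call in a later commit.
func (p *Provider) Complete(ctx context.Context) error {
	return errors.New("ollama: Complete not implemented")
}

// Stream will perform a streaming /api/chat call in a later commit.
func (p *Provider) Stream(ctx context.Context) error {
	return errors.New("ollama: Stream not implemented")
}

func main() {
	p := &Provider{baseURL: "http://localhost:11434", client: http.DefaultClient}
	fmt.Println(p.Complete(context.Background()))
}
```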
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Five OpenAI-compatible providers join the library as first-class constructors
(llm.DeepSeek, llm.Moonshot, llm.XAI, llm.Groq, llm.Ollama). Their wire-level
implementation is shared via a new v2/openaicompat package, which is the
extracted guts of the old v2/openai provider; each provider supplies its own
Rules value to declare per-model constraints (e.g., DeepSeek Reasoner rejects
tools and temperature, Moonshot/xAI accept images only on *-vision* models,
Groq rejects audio input). v2/openai itself becomes a thin wrapper that sets
RestrictTemperature for o-series and gpt-5 models.
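To make the Rules idea concrete, a hypothetical value for DeepSeek; the field
names are invented for illustration and the real openaicompat.Rules type may
differ:

```go
package main

import (
	"fmt"
	"strings"
)

// rules is a hypothetical stand-in for openaicompat's Rules value.
type rules struct {
	RejectTools       func(model string) bool
	RejectTemperature func(model string) bool
	AcceptImages      func(model string) bool
}

// deepseekRules mirrors the constraint described above: the Reasoner model
// rejects tools and temperature.
var deepseekRules = rules{
	RejectTools:       func(m string) bool { return strings.Contains(m, "reasoner") },
	RejectTemperature: func(m string) bool { return strings.Contains(m, "reasoner") },
	AcceptImages:      func(m string) bool { return false },
}

func main() {
	fmt.Println(deepseekRules.RejectTools("deepseek-reasoner")) // true
	fmt.Println(deepseekRules.RejectTools("deepseek-chat"))     // false
}
```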
A new provider registry (v2/registry.go) exposes llm.Providers() and drives
the TUI's provider picker, so adding a provider in the future is a
single-file change.
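A sketch of how a registry-driven picker can consume llm.Providers(); the
entry type below is a stand-in, not the real registry type:

```go
package main

import "fmt"

// providerInfo stands in for whatever llm.Providers() returns; the real
// entry type is defined in v2/registry.go and is not reproduced here.
type providerInfo struct {
	Name string
}

// pickerItems shows the intended flow: the TUI lists whatever the registry
// reports, with no provider-specific code in the picker itself.
func pickerItems(providers []providerInfo) []string {
	items := make([]string, 0, len(providers))
	for _, p := range providers {
		items = append(items, p.Name)
	}
	return items
}

func main() {
	fmt.Println(pickerItems([]providerInfo{{Name: "DeepSeek"}, {Name: "Groq"}, {Name: "Ollama"}}))
}
```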
The TUI at cmd/llm was migrated from v1 to v2 and moved to v2/cmd/llm. With
nothing else depending on v1, the v1 code at the repo root (all .go files,
schema/, internal/, provider/, root go.mod/go.sum) is deleted.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>