feat: comprehensive token usage tracking for V2

Add provider-specific usage details, fix streaming usage, and return
usage from all high-level APIs (Chat.Send, Generate[T], Agent.Run).

Breaking changes (call-site sketch after this list):
- Chat.Send/SendMessage/SendWithImages now return (string, *Usage, error)
- Generate[T]/GenerateWith[T] now return (T, *Usage, error)
- Agent.Run/RunMessages now return (string, *Usage, error)
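A minimal call-site sketch of the new three-value returns. The Chat and Usage
types below are simplified stand-ins that mirror the shapes named above; the
real V2 package paths, constructors, and field sets may differ.

// Illustrative only; Chat stands in for the library's chat type.
package main

import (
	"context"
	"fmt"
)

// Usage mirrors the token fields used elsewhere in this commit
// (InputTokens/OutputTokens/TotalTokens/Details appear in the diff below).
type Usage struct {
	InputTokens  int
	OutputTokens int
	TotalTokens  int
	Details      map[string]int
}

// Chat is a placeholder; only the Send signature matters here.
type Chat struct{}

// Send now returns (reply, usage, error) instead of (reply, error).
func (c *Chat) Send(ctx context.Context, msg string) (string, *Usage, error) {
	return "ok", &Usage{InputTokens: 3, OutputTokens: 1, TotalTokens: 4}, nil
}

func main() {
	reply, usage, err := (&Chat{}).Send(context.Background(), "hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(reply)
	if usage != nil { // usage may be nil if the provider reported nothing
		fmt.Printf("tokens: in=%d out=%d total=%d\n",
			usage.InputTokens, usage.OutputTokens, usage.TotalTokens)
	}
}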

New features (usage sketch after this list):
- Usage.Details map for provider-specific token breakdowns
  (reasoning, cached, audio, thoughts tokens)
- OpenAI streaming now captures usage via StreamOptions.IncludeUsage
- Google streaming now captures UsageMetadata from final chunk
- UsageTracker.Details() for accumulated detail totals
- ModelPricing and PricingRegistry for cost computation
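A small sketch of reading the Details map and doing the kind of cost math
ModelPricing enables. The detail key strings and ModelPricing fields are
assumptions for illustration; only the constant names
UsageDetailCachedInputTokens / UsageDetailCacheCreationTokens appear in the
diff below, and the rates are placeholders.

package main

import "fmt"

// Assumed string values for the provider.UsageDetail* keys referenced in the diff.
const (
	usageDetailCachedInputTokens   = "cached_input_tokens"
	usageDetailCacheCreationTokens = "cache_creation_tokens"
)

// ModelPricing stand-in: USD per million tokens (field names assumed).
type ModelPricing struct {
	InputPerMTok  float64
	OutputPerMTok float64
}

func main() {
	// Totals as they might come back from Chat.Send or an accumulated tracker.
	inputTokens, outputTokens := 1200, 350
	details := map[string]int{
		usageDetailCachedInputTokens:   800,
		usageDetailCacheCreationTokens: 200,
	}

	for key, n := range details {
		fmt.Printf("%s: %d\n", key, n)
	}

	// Cost computation of the kind a pricing registry would perform.
	p := ModelPricing{InputPerMTok: 3.0, OutputPerMTok: 15.0}
	cost := float64(inputTokens)/1e6*p.InputPerMTok + float64(outputTokens)/1e6*p.OutputPerMTok
	fmt.Printf("approx cost: $%.6f\n", cost)
}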

Closes #2

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 04:33:18 +00:00
parent 7e1705c385
commit 5b687839b2
17 changed files with 684 additions and 61 deletions


@@ -270,6 +270,16 @@ func (p *Provider) convertResponse(resp anth.MessagesResponse) provider.Response
 		OutputTokens: resp.Usage.OutputTokens,
 		TotalTokens:  resp.Usage.InputTokens + resp.Usage.OutputTokens,
 	}
+	details := map[string]int{}
+	if resp.Usage.CacheCreationInputTokens > 0 {
+		details[provider.UsageDetailCacheCreationTokens] = resp.Usage.CacheCreationInputTokens
+	}
+	if resp.Usage.CacheReadInputTokens > 0 {
+		details[provider.UsageDetailCachedInputTokens] = resp.Usage.CacheReadInputTokens
+	}
+	if len(details) > 0 {
+		res.Usage.Details = details
+	}
 	return res
 }
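
For context, a consumer-side sketch of what the conversion above yields when a
response used prompt caching: the cache counters land in Usage.Details while
TotalTokens stays the plain input+output sum. The struct and key strings below
are simplified stand-ins mirroring the diff, not the library's exact definitions.

package main

import "fmt"

// Simplified stand-in for the usage struct populated by the hunk above.
type Usage struct {
	InputTokens  int
	OutputTokens int
	TotalTokens  int
	Details      map[string]int // left nil when the provider reports no extra counters
}

func main() {
	u := Usage{
		InputTokens:  1200,
		OutputTokens: 80,
		TotalTokens:  1280,
		Details: map[string]int{
			// Copied from CacheReadInputTokens / CacheCreationInputTokens by the
			// Anthropic conversion; the key strings here are assumed values.
			"cached_input_tokens":   900,
			"cache_creation_tokens": 300,
		},
	}
	fmt.Printf("billed input: %d (cache read: %d, cache write: %d)\n",
		u.InputTokens, u.Details["cached_input_tokens"], u.Details["cache_creation_tokens"])
}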