feat: comprehensive token usage tracking for V2
Add provider-specific usage details, fix streaming usage, and return usage from all high-level APIs (Chat.Send, Generate[T], Agent.Run).

Breaking changes:
- Chat.Send/SendMessage/SendWithImages now return (string, *Usage, error)
- Generate[T]/GenerateWith[T] now return (T, *Usage, error)
- Agent.Run/RunMessages now return (string, *Usage, error)

New features:
- Usage.Details map for provider-specific token breakdowns (reasoning, cached, audio, thoughts tokens)
- OpenAI streaming now captures usage via StreamOptions.IncludeUsage
- Google streaming now captures UsageMetadata from final chunk
- UsageTracker.Details() for accumulated detail totals
- ModelPricing and PricingRegistry for cost computation

Closes #2

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
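The breaking changes above can be illustrated with a minimal sketch. The `Usage` struct with a `Details` map is named in the commit; the exact field names, the `send` helper, and the detail keys below are illustrative assumptions, not the library's actual definitions.

```go
package main

import "fmt"

// Usage mirrors the shape described in the commit message: top-level
// token counts plus a provider-specific Details map (field names here
// are assumptions).
type Usage struct {
	InputTokens  int
	OutputTokens int
	TotalTokens  int
	// Provider-specific breakdowns, e.g. reasoning or cached tokens.
	Details map[string]int
}

// send is a stand-in for the new three-value Chat.Send signature;
// the real method also takes a context and returns an error.
func send(prompt string) (string, *Usage) {
	return "ok", &Usage{
		InputTokens:  3,
		OutputTokens: 5,
		TotalTokens:  8,
		Details:      map[string]int{"reasoning_tokens": 2},
	}
}

func main() {
	// Callers now destructure usage alongside the reply text.
	reply, usage := send("hello")
	fmt.Println(reply, usage.TotalTokens, usage.Details["reasoning_tokens"])
}
```

The design choice of returning `*Usage` (a pointer) lets providers that report no usage return nil rather than a zero-valued struct.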
@@ -270,6 +270,16 @@ func (p *Provider) convertResponse(resp anth.MessagesResponse) provider.Response
 		OutputTokens: resp.Usage.OutputTokens,
 		TotalTokens:  resp.Usage.InputTokens + resp.Usage.OutputTokens,
 	}
+	details := map[string]int{}
+	if resp.Usage.CacheCreationInputTokens > 0 {
+		details[provider.UsageDetailCacheCreationTokens] = resp.Usage.CacheCreationInputTokens
+	}
+	if resp.Usage.CacheReadInputTokens > 0 {
+		details[provider.UsageDetailCachedInputTokens] = resp.Usage.CacheReadInputTokens
+	}
+	if len(details) > 0 {
+		res.Usage.Details = details
+	}
 
 	return res
 }
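Per-call Details maps like the one built in the hunk above feed into accumulated totals (the commit mentions UsageTracker.Details() for this). A minimal sketch of that accumulation, assuming the key-summing merge shown here (mergeDetails and the key names are hypothetical, not the library's actual implementation):

```go
package main

import "fmt"

// mergeDetails sums a per-call detail map into a running total,
// creating the total map on first use. This mirrors the idea behind
// UsageTracker.Details(); names and semantics are assumptions.
func mergeDetails(total, next map[string]int) map[string]int {
	if total == nil {
		total = map[string]int{}
	}
	for k, v := range next {
		total[k] += v
	}
	return total
}

func main() {
	// Two calls report cached-token details; one also reports reasoning tokens.
	totals := mergeDetails(nil, map[string]int{"cached_input_tokens": 10})
	totals = mergeDetails(totals, map[string]int{"cached_input_tokens": 5, "reasoning_tokens": 7})
	fmt.Println(totals["cached_input_tokens"], totals["reasoning_tokens"]) // 15 7
}
```

Building the map only when a detail is nonzero, as the diff does, keeps `Usage.Details` nil for providers that report nothing, so accumulators can skip empty maps cheaply.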