feat: add audio input support to v2 providers
All checks were successful
CI / Lint (push) Successful in 9m37s
CI / Root Module (push) Successful in 10m53s
CI / V2 Module (push) Successful in 11m9s

Add an Audio struct alongside Image for sending audio attachments to
multimodal LLMs. OpenAI uses input_audio content parts (wav/mp3),
Google Gemini uses genai.NewPartFromBytes, and Anthropic skips
audio silently since its API does not support audio input.
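
For reference, a minimal sketch of what such an Audio attachment and the
OpenAI mapping could look like. The type and field names below (Audio,
Data, MIMEType, toOpenAIAudioPart) are illustrative assumptions, not the
exact API added in this commit; only the input_audio content-part shape
follows OpenAI's documented format.

// Sketch only: names are illustrative, not the commit's actual types.
package provider

import "encoding/base64"

// Audio is a hypothetical audio attachment, mirroring the existing Image type.
type Audio struct {
	Data     []byte // raw audio bytes
	MIMEType string // e.g. "audio/wav" or "audio/mpeg"
}

// toOpenAIAudioPart maps an Audio to an OpenAI input_audio content part.
// OpenAI accepts base64-encoded data with format "wav" or "mp3".
func toOpenAIAudioPart(a Audio) map[string]any {
	format := "wav"
	if a.MIMEType == "audio/mp3" || a.MIMEType == "audio/mpeg" {
		format = "mp3"
	}
	return map[string]any{
		"type": "input_audio",
		"input_audio": map[string]any{
			"data":   base64.StdEncoding.EncodeToString(a.Data),
			"format": format,
		},
	}
}

Gemini's Go SDK can take the same bytes directly via
genai.NewPartFromBytes(a.Data, a.MIMEType), while the Anthropic provider
drops the attachment, as the diff below shows.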

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 21:00:56 -05:00
parent fc2218b5fe
commit 7e1705c385
6 changed files with 137 additions and 1 deletion


@@ -204,6 +204,8 @@ func (p *Provider) buildRequest(req provider.Request) anth.MessagesRequest {
}
}
// Audio is not supported by Anthropic — skip silently.
// Merge consecutive same-role messages (Anthropic requires alternating)
if len(msgs) > 0 && msgs[len(msgs)-1].Role == role {
msgs[len(msgs)-1].Content = append(msgs[len(msgs)-1].Content, m.Content...)
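
For context, a standalone sketch of the consecutive-role merge pattern the
hunk above appends to. Only the msgs/role/Content identifiers come from the
diff; the msg type and mergeConsecutive helper are assumptions for
illustration, not the commit's actual structs.

// Merge consecutive same-role messages so the final list alternates
// between roles, as Anthropic's Messages API requires.
package main

import "fmt"

type msg struct {
	Role    string
	Content []string
}

func mergeConsecutive(in []msg) []msg {
	var msgs []msg
	for _, m := range in {
		// If the previous message has the same role, fold this
		// message's content into it instead of appending a new entry.
		if len(msgs) > 0 && msgs[len(msgs)-1].Role == m.Role {
			msgs[len(msgs)-1].Content = append(msgs[len(msgs)-1].Content, m.Content...)
			continue
		}
		msgs = append(msgs, m)
	}
	return msgs
}

func main() {
	out := mergeConsecutive([]msg{
		{Role: "user", Content: []string{"hello"}},
		{Role: "user", Content: []string{"(audio skipped)"}},
		{Role: "assistant", Content: []string{"hi"}},
	})
	fmt.Println(len(out)) // 2: the two user messages were merged
}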