December 2, 2024
New documentation for Google Vertex AI and Gemini tracing
Marc Klingen
Comprehensive guides for tracing Google Vertex AI and Gemini models with Langfuse
We’ve published comprehensive documentation on how to trace Google Vertex AI and Gemini models with Langfuse. This guide helps you implement observability for your Google Vertex AI applications, including the Gemini model family.
What’s Included
- Step-by-step Integration Guide: Detailed instructions for setting up Langfuse tracing with Google Vertex AI, including code examples and best practices.
- Framework Examples: Ready-to-use code snippets demonstrating how to implement tracing for application frameworks such as LangChain.
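As a rough sketch of what the LangChain example looks like, a Langfuse callback handler can be passed to a LangChain model call so that the call is traced automatically. This assumes the `langfuse` and `langchain-google-vertexai` packages are installed and that Langfuse and Google Cloud credentials are configured via environment variables; the model name and prompt are illustrative.

```python
# Sketch: tracing a Vertex AI / Gemini call made through LangChain.
# Assumes LANGFUSE_* and Google Cloud credentials are set in the environment.
from langfuse.callback import CallbackHandler
from langchain_google_vertexai import ChatVertexAI

langfuse_handler = CallbackHandler()  # reads LANGFUSE_* environment variables

llm = ChatVertexAI(model_name="gemini-1.5-pro")  # model name is illustrative

# Passing the handler as a callback sends the trace to Langfuse.
response = llm.invoke(
    "Summarize the benefits of LLM observability in one sentence.",
    config={"callbacks": [langfuse_handler]},
)
print(response.content)
```

See the documentation for the full, up-to-date setup; the snippet above is a condensed sketch, not a drop-in replacement for the guide.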
Key Tracing Features
- Automatic capture of prompts, completions, and token usage
- Latency tracking for model calls
- Cost calculation for Vertex AI usage
- Support for multi-modal inputs with Gemini
- Structured logging of model parameters and metadata
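To illustrate the idea behind cost calculation, the cost of a call can be derived from token counts and per-token prices. The prices below are hypothetical placeholders, not actual Vertex AI rates; in practice Langfuse maintains its own model price definitions and computes this for you.

```python
# Illustrative cost calculation from token usage. The per-million-token
# prices are placeholders, NOT real Vertex AI pricing.
PRICES_PER_1M_TOKENS = {
    # model: (input price, output price) in USD — hypothetical values
    "gemini-1.5-pro": (1.25, 5.00),
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single model call."""
    input_price, output_price = PRICES_PER_1M_TOKENS[model]
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# e.g. a call with 2,000 input tokens and 500 output tokens:
print(call_cost("gemini-1.5-pro", 2000, 500))  # → 0.005
```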
How to Get Started
- Check out our new documentation
- Follow the setup instructions to configure your Google Cloud credentials
- Implement tracing using our SDK examples
- Start monitoring your Vertex AI and Gemini model calls in the Langfuse UI
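The credential setup in the steps above boils down to a handful of environment variables. The key values and file path below are placeholders to replace with your own; `LANGFUSE_PUBLIC_KEY`, `LANGFUSE_SECRET_KEY`, and `LANGFUSE_HOST` are the variables the Langfuse SDK reads, and `GOOGLE_APPLICATION_CREDENTIALS` is the standard Google Cloud variable for a service-account key file.

```python
import os

# Placeholder values — substitute your own project keys and key-file path.
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"  # or your region/self-hosted URL

# Point the Google SDK at your service-account credentials.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"
```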
By the way, Google Vertex AI and Gemini are also supported in the LLM Playground, Prompt Experiments, and LLM-as-a-judge evaluations (changelog).