
JS/TS SDK

GitHub repository: langfuse/langfuse-js · npm: langfuse, langfuse-node

If you are working with Node.js, Deno, or Edge functions, the langfuse library is the simplest way to integrate Langfuse Tracing into your application. The library queues calls to make them non-blocking.

Supported runtimes:

  • Node.js
  • Edge: Vercel, Cloudflare, …
  • Deno

Want to capture data (e.g. user feedback) from the browser? Use LangfuseWeb.
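
For instance, a minimal browser-side sketch (assuming your backend exposes the traceId to the client):

import { LangfuseWeb } from "langfuse";
 
// Browser-side client, uses the public key only
const langfuseWeb = new LangfuseWeb({
  publicKey: "pk-lf-...",
});
 
// Attach user feedback as a score to an existing trace
await langfuseWeb.score({
  traceId: "trace-id-from-backend", // illustrative, pass from your backend
  name: "user_feedback",
  value: 1,
});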

Before getting started, please read the conceptual introduction to tracing.

Example

Simple example from the end-to-end example notebook:

server.ts
import { Langfuse } from "langfuse";
 
const langfuse = new Langfuse();
 
const trace = langfuse.trace({
  name: "my-AI-application-endpoint",
});
 
// Example generation creation
const generation = trace.generation({
  name: "chat-completion",
  model: "gpt-3.5-turbo",
  modelParameters: {
    temperature: 0.9,
    maxTokens: 2000,
  },
  input: messages,
});
 
// Application code
const chatCompletion = await llm.respond(prompt);
 
// End generation - sets endTime
generation.end({
  output: chatCompletion,
});

Installation

npm i langfuse
# or
yarn add langfuse
 
# Node.js < 18
npm i langfuse-node
 
# Deno
import { Langfuse } from "https://esm.sh/langfuse"
.env
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_BASEURL="https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_BASEURL="https://us.cloud.langfuse.com" # 🇺🇸 US region

import { Langfuse } from "langfuse"; // or "langfuse-node"
 
// without additional options
const langfuse = new Langfuse();
 
// with additional options
const langfuse = new Langfuse({
  release: "v1.0.0",
  requestTimeout: 10000,
});

Optional constructor parameters:

Variable | Description | Default value
--- | --- | ---
release | The release number/hash of the application to provide analytics grouped by release. | process.env.LANGFUSE_RELEASE or common system environment names
requestTimeout | Timeout in ms for requests | 10000
enabled | Set to false to disable sending events | true if API keys are set, otherwise false
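
For example, the enabled flag makes it easy to turn off tracing in tests without removing instrumentation code (a small sketch; the environment check is illustrative):

const langfuse = new Langfuse({
  enabled: process.env.NODE_ENV !== "test",
});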

In short-lived environments (e.g. serverless functions), make sure to always call langfuse.shutdownAsync() at the end to await all pending requests. (Learn more)

Making calls

  • Each backend execution is logged with a single trace.
  • Each trace can contain multiple observations to log the individual steps of the execution.
    • Observations can be of different types
      • Events are the basic building block. They are used to track discrete events in a trace.
      • Spans represent durations of units of work in a trace.
      • Generations are spans which are used to log generations of AI models. They contain additional attributes about the model and the prompt/completion and are specifically rendered in the Langfuse UI.
    • Observations can be nested (see the combined sketch below).
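
A combined sketch of these concepts (names are illustrative):

const trace = langfuse.trace({ name: "user-request" });
 
const span = trace.span({ name: "process-request" });
span.event({ name: "cache-miss" }); // discrete event
 
const generation = span.generation({
  name: "llm-call",
  model: "gpt-3.5-turbo",
  input: "Hello",
});
generation.end({ output: "Hi there!" }); // sets endTime
 
span.end();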

Create a trace

Traces are the top-level entity in the Langfuse API. They represent an execution flow in an LLM application, usually triggered by an external event.

// Example trace creation
const trace = langfuse.trace({
  name: "chat-app-session",
  userId: "user__935d7d1d-8625-4ef4-8651-544613e7bd22",
  metadata: { user: "user@langfuse.com" },
  tags: ["production"],
});
 
// Example update, same params as create, cannot change id
trace.update({
  metadata: {
    tag: "long-running",
  },
});
 
// Properties
trace.id; // string
 
// Create observations
trace.event({});
trace.span({});
trace.generation({});
 
// Add scores
trace.score({});

langfuse.trace() takes the following parameters

parameter | type | optional | description
--- | --- | --- | ---
id | string | yes | The id of the trace can be set, defaults to a random id. Set it to link traces to external systems or when grouping multiple runs into a single trace (e.g. messages in a chat thread).
name | string | yes | Identifier of the trace. Useful for sorting/filtering in the UI.
input | object | yes | The input of the trace. Can be any JSON object.
output | object | yes | The output of the trace. Can be any JSON object.
metadata | object | yes | Additional metadata of the trace. Can be any JSON object. Metadata is merged when being updated via the API.
sessionId | string | yes | Used to group multiple traces into a session in Langfuse. Use your own session/thread identifier.
userId | string | yes | The id of the user that triggered the execution. Used to provide user-level analytics.
version | string | yes | The version of the trace type. Used to understand how changes to the trace type affect metrics. Useful in debugging.
tags | string[] | yes | Tags are used to categorize or label traces. Traces can be filtered by tags in the UI and GET API. Tags can also be changed in the UI. Tags are merged and never deleted via the API.
public | boolean | yes | You can make a trace public to share it via a public link. This allows others to view the trace without needing to log in or be members of your Langfuse project.
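
For example, setting your own id and sessionId lets you group multiple runs of a chat thread (a sketch; the identifiers are illustrative):

const trace = langfuse.trace({
  id: "msg__81f3", // link the trace to your own message id
  name: "chat-message",
  sessionId: "thread__4a8d7c", // groups all traces of this thread into a session
  userId: "user__935d7d1d-8625-4ef4-8651-544613e7bd22",
});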

Observations

  • Events are the basic building block. They are used to track discrete events in a trace.
  • Spans represent durations of units of work in a trace.
  • Generations are spans which are used to log generations of AI models. They contain additional attributes about the model, the prompt/completion. For generations, token usage and model cost are automatically calculated.
  • Observations can be nested.

Create an Event

Events are used to track discrete events in a trace.

// Example event
const event = trace.event({
  name: "get-user-profile",
  metadata: {
    attempt: 2,
    httpRoute: "/api/retrieve-person",
  },
  input: {
    userId: "user__935d7d1d-8625-4ef4-8651-544613e7bd22",
  },
  output: {
    firstName: "Maxine",
    lastName: "Simons",
    email: "maxine.simons@langfuse.com",
  },
});
 
// Properties
event.id; // string
event.traceId; // string
event.parentObservationId; // string | undefined
 
// Create children
event.event({});
event.span({});
event.generation({});
 
// Add scores
event.score({});

*.event() takes the following parameters

parameter | type | optional | description
--- | --- | --- | ---
id | string | yes | The id of the event can be set, defaults to a random id.
startTime | Date | yes | The time at which the event started, defaults to the current time.
name | string | yes | Identifier of the event. Useful for sorting/filtering in the UI.
metadata | object | yes | Additional metadata of the event. Can be any JSON object. Metadata is merged when being updated via the API.
level | string | yes | The level of the event. Can be DEBUG, DEFAULT, WARNING or ERROR. Used for sorting/filtering of traces with elevated error levels and for highlighting in the UI.
statusMessage | string | yes | The status message of the event. Additional field for context of the event. E.g. the error message of an error event.
input | object | yes | The input to the event. Can be any JSON object.
output | object | yes | The output to the event. Can be any JSON object.
version | string | yes | The version of the event type. Used to understand how changes to the event type affect metrics. Useful in debugging.
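
The level and statusMessage parameters are useful for surfacing failures, e.g. (a sketch):

trace.event({
  name: "retrieval-failed",
  level: "ERROR",
  statusMessage: "Timeout after 3 retries",
  metadata: { httpRoute: "/api/retrieve-person" },
});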

Create a Span

Spans represent durations of units of work in a trace. The SDK provides convenient functions for generic spans as well as LLM spans (generations).

// Example span creation
const span = trace.span({
  name: "embedding-retrieval",
  input: {
    userInput: "How does Langfuse work?",
  },
});
 
// Example update
span.update({
  metadata: {
    httpRoute: "/api/retrieve-doc",
    embeddingModel: "bert-base-uncased",
  },
});
 
// Application code
const retrievedDocs = await retrieveDoc("How does Langfuse work?");
 
// Example end - sets endTime, optionally pass a body
span.end({
  output: {
    retrievedDocs,
  },
});
 
// Properties
span.id; // string
span.traceId; // string
span.parentObservationId; // string | undefined
 
// Create children
span.event({});
span.span({});
span.generation({});
 
// Add scores
span.score({});

*.span() takes the following parameters

parameter | type | optional | description
--- | --- | --- | ---
id | string | yes | The id of the span can be set, otherwise a random id is generated.
startTime | Date | yes | The time at which the span started, defaults to the current time.
endTime | Date | yes | The time at which the span ended.
name | string | yes | Identifier of the span. Useful for sorting/filtering in the UI.
metadata | object | yes | Additional metadata of the span. Can be any JSON object. Metadata is merged when being updated via the API.
level | string | yes | The level of the span. Can be DEBUG, DEFAULT, WARNING or ERROR. Used for sorting/filtering of traces with elevated error levels and for highlighting in the UI.
statusMessage | string | yes | The status message of the span. Additional field for context of the span. E.g. the error message of a failed span.
input | object | yes | The input to the span. Can be any JSON object.
output | object | yes | The output to the span. Can be any JSON object.
version | string | yes | The version of the span type. Used to understand how changes to the span type affect metrics. Useful in debugging.
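
Since startTime and endTime can be passed explicitly, a span can also be recorded after the work has completed (a sketch; retrieveDoc as in the example above):

const start = new Date();
const retrievedDocs = await retrieveDoc("How does Langfuse work?");
 
// Create the span retroactively with explicit timestamps
trace.span({
  name: "embedding-retrieval",
  startTime: start,
  endTime: new Date(),
  output: { retrievedDocs },
});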

Create a Generation

Generations are used to log generations of AI models. They contain additional attributes about the model and the prompt/completion and are specifically rendered in the Langfuse UI.

// Example generation creation
const generation = trace.generation({
  name: "chat-completion",
  model: "gpt-3.5-turbo",
  modelParameters: {
    temperature: 0.9,
    maxTokens: 2000,
  },
  input: messages,
});
 
// Application code
const chatCompletion = await llm.respond(prompt);
 
// Example update
generation.update({
  completionStartTime: new Date(),
});
 
// Example end - sets endTime, optionally pass a body
generation.end({
  output: chatCompletion,
});
 
// Properties
generation.id; // string
generation.traceId; // string
generation.parentObservationId; // string | undefined
 
// Create children
generation.event({});
generation.span({});
generation.generation({});
 
// Add scores
generation.score({});

*.generation() takes the following parameters

parameter | type | optional | description
--- | --- | --- | ---
id | string | yes | The id of the generation can be set, defaults to a random id.
name | string | yes | Identifier of the generation. Useful for sorting/filtering in the UI.
startTime | Date | yes | The time at which the generation started, defaults to the current time.
completionStartTime | Date | yes | The time at which the completion started (streaming). Set it to get latency analytics broken down into time until completion started and completion duration.
endTime | Date | yes | The time at which the generation ended.
model | string | yes | The name of the model used for the generation.
modelParameters | object | yes | The parameters of the model used for the generation; can be any key-value pairs.
input | object | yes | The input to the generation - the prompt. Can be any JSON object or string.
output | object | yes | The output to the generation - the completion. Can be any JSON object or string.
usage | object | yes | The usage object supports the OpenAI structure with (promptTokens, completionTokens, totalTokens) and a more generic version (input, output, total, unit, inputCost, outputCost, totalCost) where unit can be of value "TOKENS", "CHARACTERS", "MILLISECONDS", "SECONDS", "IMAGES". Refer to the docs on how tokens and costs are automatically calculated by Langfuse.
metadata | object | yes | Additional metadata of the generation. Can be any JSON object. Metadata is merged when being updated via the API.
level | string | yes | The level of the generation. Can be DEBUG, DEFAULT, WARNING or ERROR. Used for sorting/filtering of traces with elevated error levels and for highlighting in the UI.
statusMessage | string | yes | The status message of the generation. Additional field for context of the generation. E.g. the error message of a failed generation.
version | string | yes | The version of the generation type. Used to understand how changes to the generation type affect metrics. Reflects e.g. the version of a prompt.
prompt | Langfuse prompt | yes | Pass the prompt fetched from Langfuse Prompt Management via langfuse.getPrompt() in order to link the generation to a specific prompt version for analytics in Langfuse.
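
For example, usage can be reported when ending the generation, and completionStartTime captures time-to-first-token for streamed responses (a sketch; the stream handling and variables are illustrative):

const generation = trace.generation({
  name: "chat-completion",
  model: "gpt-3.5-turbo",
  input: messages,
});
 
// On the first streamed chunk
generation.update({ completionStartTime: new Date() });
 
// Once the stream has finished
generation.end({
  output: completion,
  usage: {
    promptTokens: 50,
    completionTokens: 49,
    totalTokens: 99,
  },
});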

Nesting of observations

Nesting of observations (spans, events, generations) is helpful to structure the trace in a hierarchical way.

# Simple example; there are no limits to how you nest observations
- trace: chat-app-session
  - span: chat-interaction
    - event: get-user-profile
    - generation: chat-completion

There are two options to nest observations: create children via the fluent interface on the parent object (first example below), or pass the trace and parent ids explicitly (second example).

const trace = langfuse.trace({ name: "chat-app-session" });
 
const span = trace.span({ name: "chat-interaction" });
 
span.event({ name: "get-user-profile" });
span.generation({ name: "chat-completion" });
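
Alternatively, observations can be nested by passing ids explicitly, which is useful when parent and child are created in different places (a sketch based on the low-level client methods):

const trace = langfuse.trace({ name: "chat-app-session" });
 
const span = langfuse.span({
  traceId: trace.id,
  name: "chat-interaction",
});
 
langfuse.event({
  traceId: trace.id,
  parentObservationId: span.id,
  name: "get-user-profile",
});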

Create score

Scores are used to evaluate executions/traces. They are attached to a single trace. If the score relates to a specific step of the trace, the score can optionally also be attached to the observation to enable evaluating it specifically.

await langfuse.score({
  traceId: message.traceId,
  observationId: message.generationId,
  name: "quality",
  value: 1,
  comment: "Factually correct",
});
 
// alternatively
trace.score({});
span.score({});
event.score({});
generation.score({});

langfuse.score() takes the following parameters

parameter | type | optional | description
--- | --- | --- | ---
traceId | string | no | The id of the trace to which the score should be attached. Automatically set if you use {trace,generation,span,event}.score({})
observationId | string | yes | The id of the observation to which the score should be attached. Automatically set if you use {generation,span,event}.score({})
name | string | no | Identifier of the score.
value | number | no | The value of the score. Can be any number, often standardized to 0..1
comment | string | yes | Additional context/explanation of the score.

Shutdown

The Langfuse SDK sends events asynchronously to the Langfuse server. You should call shutdown to exit cleanly before your application exits.

langfuse.shutdown();
// or
await langfuse.shutdownAsync();

Debugging

Issues with the SDKs can be caused by various reasons ranging from incorrectly configured API keys to network issues.

The SDK does not throw errors to protect your application process. Instead, you can optionally listen to errors:

langfuse.on("error", (error) => {
  // Whatever you want to do with the error
  console.error(error);
});

Alternatively, you can enable debugging to get detailed logs of what’s happening in the SDK.

langfuse.debug();

Short-lived execution environments (Lambda, serverless, Vercel, Cloudflare)

The SDK is optimized to run in the background to queue and flush all requests. Events can get lost if the process exits before all requests are flushed. To ensure all events are sent, use one of the following patterns:

Option 1: Waiting for flushAsync but returning immediately

Some platforms/frameworks support waiting for promises after the response but before exiting the process. This is the preferred way to ensure all events are sent without blocking the process.

Note: Most of these execution environments have a timeout after which the process is killed. Some of them lack observability to monitor these dangling promises.

  • Cloudflare workers: waitUntil (mdn, example)

  • Vercel (e.g. Next.js): waitUntil

    npm i @vercel/functions
    import { waitUntil } from "@vercel/functions";
    // within the api handler
    waitUntil(langfuse.flushAsync());
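
A sketch of a Next.js route handler using this pattern (the route and payload are illustrative):

import { waitUntil } from "@vercel/functions";
import { Langfuse } from "langfuse";
 
const langfuse = new Langfuse();
 
export async function POST(request: Request) {
  const trace = langfuse.trace({ name: "chat-app-session" });
  // ... application code ...
 
  // Flush in the background after the response is sent
  waitUntil(langfuse.flushAsync());
  return Response.json({ ok: true });
}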

Option 2: shutdownAsync

When the process exits use await langfuse.shutdownAsync() to make sure all requests are flushed and pending requests are awaited. Call this once at the end of your process.

Example:

const langfuse = new Langfuse({
  secretKey: "sk-lf-...",
  publicKey: "pk-lf-...",
});
 
export async function handler() {
  const trace = langfuse.trace({ name: "chat-app-session" });
 
  trace.event({ name: "get-user-profile" });
  const span = trace.span({ name: "chat-interaction" });
  span.generation({ name: "chat-completion", model: "gpt-3.5-turbo", input: prompt, output: completion });
 
  // So far all requests are queued
 
  // Now we want to flush and await all pending requests before the process exits
  await langfuse.shutdownAsync();
}

Upgrading from v2.x.x to v3.x.x

This release includes breaking changes only for users of the Langchain JS integration. The upgrade is non-breaking for all other parts of the SDK.

Upgrading from v1.x.x to v2.x.x

You can automatically migrate your codebase using grit, either online or with the following CLI command:


npx @getgrit/launcher apply langfuse_node_v2

The grit binary executes entirely locally with AST-based transforms. Be sure to audit its changes: we suggest ensuring you have a clean working tree beforehand, and running git add --patch afterwards.

Dropped support for Node.js < 16

As most of our users are using a modern JS/TS stack, we decided to drop support for Node < 16.

Rename prompt and completion to input and output

To ensure consistency throughout Langfuse, we have renamed the prompt and completion parameters in the generation function to input and output, respectively. This change brings them in line with the rest of the Langfuse API.

More generalized usage object

We improved the flexibility of the SDK by allowing you to ingest any type of usage while still supporting the OpenAI-style usage object.

v1.x.x

langfuse.generation({
  name: "chat-completion",
  usage: {
    promptTokens: 50,
    completionTokens: 49,
    totalTokens: 99,
  },
});

v2.x.x

The usage object supports the OpenAI structure with (promptTokens, completionTokens, totalTokens) and a more generic version (input, output, total, unit) where unit can be of value "TOKENS", "CHARACTERS", "MILLISECONDS", "SECONDS", "IMAGES". For some models the token counts are automatically calculated by Langfuse. Create an issue to request support for other units and models.

// Generic style
langfuse.generation({
  name: "my-claude-generation",
  usage: {
    input: 50,
    output: 49,
    total: 99,
    unit: "TOKENS",
  },
});
 
// OpenAI style
langfuse.generation({
  name: "my-openai-generation",
  usage: {
    promptTokens: 50,
    completionTokens: 49,
    totalTokens: 99,
  }, // defaults to "TOKENS" unit
});
 
// Set input and/or output, or total; total is calculated automatically if not set

Upgrading from v0.x to v1.x

Deprecation of externalTraceId

We deprecated the external trace id to simplify the API. Instead, you can now (optionally) directly set the trace id when creating the trace. Traces are still upserted in case a trace with this id already exists in your project.

// v0.x
const trace = langfuse.trace({ externalId: "123" });
// When manually linking observations and scores to the trace
const span = langfuse.span({ traceId: "123", traceIdType: "EXTERNAL" });
const score = langfuse.score({ traceId: "123", traceIdType: "EXTERNAL" });
 
// v1.x
const trace = langfuse.trace({ id: "123" });
// When manually linking observations and scores to the trace
const span = langfuse.span({ traceId: "123" });
const score = langfuse.score({ traceId: "123" });

Changes

  • The traceIdType property is deprecated
  • The externalId property on traces is deprecated

Ingestion of externalIds via older versions of the SDK or the API is still supported. However, we will remove support for this in the future and migrate all existing traces to the new format. We monitor the usage of deprecated properties on Langfuse Cloud and will reach out to you if we detect that you are still using them before a breaking change is introduced.

Introduction of shutdownAsync

With v1.0.0 we introduced the shutdownAsync method to make sure all requests are flushed and pending requests are awaited before the process exits. flush is still available but does not await pending requests that are already flushed.

This is especially important for short-lived execution environments such as lambdas and serverless functions.

export async function handler() {
  // Lambda / serverless function
 
  // v0.x
  await langfuse.flush();
 
  // v1.x
  await langfuse.shutdownAsync();
}

Example

We integrated the TypeScript SDK into the Vercel AI Chatbot project. Check out the blog post for screenshots and detailed explanations of the inner workings of the integration. The project includes:

  • Streamed responses from OpenAI
  • Conversations
  • Collection of user feedback on individual messages using the Web SDK
