LLM-as-a-judge scorers use a language model to evaluate outputs based on natural language criteria. They are best for subjective judgments like tone, helpfulness, or creativity that are difficult to encode in deterministic code. You can define LLM-as-a-judge scorers in three places:
  • Inline in SDK code: Define scorers directly in your evaluation scripts for local development or application-specific logic.
  • Pushed via CLI: Define scorers in TypeScript or Python files and push them to Braintrust for team-wide sharing and automatic evaluation of production logs.
  • Created in UI: Build scorers in the Braintrust web interface for rapid prototyping and simple configurations.
Most teams prototype in the UI, then push production-ready scorers via the CLI. See Scorers overview for guidance.
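For example, a scorer defined in a TypeScript file can be shared team-wide by pushing it with the Braintrust CLI (the filename here is illustrative):

npx braintrust push scorers.ts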

Score spans

Span-level scorers evaluate individual operations or outputs. Use them for measuring single LLM responses, checking specific tool calls, or validating individual outputs. Each matching span receives an independent score. Your prompt template can reference these variables:
  • {{input}}: The input to your task
  • {{output}}: The output from your task
  • {{expected}}: The expected output (optional)
  • {{metadata}}: Custom metadata from the test case
Use scorers inline in your evaluation code:
llm_scorer.eval.ts
import { Eval } from "braintrust";
import { LLMClassifierFromTemplate } from "autoevals";
import OpenAI from "openai";

const client = new OpenAI();

const MOVIE_DATASET = [
  {
    input:
      "A detective investigates a series of murders based on the seven deadly sins.",
    expected: "Se7en",
  },
  {
    input:
      "A thief who steals corporate secrets through the use of dream-sharing technology is given the inverse task of planting an idea into the mind of a C.E.O.",
    expected: "Inception",
  },
];

// Task under evaluation: identify a movie from its plot description.
async function task(input: string): Promise<string> {
  const response = await client.responses.create({
    model: "gpt-5-mini",
    input: [
      {
        role: "system",
        content:
          "Based on the following description, identify the movie. Reply with only the movie title.",
      },
      { role: "user", content: input },
    ],
  });
  return response.output_text ?? "";
}

// LLM-as-a-judge scorer: maps the judge's "correct"/"incorrect" choice to 1/0.
const correctnessScorer = LLMClassifierFromTemplate({
  name: "Correctness",
  promptTemplate: `You are evaluating a movie-identification task.

Output (model's answer): {{output}}
Expected (correct movie): {{expected}}

Does the output correctly identify the same movie as the expected answer?
Consider alternate titles (e.g. "Harry Potter 1" vs "Harry Potter and the Sorcerer's Stone") as correct.

Return only "correct" if the output is the right movie (exact or equivalent title).
Return only "incorrect" otherwise.`,
  choiceScores: {
    correct: 1,
    incorrect: 0,
  },
  useCoT: true,
});

Eval("Movie Matcher", {
  data: MOVIE_DATASET,
  task,
  scores: [correctnessScorer],
});
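To run the eval locally, point the Braintrust CLI at the file (assuming BRAINTRUST_API_KEY and OPENAI_API_KEY are set in your environment):

npx braintrust eval llm_scorer.eval.ts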

Score traces

Trace-level scorers evaluate entire execution traces, including all spans and conversation history. Use them for assessing multi-turn conversation quality, overall workflow completion, or when your scorer needs access to the full execution context. The scorer runs once per trace. Your prompt template can reference the {{thread}} variable, which provides the full conversation formatted as human-readable text. {{input}}, {{output}}, {{expected}}, and {{metadata}} are automatically populated from the root span of the trace.
Trace-level scoring requires TypeScript SDK v2.2.1+, Python SDK v0.5.6+, or Ruby SDK v0.2.1+.
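For example, a prompt-based trace-level scorer might reference the thread like this (an illustrative sketch; the rating criteria are placeholders):

Evaluate the coherence of this customer support conversation:

{{thread}}

Return "A" for highly coherent, "B" for mostly coherent, "C" for incoherent.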
Use scorers inline in your evaluation code:
trace_llm_scorer.eval.ts
import { Eval, wrapOpenAI, wrapTraced, type Scorer } from "braintrust";
import OpenAI from "openai";

// Unwrapped client for the judge, so scoring calls stay out of the task trace;
// wrapped client so the task's LLM calls are captured as spans.
const client = new OpenAI();
const wrappedClient = wrapOpenAI(new OpenAI());

const SUPPORT_DATASET = [
  { input: "My order hasn't arrived yet. Order #12345." },
  { input: "I need help resetting my password." },
];

const callLLM = wrapTraced(async function callLLM(
  messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[],
) {
  const response = await wrappedClient.chat.completions.create({
    model: "gpt-5-mini",
    messages,
  });
  return response.choices[0].message.content || "";
});

async function supportTask(input: string): Promise<string> {
  const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
    { role: "system", content: "You are a helpful customer support agent." },
  ];

  messages.push({ role: "user", content: input });
  const response1 = await callLLM(messages);
  messages.push({ role: "assistant", content: response1 });

  messages.push({ role: "user", content: "Can you provide more details?" });
  const response2 = await callLLM(messages);
  messages.push({ role: "assistant", content: response2 });

  messages.push({ role: "user", content: "Thank you for your help!" });
  const response3 = await callLLM(messages);

  return response3;
}

// Trace-level scorer: receives a trace handle and runs once per trace.
const conversationCoherence: Scorer = async ({ trace }) => {
  if (!trace) return null;

  // getThread() returns the full conversation across all spans as role/content messages.
  const thread = await trace.getThread();
  const threadText = thread
    .map(msg => `${msg.role}: ${msg.content}`)
    .join("\n\n");

  const response = await client.responses.create({
    model: "gpt-5-mini",
    input: [
      {
        role: "user",
        content: `Evaluate the coherence of this customer support conversation:

${threadText}

Rate the conversation coherence:
- "A" for highly coherent with natural flow and consistent context
- "B" for mostly coherent with minor gaps or context issues
- "C" for incoherent, disjointed, or lost context

Return only the letter (A, B, or C).`,
      },
    ],
  });

  const rating = response.output_text?.trim().toUpperCase() || "C";
  const choiceScores = { A: 1, B: 0.6, C: 0 };
  const score = choiceScores[rating as keyof typeof choiceScores] ?? 0;

  return {
    name: "Conversation coherence",
    score,
    metadata: { rating, thread_length: thread.length },
  };
};

Eval("Support Conversation Quality", {
  data: SUPPORT_DATASET,
  task: supportTask,
  scores: [conversationCoherence],
});
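Returning null when no trace handle is available skips scoring for that test case rather than recording a zero, and the metadata attached to the score (the raw rating and thread length) appears alongside the score in Braintrust for debugging.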

Set pass thresholds

Define minimum acceptable scores to automatically mark results as passing or failing. When configured, scores that meet or exceed the threshold are marked as passing (green highlighting with checkmark), while scores below are marked as failing (red highlighting).
Add __pass_threshold to the scorer’s metadata (value between 0 and 1):
import * as braintrust from "braintrust";

// Project handle; the name here is a placeholder for your own project.
const project = braintrust.projects.create({ name: "My project" });

project.scorers.create({
  name: "Helpfulness scorer",
  slug: "helpfulness-scorer",
  messages: [
    {
      role: "user",
      content: 'Rate the helpfulness of this response: {{output}}\n\nReturn "A" for very helpful, "B" for somewhat helpful, "C" for not helpful.',
    },
  ],
  model: "gpt-5-mini",
  choiceScores: { A: 1, B: 0.5, C: 0 },
  metadata: {
    __pass_threshold: 0.7,
  },
});
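With this configuration, only "A" ratings (score 1) meet the 0.7 threshold and are marked passing; "B" (0.5) and "C" (0) fall below it and are marked failing.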

Next steps