X Enterprises
fastify-x-ai

generate

Full-control text generation. Accepts a plain prompt string or a messages array (chat history), optional tool definitions, and per-call provider/model overrides.

Signature

fastify.xai.generate(params: GenerateParams): Promise<GenerateResult>

interface GenerateParams {
  prompt?: string
  messages?: Array<{ role: "user" | "assistant" | "system"; content: string }>
  system?: string
  provider?: "openai" | "anthropic" | "google"
  model?: string
  maxTokens?: number
  temperature?: number
  tools?: Record<string, ToolDefinition>
  maxSteps?: number
  output?: Output         // structured output — use generateStructured() instead
}

interface GenerateResult {
  text: string
  content: Array<ContentPart>
  toolCalls: Array<ToolCall>
  toolResults: Array<ToolResult>
  finishReason: string
  usage: { promptTokens: number; completionTokens: number; totalTokens: number }
  totalUsage: UsageObject
  steps: Array<StepObject>
  response: ResponseObject
  warnings: Array<Warning>
}

Params

Name         Type    Required                Description
prompt       string  One of prompt/messages  Plain-text prompt
messages     Array   One of prompt/messages  Chat message array [{ role, content }]
system       string  No                      System message prepended to the conversation
provider     string  No                      Override the default provider (openai, anthropic, google)
model        string  No                      Override the default model for this call
maxTokens    number  No                      Override defaultMaxTokens (default 4096)
temperature  number  No                      Override defaultTemperature (default 0.7)
tools        object  No                      Tool definitions for function calling (see examples)
maxSteps     number  No                      Maximum tool-execution steps (required when using tools)
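To make the prompt/messages/system interplay concrete, here is a minimal sketch of how those parameters presumably compose into one conversation: `system` is prepended, and `prompt` acts as shorthand for a single user message. `composeMessages` is a hypothetical illustration, not a plugin export.

```javascript
// Hypothetical helper (not part of the plugin) illustrating the documented
// semantics: `system` is prepended to the conversation, and `prompt` is
// shorthand for a single user message.
function composeMessages({ system, messages = [], prompt }) {
  const history = prompt ? [{ role: "user", content: prompt }] : messages;
  return system ? [{ role: "system", content: system }, ...history] : history;
}

const msgs = composeMessages({
  system: "You are terse.",
  messages: [
    { role: "user", content: "Hi" },
    { role: "assistant", content: "Hello." },
    { role: "user", content: "Summarize our chat." },
  ],
});
// msgs is a 4-element array starting with the system message
```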

Returns

A GenerateResult object. The most commonly used fields:

Field         Type    Description
text          string  Final generated text
toolCalls     Array   Tool invocations the model made
toolResults   Array   Results returned by tool execute functions
finishReason  string  "stop", "tool-calls", "length", etc.
usage         object  Token usage for this call
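The relationship between per-step usage and totalUsage is not spelled out here. If, as in many multi-step generation APIs, totalUsage aggregates the usage of every step in a tool-calling run, the relation can be sketched as follows; `sumUsage` and the sample step data are illustrative assumptions, not plugin behavior guarantees.

```javascript
// Hypothetical sketch: assuming totalUsage is the element-wise sum of each
// step's usage object across a multi-step (tool-calling) run.
function sumUsage(steps) {
  return steps.reduce(
    (acc, step) => ({
      promptTokens: acc.promptTokens + step.usage.promptTokens,
      completionTokens: acc.completionTokens + step.usage.completionTokens,
      totalTokens: acc.totalTokens + step.usage.totalTokens,
    }),
    { promptTokens: 0, completionTokens: 0, totalTokens: 0 }
  );
}

const steps = [
  { usage: { promptTokens: 24, completionTokens: 12, totalTokens: 36 } },
  { usage: { promptTokens: 40, completionTokens: 47, totalTokens: 87 } },
];
// sumUsage(steps) → { promptTokens: 64, completionTokens: 59, totalTokens: 123 }
```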

Throws

Error                                                    When
xAI generate: Either 'prompt' or 'messages' is required  Neither prompt nor messages provided
xAI: Provider '…' not configured                         Specified or default provider has no API key
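A minimal sketch of guarding against the first error, using the message text from the table above. `validateGenerateParams` is an illustrative stand-in, not a plugin export; the plugin presumably performs an equivalent check before calling the provider.

```javascript
// Illustrative guard mirroring the documented error message; not part of
// the plugin's public API.
function validateGenerateParams(params) {
  if (!params.prompt && !params.messages) {
    throw new Error("xAI generate: Either 'prompt' or 'messages' is required");
  }
  return params;
}

try {
  validateGenerateParams({});
} catch (err) {
  console.error(err.message);
  // "xAI generate: Either 'prompt' or 'messages' is required"
}
```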

Examples

Basic prompt

const result = await fastify.xai.generate({
  prompt: "Explain quantum computing in two sentences.",
  system: "You are a physics professor.",
  maxTokens: 200,
});

console.log(result.text);
// "Quantum computing uses quantum bits (qubits)..."
console.log(result.usage);
// { promptTokens: 24, completionTokens: 47, totalTokens: 71 }

Tool calling — weather lookup

import { z } from "zod";

const result = await fastify.xai.generate({
  prompt: "What's the weather like in San Francisco?",
  tools: {
    getWeather: {
      description: "Get current weather for a city",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => {
        // call a real weather API here
        return { temperature: 72, condition: "sunny", city };
      },
    },
  },
  maxSteps: 3,
});

console.log(result.text);
// "The weather in San Francisco is 72°F and sunny."
console.log(result.toolCalls[0]);
// { toolName: "getWeather", args: { city: "San Francisco" } }
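Under the hood, each tool call the model emits is presumably dispatched to the matching tool's execute function, with the parsed args as input and the return value recorded as the tool result. A self-contained sketch of that single step; `runToolCall` and the call/result shapes are illustrative assumptions, not plugin internals.

```javascript
// Illustrative sketch (not a plugin export): dispatch one model tool call
// to the matching tool's execute function and package its result.
async function runToolCall(tools, call) {
  const tool = tools[call.toolName];
  if (!tool) throw new Error(`Unknown tool: ${call.toolName}`);
  const result = await tool.execute(call.args);
  return { toolName: call.toolName, args: call.args, result };
}

const tools = {
  getWeather: {
    execute: async ({ city }) => ({ temperature: 72, condition: "sunny", city }),
  },
};

runToolCall(tools, { toolName: "getWeather", args: { city: "San Francisco" } })
  .then((r) => console.log(r.result.condition)); // logs "sunny"
```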

Per-call provider override

// Use Anthropic for a single call even if defaultProvider is "openai"
const result = await fastify.xai.generate({
  prompt: "Summarize the key ideas of stoicism.",
  provider: "anthropic",
  model: "claude-sonnet-4-20250514",
  maxTokens: 300,
});

See Also

  • stream — Streaming variant of generate
  • chat — Convenience wrapper for message-history conversations
  • generateStructured — Zod-schema-validated structured output

AI Context

package: "@xenterprises/fastify-x-ai"
method: fastify.xai.generate(params)
use-when: Full-control text generation with prompt or messages, tool calling, and per-call model/provider overrides
params: prompt or messages (required), model, provider, maxTokens, temperature, tools, system
returns: { text, usage, toolCalls, toolResults, finishReason }
Copyright © 2026