fastify-x-ai
generate
Full-control text generation. Accepts a plain prompt string or a messages array (chat history), optional tool definitions, and per-call provider/model overrides.
Signature
fastify.xai.generate(params: GenerateParams): Promise<GenerateResult>
interface GenerateParams {
prompt?: string
messages?: Array<{ role: "user" | "assistant" | "system"; content: string }>
system?: string
provider?: "openai" | "anthropic" | "google"
model?: string
maxTokens?: number
temperature?: number
tools?: Record<string, ToolDefinition>
maxSteps?: number
output?: Output // structured output — use generateStructured() instead
}
interface GenerateResult {
text: string
content: Array<ContentPart>
toolCalls: Array<ToolCall>
toolResults: Array<ToolResult>
finishReason: string
usage: { promptTokens: number; completionTokens: number; totalTokens: number }
totalUsage: UsageObject
steps: Array<StepObject>
response: ResponseObject
warnings: Array<Warning>
}
Params
| Name | Type | Required | Description |
|---|---|---|---|
| prompt | string | One of prompt/messages | Plain-text prompt |
| messages | Array | One of prompt/messages | Chat message array [{ role, content }] |
| system | string | No | System message prepended to the conversation |
| provider | string | No | Override default provider (openai, anthropic, google) |
| model | string | No | Override default model for this call |
| maxTokens | number | No | Override defaultMaxTokens (default 4096) |
| temperature | number | No | Override defaultTemperature (default 0.7) |
| tools | object | No | Tool definitions for function calling (see examples) |
| maxSteps | number | No | Maximum tool execution steps (required when using tools) |
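Per-call values take precedence over the plugin defaults; anything omitted falls back. A minimal sketch of that merge, assuming defaults of 4096 tokens and temperature 0.7 (`resolveCallOptions` is a hypothetical helper for illustration, not part of the plugin API):

```javascript
// Hypothetical illustration of override-vs-default resolution;
// the real plugin performs this internally.
const DEFAULTS = { maxTokens: 4096, temperature: 0.7 };

function resolveCallOptions(params) {
  // per-call values win; anything omitted falls back to the defaults
  return { ...DEFAULTS, ...params };
}

const opts = resolveCallOptions({ prompt: "Hi", temperature: 0.2 });
console.log(opts.maxTokens, opts.temperature); // 4096 0.2
```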
Returns
A GenerateResult object. The most-used fields:
| Field | Type | Description |
|---|---|---|
| text | string | Final generated text |
| toolCalls | Array | Tool invocations the model made |
| toolResults | Array | Results returned by tool execute functions |
| finishReason | string | "stop", "tool-calls", "length", etc. |
| usage | object | Token usage for this call |
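A sketch of inspecting these fields on a result; the `result` object below is canned sample data shaped like a GenerateResult, not real model output:

```javascript
// Canned GenerateResult-shaped sample, for illustration only.
const result = {
  text: "The weather in San Francisco is 72°F and sunny.",
  toolCalls: [{ toolName: "getWeather", args: { city: "San Francisco" } }],
  toolResults: [{ toolName: "getWeather", result: { temperature: 72, condition: "sunny" } }],
  finishReason: "stop",
  usage: { promptTokens: 24, completionTokens: 47, totalTokens: 71 },
};

// "length" means the reply hit maxTokens and was cut off
const truncated = result.finishReason === "length";
const usedTools = result.toolCalls.length > 0;
console.log(truncated, usedTools, result.usage.totalTokens); // false true 71
```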
Throws
| Error | When |
|---|---|
| xAI generate: Either 'prompt' or 'messages' is required | Neither prompt nor messages provided |
| xAI: Provider '…' not configured | Specified or default provider has no API key |
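Both failure modes can be guarded before the call. A hedged sketch in which `preflight` is a hypothetical helper mirroring the documented error messages, not part of the plugin:

```javascript
// Hypothetical pre-flight check reproducing the documented errors;
// the real plugin raises these from generate() itself.
function preflight(params, configuredProviders, defaultProvider = "openai") {
  if (!params.prompt && !params.messages) {
    throw new Error("xAI generate: Either 'prompt' or 'messages' is required");
  }
  const provider = params.provider ?? defaultProvider;
  if (!configuredProviders.includes(provider)) {
    throw new Error(`xAI: Provider '${provider}' not configured`);
  }
}

let message = "";
try {
  preflight({ prompt: "Hi", provider: "anthropic" }, ["openai"]);
} catch (err) {
  message = err.message;
}
console.log(message); // xAI: Provider 'anthropic' not configured
```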
Examples
Basic prompt
const result = await fastify.xai.generate({
prompt: "Explain quantum computing in two sentences.",
system: "You are a physics professor.",
maxTokens: 200,
});
console.log(result.text);
// "Quantum computing uses quantum bits (qubits)..."
console.log(result.usage);
// { promptTokens: 24, completionTokens: 47, totalTokens: 71 }
Tool calling — weather lookup
import { z } from "zod";
const result = await fastify.xai.generate({
prompt: "What's the weather like in San Francisco?",
tools: {
getWeather: {
description: "Get current weather for a city",
parameters: z.object({ city: z.string() }),
execute: async ({ city }) => {
// call a real weather API here
return { temperature: 72, condition: "sunny", city };
},
},
},
maxSteps: 3,
});
console.log(result.text);
// "The weather in San Francisco is 72°F and sunny."
console.log(result.toolCalls[0]);
// { toolName: "getWeather", args: { city: "San Francisco" } }
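maxSteps caps the number of model round-trips in a tool-calling loop: each step either produces final text or a tool call whose result is fed back to the model. A conceptual sketch of that loop (not the plugin's actual implementation; `toyModel` and `runSteps` are stand-ins for illustration):

```javascript
// Conceptual sketch of the maxSteps loop: iterate until the model stops
// requesting tools, or the step budget runs out.
function runSteps(model, maxSteps) {
  const steps = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = model.next(steps); // one model turn
    steps.push(step);
    if (step.finishReason !== "tool-calls") break; // no more tool use: done
  }
  return steps;
}

// toy stand-in model: one tool call, then a final answer
const toyModel = {
  next: (steps) =>
    steps.length === 0
      ? { finishReason: "tool-calls", toolName: "getWeather" }
      : { finishReason: "stop", text: "72°F and sunny" },
};

const steps = runSteps(toyModel, 3);
console.log(steps.length, steps[1].finishReason); // 2 stop
```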
Per-call provider override
// Use Anthropic for a single call even if defaultProvider is "openai"
const result = await fastify.xai.generate({
prompt: "Summarize the key ideas of stoicism.",
provider: "anthropic",
model: "claude-sonnet-4-20250514",
maxTokens: 300,
});
See Also
- stream — Streaming variant of generate
- chat — Convenience wrapper for message-history conversations
- generateStructured — Zod-schema-validated structured output
AI Context
package: "@xenterprises/fastify-x-ai"
method: fastify.xai.generate(params)
use-when: Full-control text generation with prompt or messages, tool calling, and per-call model/provider overrides
params: prompt or messages (required), model, provider, maxTokens, temperature, tools, system
returns: { text, usage, toolCalls, toolResults, finishReason }
