X Enterprises
fastify-x-ai

stream

Streaming text generation. Accepts the same params as generate plus lifecycle callbacks. Returns the raw streamText result from the Vercel AI SDK — iterate result.textStream to consume chunks.

Signature

fastify.xai.stream(params: StreamParams): Promise<StreamResult>

interface StreamParams {
  prompt?: string
  messages?: Array<{ role: "user" | "assistant" | "system"; content: string }>
  system?: string
  provider?: "openai" | "anthropic" | "google"
  model?: string
  maxTokens?: number
  temperature?: number
  tools?: Record<string, ToolDefinition>
  maxSteps?: number
  onChunk?: (event: { chunk: Chunk }) => void
  onFinish?: (event: { text: string; usage: UsageObject }) => void
  onError?: (event: { error: Error }) => void
}

Params

Name         Type      Required                Description
prompt       string    One of prompt/messages  Plain-text prompt
messages     array     One of prompt/messages  Chat message array [{ role, content }]
system       string    No                      System message
provider     string    No                      Override default provider ("openai", "anthropic", or "google")
model        string    No                      Override default model
maxTokens    number    No                      Override defaultMaxTokens
temperature  number    No                      Override defaultTemperature
tools        object    No                      Tool definitions
maxSteps     number    No                      Max tool execution steps
onChunk      function  No                      Called for each streamed chunk
onFinish     function  No                      Called when the stream completes with { text, usage }
onError      function  No                      Called on stream error

Returns

The raw StreamResult from streamText. Key properties:

Property    Type                   Description
textStream  AsyncIterable<string>  Async iterator of text chunks
text        Promise<string>        Full text once the stream completes
usage       Promise<UsageObject>   Token usage once the stream completes
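To make the return shape concrete, here is a self-contained mock of a StreamResult-like object. This is not the plugin's implementation, just a sketch of the shape described above, which can also be handy for unit-testing route handlers without a real provider (the names mockStreamResult and demo are illustrative):

```typescript
// Mock of the StreamResult shape: an async-iterable textStream plus
// text/usage promises that resolve once the stream is exhausted.
function mockStreamResult(chunks: string[]) {
  async function* textStream() {
    for (const c of chunks) yield c;
  }
  return {
    textStream: textStream(),
    text: Promise.resolve(chunks.join("")),
    usage: Promise.resolve({ totalTokens: chunks.length }),
  };
}

async function demo(): Promise<string> {
  const result = mockStreamResult(["Hello, ", "world"]);

  // Consume the stream chunk by chunk...
  let streamed = "";
  for await (const chunk of result.textStream) streamed += chunk;

  // ...and confirm it matches the resolved full text.
  const full = await result.text;
  return streamed === full ? streamed : "mismatch";
}
```

Note that with the real SDK you can either iterate textStream or await text; the mock above mirrors that dual interface.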

Throws

Error                                                  When
xAI stream: Either 'prompt' or 'messages' is required  Neither prompt nor messages provided
xAI: Provider '…' not configured                       The specified or default provider has no API key
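A minimal sketch of the kind of guard that produces the first error above. The helper name assertStreamParams is illustrative, not part of the plugin's API; it only shows the validation rule callers should satisfy:

```typescript
interface StreamParamsLike {
  prompt?: string;
  messages?: Array<{ role: string; content: string }>;
}

// Enforces the documented rule: at least one of prompt/messages is required.
function assertStreamParams(params: StreamParamsLike): void {
  const hasPrompt = typeof params.prompt === "string" && params.prompt.length > 0;
  const hasMessages = Array.isArray(params.messages) && params.messages.length > 0;
  if (!hasPrompt && !hasMessages) {
    throw new Error("xAI stream: Either 'prompt' or 'messages' is required");
  }
}
```

Validating params before calling stream() lets a route return a 400 instead of surfacing the plugin's error as a 500.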

Examples

Streaming a chat response in a Fastify route

fastify.post("/chat/stream", async (request, reply) => {
  // Take over the raw response so Fastify doesn't also try to send a reply
  reply.hijack();
  reply.raw.setHeader("Content-Type", "text/plain; charset=utf-8");

  const result = await fastify.xai.stream({
    messages: request.body.messages,
    system: "You are a helpful assistant.",
  });

  for await (const chunk of result.textStream) {
    reply.raw.write(chunk);
  }

  reply.raw.end();
});
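The route above streams plain chunked text. For proper Server-Sent Events (a text/event-stream response that an EventSource client can consume), each chunk needs to be wrapped in a data: frame. A small sketch, where sseFrame is an illustrative helper, not part of the plugin:

```typescript
// Wrap a text chunk as a Server-Sent Events frame.
// Per the SSE format, multi-line payloads need one "data:" line per line,
// and every event is terminated by a blank line.
function sseFrame(chunk: string): string {
  return (
    chunk
      .split("\n")
      .map((line) => `data: ${line}`)
      .join("\n") + "\n\n"
  );
}
```

Inside the route you would set "Content-Type: text/event-stream" and call reply.raw.write(sseFrame(chunk)) for each streamed chunk.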

Lifecycle callbacks with token tracking

let totalTokens = 0;

const result = await fastify.xai.stream({
  prompt: "Write a short story about a robot who dreams.",
  onChunk: ({ chunk }) => {
    // Only "text-delta" chunks carry a textDelta payload
    if (chunk.type === "text-delta") {
      process.stdout.write(chunk.textDelta);
    }
  },
  onFinish: ({ text, usage }) => {
    totalTokens = usage.totalTokens;
    console.log(`\nFinished — ${totalTokens} tokens`);
  },
  onError: ({ error }) => {
    fastify.log.error({ err: error }, "Stream error");
  },
});

// Alternatively await the full text from the promise
const fullText = await result.text;

See Also

  • generate — Non-streaming variant; same params
  • chat — Pass stream: true to delegate to stream()

AI Context

package: "@xenterprises/fastify-x-ai"
method: fastify.xai.stream(params)
use-when: Streaming text generation — returns an async-iterable textStream for SSE/streaming responses
params: prompt or messages (required), model, provider, maxTokens, temperature, system
returns: { textStream (AsyncIterable<string>), text (Promise<string>), usage }