fastify-x-ai
stream
Streaming text generation. Accepts the same params as generate, plus lifecycle callbacks, and returns the raw streamText result from the Vercel AI SDK; iterate result.textStream to consume chunks in real time.
Signature
```ts
fastify.xai.stream(params: StreamParams): Promise<StreamResult>

interface StreamParams {
  prompt?: string
  messages?: Array<{ role: "user" | "assistant" | "system"; content: string }>
  system?: string
  provider?: "openai" | "anthropic" | "google"
  model?: string
  maxTokens?: number
  temperature?: number
  tools?: Record<string, ToolDefinition>
  maxSteps?: number
  onChunk?: (event: { chunk: Chunk }) => void
  onFinish?: (event: { text: string; usage: UsageObject }) => void
  onError?: (event: { error: Error }) => void
}
```
Params
| Name | Type | Required | Description |
|---|---|---|---|
| prompt | string | One of prompt/messages | Plain-text prompt |
| messages | Array | One of prompt/messages | Chat message array [{ role, content }] |
| system | string | No | System message |
| provider | string | No | Override default provider |
| model | string | No | Override default model |
| maxTokens | number | No | Override defaultMaxTokens |
| temperature | number | No | Override defaultTemperature |
| tools | object | No | Tool definitions |
| maxSteps | number | No | Max tool execution steps |
| onChunk | function | No | Called for each streamed chunk |
| onFinish | function | No | Called when stream completes with { text, usage } |
| onError | function | No | Called on stream error |
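Every override is optional and falls back to the plugin's configured defaults. A minimal params sketch (the model id is a placeholder, not a real recommendation):

```typescript
// Illustrative per-call overrides; unset fields fall back to the plugin defaults.
const params = {
  prompt: "Summarize the release notes in two sentences.",
  provider: "anthropic" as const, // overrides the default provider
  model: "example-model-id",      // placeholder; use a model your key supports
  maxTokens: 256,
  temperature: 0.3,
};
```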
Returns
The raw StreamResult from streamText. Key properties:
| Property | Type | Description |
|---|---|---|
| textStream | AsyncIterable<string> | Async iterator of text chunks |
| text | Promise<string> | Full text once stream completes |
| usage | Promise<UsageObject> | Token usage once stream completes |
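Since text and usage only resolve after the stream ends, you can drain textStream first and still read the totals afterward. A self-contained sketch that uses a stand-in async generator in place of a real stream result:

```typescript
// Stand-in for result.textStream; any AsyncIterable<string> behaves the same way.
async function* fakeTextStream(): AsyncGenerator<string> {
  yield "Hello, ";
  yield "world!";
}

// Drain the stream chunk by chunk, accumulating the full text.
async function consume(stream: AsyncIterable<string>): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    full += chunk; // in a route, write each chunk to the response here
  }
  return full; // equals what `await result.text` resolves to once the stream ends
}

consume(fakeTextStream()).then((text) => console.log(text)); // → "Hello, world!"
```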
Throws
| Error | When |
|---|---|
| `xAI stream: Either 'prompt' or 'messages' is required` | Neither prompt nor messages provided |
| `xAI: Provider '…' not configured` | Specified or default provider has no API key |
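The missing-input case can also be caught before calling the plugin. A hypothetical pre-flight guard (not part of the plugin's API) that mirrors the first error above:

```typescript
// Hypothetical guard mirroring the plugin's own validation; the plugin throws
// the same message when neither prompt nor messages is provided.
function assertHasInput(params: { prompt?: string; messages?: unknown[] }): void {
  const hasMessages = Array.isArray(params.messages) && params.messages.length > 0;
  if (!params.prompt && !hasMessages) {
    throw new Error("xAI stream: Either 'prompt' or 'messages' is required");
  }
}
```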
Examples
Streaming a plain-text response in a Fastify route

```js
fastify.post("/chat/stream", async (request, reply) => {
  // Take over the raw response so Fastify does not also try to send a reply.
  reply.hijack();
  reply.raw.setHeader("Content-Type", "text/plain; charset=utf-8");

  const result = await fastify.xai.stream({
    messages: request.body.messages,
    system: "You are a helpful assistant.",
  });

  for await (const chunk of result.textStream) {
    reply.raw.write(chunk);
  }
  reply.raw.end();
});
```
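The route above streams plain text. For true Server-Sent Events you would set Content-Type: text/event-stream and wrap each chunk in data: framing; a sketch of the framing helper (the fastify.xai.stream call itself is unchanged):

```typescript
// Wrap one text chunk as an SSE `data:` frame; multi-line chunks need one
// `data:` line per line, per the Server-Sent Events format.
function toSSE(chunk: string): string {
  return chunk
    .split("\n")
    .map((line) => `data: ${line}`)
    .join("\n") + "\n\n";
}

// In the route: reply.raw.setHeader("Content-Type", "text/event-stream");
// then: for await (const chunk of result.textStream) reply.raw.write(toSSE(chunk));
```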
Lifecycle callbacks with token tracking
```js
let totalTokens = 0;

const result = await fastify.xai.stream({
  prompt: "Write a short story about a robot who dreams.",
  onChunk: ({ chunk }) => {
    // Only "text-delta" chunks carry text; other chunk types (tool calls, etc.) are skipped.
    if (chunk.type === "text-delta") process.stdout.write(chunk.textDelta);
  },
  onFinish: ({ text, usage }) => {
    totalTokens = usage.totalTokens;
    console.log(`\nFinished: ${totalTokens} tokens`);
  },
  onError: ({ error }) => {
    fastify.log.error({ err: error }, "Stream error");
  },
});

// Alternatively, await the full text from the promise once the stream completes.
const fullText = await result.text;
```
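tools and maxSteps enable multi-step tool calling. The shape below ({ description, parameters, execute }) follows the common Vercel AI SDK convention, but it is an assumption here; check the plugin's exported ToolDefinition type for the exact contract.

```typescript
// Hypothetical tool definition; the exact ToolDefinition shape comes from the plugin.
const getWeather = {
  description: "Get the current temperature for a city",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
  // Stubbed result so the sketch is self-contained; a real tool would call an API.
  execute: async ({ city }: { city: string }) => ({ city, tempC: 21 }),
};

// Illustrative call; maxSteps lets the model call the tool, read the result,
// and then produce a final streamed answer.
// const result = await fastify.xai.stream({
//   prompt: "What's the weather in Oslo?",
//   tools: { getWeather },
//   maxSteps: 3,
// });
```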
AI Context
package: "@xenterprises/fastify-x-ai"
method: fastify.xai.stream(params)
use-when: Streaming text generation — returns an async-iterable textStream for SSE/streaming responses
params: prompt or messages (required), model, provider, maxTokens, temperature, system
returns: { textStream (AsyncIterable<string>), text (Promise<string>), usage (Promise<UsageObject>) }
