# fastify-x-ai

## chat

Chat with conversation history — delegates to `generate()` or `stream()` based on the `stream` flag.

Convenience method for multi-turn conversations. Requires a `messages` array; all other params are identical to `generate`. Pass `stream: true` to delegate to `stream()` instead.
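The delegation described above can be sketched as a plain branch. This is illustrative only, not the plugin's actual internals; `deps` stands in for the plugin's own `generate()`/`stream()` methods:

```javascript
// Illustrative sketch of chat()'s behavior: validate `messages`,
// then branch on the `stream` flag. `deps` is a stand-in for the
// plugin's own generate()/stream() methods.
function chat(params, deps) {
  if (!params || !params.messages) {
    throw new Error("xAI chat: 'messages' is required");
  }
  return params.stream ? deps.stream(params) : deps.generate(params);
}
```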
## Signature

```ts
fastify.xai.chat(params: ChatParams): Promise<GenerateResult | StreamResult>

interface ChatParams {
  messages: Array<{ role: "user" | "assistant" | "system"; content: string }>
  system?: string
  stream?: boolean
  provider?: "openai" | "anthropic" | "google"
  model?: string
  maxTokens?: number
  temperature?: number
  tools?: Record<string, ToolDefinition>
  maxSteps?: number
}
```
## Params

| Name | Type | Required | Description |
|---|---|---|---|
| `messages` | `Array` | Yes | Conversation history `[{ role, content }]` |
| `system` | `string` | No | System message (prepended to the conversation) |
| `stream` | `boolean` | No | `true` to get a streaming result; default `false` |
| `provider` | `string` | No | Override default provider |
| `model` | `string` | No | Override default model |
| `maxTokens` | `number` | No | Override `defaultMaxTokens` |
| `temperature` | `number` | No | Override `defaultTemperature` |
| `tools` | `object` | No | Tool definitions |
| `maxSteps` | `number` | No | Max tool execution steps |
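Because `messages` must carry the full history on every call, clients often cap how much they resend. A caller-side helper like the following is one way to do it (hypothetical, not part of the plugin; the character budget is a crude stand-in for real token counting):

```javascript
// Keep the most recent turns under a rough character budget.
// The system message is passed separately via `system`, so it is
// not counted here. Walks backwards so the newest turns survive,
// and always keeps at least the newest message even if over budget.
function trimHistory(messages, maxChars = 8000) {
  const kept = [];
  let total = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    total += messages[i].content.length;
    if (total > maxChars && kept.length > 0) break;
    kept.unshift(messages[i]);
  }
  return kept;
}
```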
## Returns

- Without `stream: true` → same as `generate` result: `{ text, toolCalls, usage, … }`
- With `stream: true` → same as `stream` result: `{ textStream, text, usage, … }`
## Throws

| Error | When |
|---|---|
| `xAI chat: 'messages' is required` | `messages` is missing or falsy |
| `xAI: Provider '…' not configured` | Specified or default provider has no API key |
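Both errors surface as rejected promises from `chat()`. One way a route might translate the documented messages into HTTP statuses (the mapping below is illustrative; only the message strings come from the table above):

```javascript
// Map the documented chat() errors to HTTP status codes:
// a missing `messages` is the caller's fault (400); a provider
// with no API key is a server-side configuration problem (500).
function statusForChatError(err) {
  if (/xAI chat: 'messages' is required/.test(err.message)) return 400;
  if (/not configured/.test(err.message)) return 500;
  return 500; // unknown errors default to a generic server error
}
```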
## Examples

### Multi-turn conversation

```js
fastify.post("/chat", async (request, reply) => {
  const { messages } = request.body; // full history from the client
  const result = await fastify.xai.chat({
    messages,
    system: "You are a helpful assistant. Keep answers concise.",
  });
  return { text: result.text, usage: result.usage };
});
```
### Streaming multi-turn chat

```js
fastify.post("/chat/stream", async (request, reply) => {
  reply.raw.setHeader("Content-Type", "text/plain; charset=utf-8");
  const result = await fastify.xai.chat({
    messages: request.body.messages,
    stream: true,
  });
  for await (const chunk of result.textStream) {
    reply.raw.write(chunk);
  }
  reply.raw.end();
});
```
## See Also

- generate — What `chat()` delegates to when `stream: false`
- stream — What `chat()` delegates to when `stream: true`
- complete — Single-prompt convenience returning a plain string
## AI Context

```yaml
package: "@xenterprises/fastify-x-ai"
method: fastify.xai.chat(params)
use-when: Chat with conversation history — delegates to generate() (non-streaming) or stream() (streaming)
params: messages (required, array of {role, content}), system, stream, provider, model, maxTokens, temperature, tools, maxSteps
returns: non-streaming → { text, toolCalls, usage } | streaming → { textStream, text, usage }
```
