X Enterprises
fastify-x-ai

chat

Chat with conversation history — delegates to generate() or stream() based on the stream flag.

Convenience method for multi-turn conversations. Requires a messages array; all other params are identical to generate. Pass stream: true to delegate to stream() instead.

Signature

fastify.xai.chat(params: ChatParams): Promise<GenerateResult | StreamResult>

interface ChatParams {
  messages: Array<{ role: "user" | "assistant" | "system"; content: string }>
  system?: string
  stream?: boolean
  provider?: "openai" | "anthropic" | "google"
  model?: string
  maxTokens?: number
  temperature?: number
  tools?: Record<string, ToolDefinition>
  maxSteps?: number
}

Params

Name | Type | Required | Description
---- | ---- | -------- | -----------
messages | Array | Yes | Conversation history: [{ role, content }]
system | string | No | System message (prepended to the conversation)
stream | boolean | No | true to get a streaming result; default false
provider | string | No | Override the default provider
model | string | No | Override the default model
maxTokens | number | No | Override defaultMaxTokens
temperature | number | No | Override defaultTemperature
tools | object | No | Tool definitions
maxSteps | number | No | Max tool execution steps
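
Per-call params override the plugin's registration defaults for that one call only. A minimal sketch of the merge semantics, using a `mockChat` stand-in for `fastify.xai.chat` (the default values shown are hypothetical, not the plugin's actual defaults):

```javascript
// Hypothetical plugin defaults, for illustration only.
const pluginDefaults = { provider: "openai", model: "gpt-4o", maxTokens: 1024, temperature: 0.7 };

// mockChat stands in for fastify.xai.chat to make the merge visible;
// the real plugin resolves defaults from its registration options.
function mockChat(params) {
  if (!params.messages) throw new Error("xAI chat: 'messages' is required");
  // Per-call params win over plugin defaults.
  return { ...pluginDefaults, ...params };
}

const resolved = mockChat({
  messages: [{ role: "user", content: "Hi" }],
  provider: "anthropic", // overrides the default provider for this call only
  temperature: 0.2,      // overrides defaultTemperature
});
console.log(resolved.provider, resolved.model, resolved.temperature);
// anthropic gpt-4o 0.2  (unspecified fields keep their defaults)
```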

Returns

  • Without stream: true → same as generate result: { text, toolCalls, usage, … }
  • With stream: true → same as stream result: { textStream, text, usage, … }

Throws

Error | When
----- | ----
xAI chat: 'messages' is required | messages is missing or falsy
xAI: Provider '…' not configured | The specified or default provider has no API key
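
Both errors are thrown before any provider call is made, so they can be caught with an ordinary try/catch. A sketch of mapping them to HTTP statuses; `chatStub` reproduces the documented validation and stands in for `fastify.xai.chat`:

```javascript
// chatStub mirrors the documented 'messages' check; the real method is fastify.xai.chat.
function chatStub(params) {
  if (!params || !params.messages) {
    throw new Error("xAI chat: 'messages' is required");
  }
  return { text: "ok", usage: {} };
}

let status = 200;
let body;
try {
  body = chatStub({ stream: false }); // forgot messages
} catch (err) {
  if (err.message.includes("'messages' is required")) {
    status = 400; // client sent a bad request body
  } else {
    status = 500; // e.g. provider not configured: a server-side setup problem
  }
  body = { error: err.message };
}
console.log(status, body.error); // 400 xAI chat: 'messages' is required
```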

Examples

Multi-turn conversation

fastify.post("/chat", async (request, reply) => {
  const { messages } = request.body; // full history from the client

  const result = await fastify.xai.chat({
    messages,
    system: "You are a helpful assistant. Keep answers concise.",
  });

  return { text: result.text, usage: result.usage };
});

Streaming multi-turn chat

fastify.post("/chat/stream", async (request, reply) => {
  reply.hijack(); // take over the raw response so Fastify doesn't try to send its own
  reply.raw.setHeader("Content-Type", "text/plain; charset=utf-8");

  const result = await fastify.xai.chat({
    messages: request.body.messages,
    stream: true,
  });

  for await (const chunk of result.textStream) {
    reply.raw.write(chunk);
  }

  reply.raw.end();
});
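
The tools and maxSteps params are accepted by chat() as well, but this page does not spell out the ToolDefinition shape. The sketch below assumes a { description, parameters, execute } shape and mocks the tool-driving loop the real plugin would perform; verify both against the plugin's types before relying on them:

```javascript
// Hypothetical ToolDefinition shape; confirm against the plugin's exported types.
const tools = {
  getWeather: {
    description: "Look up current weather for a city",
    parameters: { city: "string" }, // assumed schema format
    execute: ({ city }) => `${city}: 21°C, clear`,
  },
};

// mockChat stands in for fastify.xai.chat and drives a single tool step.
function mockChat({ messages, tools, maxSteps = 1 }) {
  const call = { tool: "getWeather", args: { city: "Oslo" } }; // pretend the model requested this
  const toolResult = tools[call.tool].execute(call.args);
  return { text: `Current conditions: ${toolResult}`, toolCalls: [call], usage: {} };
}

const result = mockChat({
  messages: [{ role: "user", content: "Weather in Oslo?" }],
  tools,
  maxSteps: 3, // cap on tool-execution rounds
});
console.log(result.text); // Current conditions: Oslo: 21°C, clear
```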

See Also

  • generate — What chat() delegates to when stream: false
  • stream — What chat() delegates to when stream: true
  • complete — Single-prompt convenience returning a plain string

AI Context

package: "@xenterprises/fastify-x-ai"
method: fastify.xai.chat(params)
use-when: Chat with conversation history — delegates to generate() (non-streaming) or stream() (streaming)
params: messages (required, array of {role, content}), stream, model, provider, maxTokens, temperature, system
returns: non-streaming → { text, usage } | streaming → { textStream, text, usage }
Copyright © 2026