# fastify-x-ai

## getModel / raw

`getModel` returns a raw AI SDK model instance for use with the underlying SDK functions. `fastify.xai.raw` exposes those functions (`generateText`, `streamText`, `embed`, `embedMany`, `cosineSimilarity`) directly for advanced use cases.
## Signatures

```ts
fastify.xai.getModel(provider?: string, model?: string): ModelInstance

fastify.xai.raw: {
  generateText: Function;     // from "ai"
  streamText: Function;       // from "ai"
  embed: Function;            // from "ai"
  embedMany: Function;        // from "ai"
  cosineSimilarity: Function; // from "ai"
}
```
## Params — getModel

| Name | Type | Required | Description |
|---|---|---|---|
| `provider` | `string` | No | Provider name; defaults to `config.defaultProvider` |
| `model` | `string` | No | Model name; defaults to the per-provider default |
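The fallback order for both arguments can be pictured as a small resolution function. This is an illustrative sketch only: `resolveModel` and the `config` shape shown here are hypothetical, not the plugin's internals.

```js
// Hypothetical sketch of the argument resolution described above.
// `config.defaultProvider` and a per-provider `defaultModel` mirror the
// documented defaults; the names are illustrative.
function resolveModel(config, provider, model) {
  const p = provider ?? config.defaultProvider;         // fall back to the default provider
  const m = model ?? config.providers[p]?.defaultModel; // fall back to the per-provider default
  return { provider: p, model: m };
}
```

Calling `fastify.xai.getModel()` with no arguments therefore resolves to the default provider's default model, while explicit arguments always win.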
## Returns

- `getModel` — a model instance compatible with Vercel AI SDK functions.
- `raw` — an object of raw AI SDK functions, already bound to the imported `ai` package.
## Throws

| Error | When |
|---|---|
| `xAI: Provider '…' not configured. Available: …` | Provider is not initialized (no API key found at registration) |
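If you would rather fall back to another provider than fail the request, this error can be caught. A minimal sketch under stated assumptions: the `getModelOrFallback` helper and the message check are illustrative, not part of the plugin.

```js
// Hypothetical guard around getModel: if the requested provider was not
// configured (no API key at registration), retry with a known-good provider.
function getModelOrFallback(xai, provider, fallbackProvider) {
  try {
    return xai.getModel(provider);
  } catch (err) {
    // Matches the documented "Provider '…' not configured" error message.
    if (/not configured/.test(err.message)) {
      return xai.getModel(fallbackProvider);
    }
    throw err; // unrelated errors still propagate
  }
}
```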
## Examples

### getModel — use with a custom AI SDK call

```js
// Equivalently: import { generateText } from "ai" and call it with the model.
fastify.get("/poem", async (request, reply) => {
  const model = fastify.xai.getModel("openai", "gpt-4o-mini");
  const { text } = await fastify.xai.raw.generateText({
    model,
    prompt: "Write a two-line poem about Fastify.",
    maxTokens: 60,
  });
  return { poem: text };
});
```
### raw — advanced streaming with the SDK directly

```js
fastify.post("/advanced-stream", async (request, reply) => {
  const { streamText } = fastify.xai.raw;
  const model = fastify.xai.getModel("anthropic", "claude-sonnet-4-20250514");
  const result = streamText({
    model,
    messages: request.body.messages,
    system: "You are an expert TypeScript developer.",
    maxTokens: 2000,
    onFinish: ({ usage }) => {
      fastify.log.info({ usage }, "stream finished");
    },
  });
  // Take over the raw response so Fastify does not also try to send a reply.
  reply.hijack();
  reply.raw.setHeader("Content-Type", "text/plain; charset=utf-8");
  for await (const chunk of result.textStream) {
    reply.raw.write(chunk);
  }
  reply.raw.end();
});
```
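`raw` also exposes the embedding helpers. As a reference for what `cosineSimilarity` from the `ai` package computes over two equal-length embedding vectors, here is a standalone equivalent (dot product divided by the product of the magnitudes):

```js
// Reference implementation of cosine similarity: dot(a, b) / (|a| * |b|).
// Returns a value in [-1, 1]; 1 means the vectors point the same way.
function cosineSimilarity(a, b) {
  if (a.length !== b.length) throw new Error("vectors must have the same length");
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}
```

In practice you would pass the vectors returned by `fastify.xai.raw.embed` or `embedMany` to `fastify.xai.raw.cosineSimilarity` rather than reimplementing it; the function above only illustrates the math.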
## See Also

- `generate` — High-level wrapper around `generateText`
- `stream` — High-level wrapper around `streamText`
- `createEmbedding` — High-level wrapper around `embed`/`embedMany`
## AI Context

- package: `@xenterprises/fastify-x-ai`
- methods: `fastify.xai.getModel(provider?, model?)` | `fastify.xai.raw` (object with AI SDK primitives)
- use-when: `getModel` — get a raw AI SDK model instance for direct use with AI SDK functions | `raw` — access `generateText`, `streamText`, `embed`, `embedMany` directly
