fastify-x-ai
complete
Simple text completion — accepts a prompt string and returns the generated text directly.
The simplest generation method: it takes a prompt string, delegates to generate(), and returns result.text directly, so there is no result object to destructure.
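The delegation can be pictured with a minimal sketch. This is not the plugin's actual source — the `generate` stub and its result shape here are assumptions based on the description above:

```typescript
// Minimal sketch of the described delegation — not the plugin's real internals.
// Assumes generate() resolves to a result object carrying a `text` field.
type GenerateResult = { text: string; usage?: { totalTokens: number } };

async function generate(opts: { prompt: string }): Promise<GenerateResult> {
  // Stand-in for the real provider call.
  return { text: `echo: ${opts.prompt}` };
}

async function complete(prompt: string, options: object = {}): Promise<string> {
  const result = await generate({ prompt, ...options });
  return result.text; // complete() unwraps the result for you
}
```

The trade-off: `complete()` is convenient when you only want the text, but it discards token usage and any tool results — reach for `generate()` when you need those.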
Signature
```ts
fastify.xai.complete(prompt: string, options?: CompleteOptions): Promise<string>

interface CompleteOptions {
  provider?: "openai" | "anthropic" | "google"
  model?: string
  maxTokens?: number
  temperature?: number
  system?: string
}
```
Params
| Name | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | The text prompt |
| `options.provider` | string | No | Override default provider |
| `options.model` | string | No | Override default model |
| `options.maxTokens` | number | No | Override defaultMaxTokens |
| `options.temperature` | number | No | Override defaultTemperature |
| `options.system` | string | No | System message |
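Per-call options take precedence over the plugin's configured defaults. A hypothetical sketch of the merge (the defaults object and merge order here are assumptions, not the plugin's source):

```typescript
interface CompleteOptions {
  provider?: "openai" | "anthropic" | "google";
  model?: string;
  maxTokens?: number;
  temperature?: number;
  system?: string;
}

// Hypothetical plugin-level defaults (illustrative values only).
const pluginDefaults = { provider: "openai", maxTokens: 1024, temperature: 0.7 };

function resolveOptions(options: CompleteOptions = {}) {
  // Later spreads win, so per-call values override the defaults.
  return { ...pluginDefaults, ...options };
}
```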
Returns
Promise<string> — the generated text.
Throws
| Error | When |
|---|---|
| xAI complete: 'prompt' is required | prompt is empty, null, or undefined |
| xAI: Provider '…' not configured | Specified or default provider has no API key |
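The first error implies an up-front guard on the prompt argument. A hypothetical sketch of that validation (the function name is illustrative; only the error message comes from the table above):

```typescript
// Hypothetical guard matching the documented error message.
function assertPrompt(prompt: unknown): asserts prompt is string {
  if (typeof prompt !== "string" || prompt.length === 0) {
    throw new Error("xAI complete: 'prompt' is required");
  }
}
```

In a route handler, you would typically catch this and reply with a 400 rather than letting it surface as a 500.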
Examples
Basic usage
```ts
const text = await fastify.xai.complete("Write a haiku about the ocean.");
console.log(text);
// "Waves crash endlessly / Salt and foam kiss the warm shore / Peace in every tide"
```
With provider and model override
```ts
fastify.post("/summarize", async (request, reply) => {
  const { article } = request.body;
  const summary = await fastify.xai.complete(
    `Summarize the following article in three bullet points:\n\n${article}`,
    {
      provider: "anthropic",
      model: "claude-sonnet-4-20250514",
      maxTokens: 300,
    },
  );
  return { summary };
});
```
See Also
- generate — what `complete()` calls internally; use when you need token usage or tool results
- chat — multi-turn conversation with message history
AI Context
```yaml
package: "@xenterprises/fastify-x-ai"
method: fastify.xai.complete(prompt, options?)
use-when: Simplest text generation — takes a prompt string and returns just the text string
params: prompt (required, string), provider, model, maxTokens, temperature, system
returns: string (text only)
```
