Nuda Kit uses the Vercel AI SDK as its AI foundation. It is the same SDK that powers Vercel's v0, countless ChatGPT-style chat apps, and thousands of other production AI applications.
The SDK brings four key traits along:

- Provider agnostic: the same code works with OpenAI, Anthropic, and Google
- Streaming first: responses are built to be streamed, not just returned whole
- Type safe: end-to-end TypeScript types for messages, models, and options
- Edge ready: runs in serverless and edge runtimes as well as Node.js
The AI integration in Nuda Kit is split between frontend and backend:
```
┌─────────────────────────────────────────────────────────────┐
│                       Frontend (Nuxt)                        │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ @ai-sdk/vue                                         │    │
│  │ • Chat component with streaming                     │    │
│  │ • Message state management                          │    │
│  │ • Auto-scrolling & UI helpers                       │    │
│  └─────────────────────────────────────────────────────┘    │
└─────────────────────────────────┬─────────────────────────────┘
                                  │ HTTP Stream
                                  ▼
┌─────────────────────────────────────────────────────────────┐
│                      Backend (AdonisJS)                      │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ AI Service                                          │    │
│  │ • Provider abstraction (OpenAI, Anthropic, Google)  │    │
│  │ • Text generation & streaming                       │    │
│  │ • Model configuration                               │    │
│  └─────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────┘
```
Out of the box, Nuda Kit supports three major AI providers:
OpenAI is the most popular choice for AI applications, offering GPT-4 and beyond.
| Model | Best For |
|---|---|
| GPT-5 | Most capable, complex reasoning |
| GPT-4o | Fast, multimodal, great balance |
Anthropic is known for Claude's strong reasoning and safety features.
| Model | Best For |
|---|---|
| Claude 4.5 Sonnet | Long context, nuanced responses |
Google's Gemini models offer strong multimodal capabilities.
| Model | Best For |
|---|---|
| Gemini 2.5 Pro | Complex tasks, large context |
| Gemini 2.5 Flash | Fast responses, cost-effective |
Add your provider API keys to `backend/.env`:

```
# OpenAI
OPENAI_API_KEY=sk-...

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Google AI
GOOGLE_API_KEY=...
```
Generate complete text responses in a single request. This is the best fit when you need the whole output before acting on it.
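As a rough sketch, a one-shot generation with the AI SDK looks like this (Nuda Kit's `AiService` wraps calls of this shape; the prompt here is just an example):

```ts
import { generateText } from 'ai'
import { openai } from '@ai-sdk/openai'

// One request, one complete answer: nothing is streamed.
const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Summarize the difference between REST and GraphQL in two sentences.',
})

console.log(text)
```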
Stream responses token-by-token for real-time UI updates. This is the best fit for chat interfaces and other places where users watch the answer appear.
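A minimal streaming sketch with the SDK's `streamText`; in Nuda Kit the stream is piped back to the frontend over HTTP rather than printed to stdout:

```ts
import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'

const result = await streamText({
  model: openai('gpt-4o'),
  messages: [{ role: 'user', content: 'Explain the event loop in two sentences.' }],
})

// Consume tokens as they arrive instead of waiting for the full answer.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk)
}
```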
Nuda Kit also includes stream smoothing, which paces the raw network chunks so text appears at a steady, natural reading rhythm rather than in bursts.
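The AI SDK ships a `smoothStream` transform that re-emits text word by word with a small delay; assuming that is what backs this feature (the exact settings Nuda Kit uses are not shown here), wiring it up looks roughly like this:

```ts
import { streamText, smoothStream } from 'ai'
import { openai } from '@ai-sdk/openai'

const result = await streamText({
  model: openai('gpt-4o'),
  prompt: 'Write a short paragraph about type safety.',
  // Pace the output word by word so it reads smoothly instead of in bursts.
  experimental_transform: smoothStream({ delayInMs: 20, chunking: 'word' }),
})

// result.textStream is consumed exactly as in the plain streaming example.
```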
The backend includes a dedicated `AiService` class that provides:
| Feature | Description |
|---|---|
| Provider Abstraction | Automatically routes requests to the correct provider based on model name |
| Model Switching | Change models per-request or set a default |
| Generation Options | Configure temperature, top-p, frequency penalty, and more |
| Stream Handling | Built-in support for streaming with optional chunk callbacks |
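Nuda Kit's actual `AiService` code is not reproduced here, but the provider abstraction boils down to mapping a model name onto the right SDK provider. A hypothetical sketch (the function name and prefix rules are assumptions):

```ts
import { openai } from '@ai-sdk/openai'
import { anthropic } from '@ai-sdk/anthropic'
import { google } from '@ai-sdk/google'

// Hypothetical router: choose the provider from the model name prefix.
function resolveModel(modelName: string) {
  if (modelName.startsWith('gpt-')) return openai(modelName)
  if (modelName.startsWith('claude-')) return anthropic(modelName)
  if (modelName.startsWith('gemini-')) return google(modelName)
  throw new Error(`Unsupported model: ${modelName}`)
}
```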
When generating or streaming text, you can configure:
| Option | Description | Default |
|---|---|---|
| `modelName` | Which model to use | `gpt-4o` |
| `temperature` | Creativity (0-2) | Provider default |
| `topP` | Nucleus sampling | Provider default |
| `frequencyPenalty` | Reduce repetition | Provider default |
| `presencePenalty` | Encourage new topics | Provider default |
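Assuming `AiService` forwards these options to the SDK more or less as-is (the parameter names below are the SDK's own, with `model` in place of `modelName`), a tuned request looks like:

```ts
import { generateText } from 'ai'
import { openai } from '@ai-sdk/openai'

const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Brainstorm five taglines for a note-taking app.',
  temperature: 0.9,      // higher = more creative
  topP: 0.95,            // nucleus sampling cutoff
  frequencyPenalty: 0.5, // discourage repeated phrasing
  presencePenalty: 0.3,  // nudge the model toward new topics
})
```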
The frontend uses `@ai-sdk/vue` for seamless Vue integration.
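A rough sketch of a chat page built on the `useChat` composable (AI SDK 4.x API; Nuda Kit's bundled chat component wires up the same pieces, so treat the markup as illustrative):

```vue
<script setup lang="ts">
import { useChat } from '@ai-sdk/vue'

// Point the composable at the backend streaming endpoint
// (adjust the base URL or proxy to match your setup).
const { messages, input, handleSubmit, isLoading } = useChat({
  api: '/ai/stream',
})
</script>

<template>
  <div v-for="message in messages" :key="message.id">
    <strong>{{ message.role }}:</strong> {{ message.content }}
  </div>

  <form @submit.prevent="handleSubmit">
    <input v-model="input" :disabled="isLoading" placeholder="Ask something..." />
  </form>
</template>
```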
The AI streaming endpoint:
| Method | Endpoint | Auth | Description |
|---|---|---|---|
| POST | `/ai/stream` | Required | Stream AI responses |
Request Body:
- `messages`: Array of conversation messages
- `model`: Model name (e.g., `gpt-4o`, `claude-4.5-sonnet`)

The AI service is designed to be extended. Common customizations include changing the default model, adjusting generation options, or wrapping every request with a shared system prompt (sketched below).
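As one hypothetical example of such a customization, here is a wrapper that applies a shared system prompt before delegating to the SDK (the function name and prompt are made up; the real `AiService` may expose a different extension point):

```ts
import { streamText, type CoreMessage } from 'ai'
import { openai } from '@ai-sdk/openai'

const SYSTEM_PROMPT = 'You are a concise, friendly assistant.'

// Hypothetical wrapper: every chat request goes through the same persona.
export function streamWithPersona(messages: CoreMessage[]) {
  return streamText({
    model: openai('gpt-4o'),
    system: SYSTEM_PROMPT,
    messages,
  })
}
```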
AI APIs can fail for many reasons (rate limits, invalid keys, network timeouts, provider outages), so always handle errors instead of letting requests hang or leak stack traces.
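A minimal sketch of defensive error handling around a generation call; the fallback message and logging are illustrative:

```ts
import { generateText } from 'ai'
import { openai } from '@ai-sdk/openai'

async function safeGenerate(prompt: string) {
  try {
    const { text } = await generateText({ model: openai('gpt-4o'), prompt })
    return { ok: true as const, text }
  } catch (error) {
    // Rate limits, invalid keys, and network timeouts all land here.
    console.error('AI request failed', error)
    return { ok: false as const, message: 'The AI service is temporarily unavailable.' }
  }
}
```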
AI API calls can be expensive. Consider capping output length and tracking token usage per request.
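One simple guardrail is to cap output length and log token usage; a sketch using the SDK's `maxTokens` option and `usage` result (AI SDK 4.x naming):

```ts
import { generateText } from 'ai'
import { openai } from '@ai-sdk/openai'

const { usage } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Give me a one-sentence summary of this ticket.',
  maxTokens: 256, // hard cap on response size
})

// usage reports prompt, completion, and total token counts per call.
console.log(`tokens used: ${usage.totalTokens}`)
```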
Protect your AI implementation: the streaming endpoint requires authentication, and provider API keys belong in `backend/.env`, never in frontend code.
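Since the streaming endpoint requires authentication, the route should sit behind the auth middleware. A sketch in AdonisJS 6 router syntax (the controller import path is an assumption):

```ts
import router from '@adonisjs/core/services/router'
import { middleware } from '#start/kernel'

// Lazy-load the controller, as is idiomatic in AdonisJS 6.
const AiController = () => import('#controllers/ai_controller')

// Only authenticated users can reach the streaming endpoint.
router.post('/ai/stream', [AiController, 'stream']).use(middleware.auth())
```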