If you're building AI-powered applications with Node.js or TypeScript, the superprompts npm package lets you fetch and manage your system prompts at runtime instead of hardcoding them. This guide walks you through the setup from scratch.
## Prerequisites
Before you start, you'll need:
- A SuperPrompts account (free tier works)
- A project with at least one prompt created in the SuperPrompts dashboard
- An API key for your project (found in Project Settings > API)
- Node.js 18+ installed
## Installation

Install the package with your preferred package manager:

```bash
# npm
npm install superprompts

# yarn
yarn add superprompts

# pnpm
pnpm add superprompts
```
## Basic setup
Initialize the client with your project API key:
```ts
import { SuperPrompts } from 'superprompts';

const sp = new SuperPrompts({
  apiKey: process.env.SUPERPROMPTS_API_KEY
});
```
Always store your API key in an environment variable. Never commit it to your codebase.
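A common way to enforce this is to fail fast at startup rather than at the first API call. Here's a minimal sketch using only Node's `process.env` (the `requireEnv` helper name is illustrative, not part of the SDK):

```ts
// Throws immediately at startup if a required variable is missing,
// instead of failing later on the first SuperPrompts request.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage:
// const sp = new SuperPrompts({ apiKey: requireEnv('SUPERPROMPTS_API_KEY') });
```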
## Fetching a prompt
Each prompt in SuperPrompts has a unique ID visible in the dashboard. Use it to fetch the prompt content:
```ts
const prompt = await sp.getPrompt('your-prompt-id');
console.log(prompt.content);
// "You are a helpful customer support agent for Acme Corp..."
```
The content field contains the full prompt text, assembled from all sections you've defined in the dashboard.
## Using with OpenAI
Here's a complete example using the OpenAI SDK:
```ts
import { SuperPrompts } from 'superprompts';
import OpenAI from 'openai';

const sp = new SuperPrompts({
  apiKey: process.env.SUPERPROMPTS_API_KEY
});

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

async function chat(userMessage: string) {
  const prompt = await sp.getPrompt('customer-support-agent');
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [
      { role: 'system', content: prompt.content },
      { role: 'user', content: userMessage }
    ]
  });
  return response.choices[0].message.content;
}
```
## Using with the Vercel AI SDK
If you're using the Vercel AI SDK (which SuperPrompts itself is built on), the integration is just as straightforward:
```ts
import { SuperPrompts } from 'superprompts';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const sp = new SuperPrompts({
  apiKey: process.env.SUPERPROMPTS_API_KEY
});

async function generate(userMessage: string) {
  const prompt = await sp.getPrompt('writing-assistant');
  const { text } = await generateText({
    model: openai('gpt-4'),
    system: prompt.content,
    prompt: userMessage
  });
  return text;
}
```
## Using with Anthropic
Same pattern, different provider:
```ts
import { SuperPrompts } from 'superprompts';
import Anthropic from '@anthropic-ai/sdk';

const sp = new SuperPrompts({
  apiKey: process.env.SUPERPROMPTS_API_KEY
});

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY
});

async function chat(userMessage: string) {
  const prompt = await sp.getPrompt('code-reviewer');
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    system: prompt.content,
    messages: [
      { role: 'user', content: userMessage }
    ]
  });
  // Content blocks are a union type, so narrow before reading .text
  const block = response.content[0];
  return block.type === 'text' ? block.text : '';
}
```
## Using the REST API directly
If you're not using Node.js, or prefer raw HTTP, the REST API works with any language:
```bash
curl -X GET https://api.superprompts.app/v1/prompts/your-prompt-id \
  -H "Authorization: Bearer YOUR_API_KEY"
```
Response:
```json
{
  "id": "your-prompt-id",
  "name": "Customer Support Agent",
  "content": "You are a helpful customer support agent...",
  "version": 12,
  "updated_at": "2026-04-08T10:30:00Z"
}
```
This works from Python, Go, Ruby, or any HTTP client.
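If you want the raw HTTP route in Node.js without the SDK, the built-in `fetch` (Node 18+) is enough. A sketch; the response interface below mirrors the JSON shown above, and the injectable `fetchFn` parameter is just an illustration to make the function easy to stub in tests:

```ts
// Shape of the REST API response shown above.
interface PromptResponse {
  id: string;
  name: string;
  content: string;
  version: number;
  updated_at: string;
}

// Fetch a prompt over raw HTTP. `fetchFn` defaults to the global fetch
// but can be swapped for a stub in tests.
async function fetchPrompt(
  promptId: string,
  apiKey: string,
  fetchFn: typeof fetch = fetch
): Promise<PromptResponse> {
  const res = await fetchFn(
    `https://api.superprompts.app/v1/prompts/${promptId}`,
    { headers: { Authorization: `Bearer ${apiKey}` } }
  );
  if (!res.ok) {
    throw new Error(`Prompt fetch failed with status ${res.status}`);
  }
  return res.json() as Promise<PromptResponse>;
}
```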
## Caching strategy
For production applications, you'll want to cache prompts to minimize API calls and handle service interruptions:
```ts
import { SuperPrompts } from 'superprompts';

const sp = new SuperPrompts({
  apiKey: process.env.SUPERPROMPTS_API_KEY
});

const cache = new Map<string, { content: string; fetchedAt: number }>();
const CACHE_TTL = 60_000; // 1 minute

async function getPromptCached(promptId: string): Promise<string> {
  const cached = cache.get(promptId);
  if (cached && Date.now() - cached.fetchedAt < CACHE_TTL) {
    return cached.content;
  }
  try {
    const prompt = await sp.getPrompt(promptId);
    cache.set(promptId, {
      content: prompt.content,
      fetchedAt: Date.now()
    });
    return prompt.content;
  } catch (error) {
    // Fall back to cached version if available
    if (cached) {
      console.warn('Failed to fetch prompt, using cached version');
      return cached.content;
    }
    throw error;
  }
}
```
This gives you sub-millisecond prompt access for cached hits, automatic refresh every minute, and graceful fallback to stale cache if the API is unreachable.
## Environment configuration
A clean setup uses different API keys per environment:
```bash
# .env.local (development)
SUPERPROMPTS_API_KEY=sp_dev_...

# .env.staging
SUPERPROMPTS_API_KEY=sp_staging_...

# .env.production
SUPERPROMPTS_API_KEY=sp_prod_...
```
Each environment can point to the same or different projects in SuperPrompts. This lets you test prompt changes in staging before they hit production.
## Next steps
Once you're fetching prompts at runtime, you unlock the full SuperPrompts workflow:
- Edit prompts in the dashboard without touching code
- View version history and diff changes
- Roll back to any previous version instantly
- Run evaluations to test prompt quality before publishing
- Collaborate with your team using organization-level access
The key mindset shift is that prompts are no longer part of your codebase. They're a managed resource with their own lifecycle, version history, and deployment process. This decoupling is what lets you iterate on prompts at their own pace, which is usually much faster than your code release cycle.
Create your free account at superprompts.app and have your first prompt managed via API in under 5 minutes.