# Configuration

Configure the CLI with `nudge.config.json`.
## Config File

Create `nudge.config.json` in your project root, or run `npx @nudge-ai/cli init` to generate one interactively:
```json
{
  "ai": {
    "provider": "openrouter",
    "apiKeyEnvVar": "OPENROUTER_API_KEY",
    "model": "anthropic/claude-sonnet-4"
  }
}
```

## Options
| Option | Default | Description |
|---|---|---|
| `generatedFile` | `src/prompts.gen.ts` | Output path for the generated file |
| `promptFilenamePattern` | `**/*.prompt.{ts,js}` | Glob pattern matching prompt files |
| `ai.provider` | — | `"openai"`, `"openrouter"`, or `"local"` |
| `ai.apiKeyEnvVar` | — | Name of the environment variable holding the API key (optional for `"local"`) |
| `ai.model` | — | Model identifier |
| `ai.baseUrl` | — | Custom API base URL (required for `"local"`) |
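Putting the options together, a complete config might look like the sketch below. The paths and glob are just the defaults from the table (shown here explicitly for illustration), the top-level placement of `generatedFile` and `promptFilenamePattern` follows the dotted names in the table, and the model choice is arbitrary:

```json
{
  "generatedFile": "src/prompts.gen.ts",
  "promptFilenamePattern": "**/*.prompt.{ts,js}",
  "ai": {
    "provider": "openrouter",
    "apiKeyEnvVar": "OPENROUTER_API_KEY",
    "model": "anthropic/claude-sonnet-4"
  }
}
```

Since the defaults already cover `generatedFile` and `promptFilenamePattern`, you only need to set them when your project layout differs.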
## Providers

### OpenRouter — openrouter.ai
```json
{
  "ai": {
    "provider": "openrouter",
    "apiKeyEnvVar": "OPENROUTER_API_KEY",
    "model": "anthropic/claude-sonnet-4"
  }
}
```

### OpenAI — platform.openai.com
```json
{
  "ai": {
    "provider": "openai",
    "apiKeyEnvVar": "OPENAI_API_KEY",
    "model": "gpt-4o"
  }
}
```

### Local

For local models (llama.cpp, Ollama, etc.) that expose an OpenAI-compatible API:
```json
{
  "ai": {
    "provider": "local",
    "baseUrl": "http://localhost:11434/v1",
    "model": "llama2"
  }
}
```

## CLI Flags
```shell
npx @nudge-ai/cli generate --force   # Force regenerate all prompts
```