# node-red-contrib-ai-collection

`@inteli.city/node-red-contrib-ai-collection` · v1.0.2

A Node-RED package for building AI-powered flows. Connect any message to OpenAI, Anthropic, Gemini, OpenRouter, or a local CLI tool — all from a single node.
**prompt.ai** is the main node. It renders a Nunjucks template from the incoming message, sends the result to an AI provider, and writes the response back to any message property.

**ai-provider** is a config node that holds your provider credentials and model choice. One config node can be shared across multiple prompt.ai nodes, so you can switch models or providers in one place.
## Table of Contents
- Install
- Quick Start
- Nodes
- Supported Providers
- How It Works
- Queue and Concurrency
- Timeout
- Template System (Nunjucks)
- Examples
- Common Patterns
- Limitations
## Install

```bash
cd ~/.node-red
npm install @inteli.city/node-red-contrib-ai-collection
```
## Quick Start

- Drag an ai-provider config node into your flow (or create one from inside a prompt.ai node).
- Select a provider (e.g. `openai`), enter your API key, and pick a model (e.g. `gpt-4.1-mini`).
- Drag a prompt.ai node, open it, and select your ai-provider.
- Write a template: `{{ payload }}`
- Connect an inject node → prompt.ai → debug node.
- Deploy and inject a message.

The inject payload becomes the prompt. The AI response is written to `msg.payload` and passed to the next node.
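Exported as Node-RED flow JSON, the resulting flow would look roughly like this. This is a sketch only: the exact property names this package's editor writes are assumptions, so export your own deployed flow to see the real shape.

```json
[
  { "id": "provider1", "type": "ai-provider", "name": "OpenAI",
    "provider": "openai", "model": "gpt-4.1-mini" },
  { "id": "inject1", "type": "inject", "payload": "Hello, world!",
    "payloadType": "str", "wires": [["prompt1"]] },
  { "id": "prompt1", "type": "prompt.ai", "provider": "provider1",
    "template": "{{ payload }}", "wires": [["debug1"]] },
  { "id": "debug1", "type": "debug" }
]
```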
## Nodes

### prompt.ai

The main node. Every incoming message goes through three steps: template rendering, AI call, output writing.
| Field | Description |
|---|---|
| Provider | Which ai-provider config node to use |
| System Prompt | Optional instruction sent to the model before every request (e.g. "You are a helpful assistant") |
| Queue | Max number of requests running in parallel. Extra messages are buffered and processed in order. Default: 1 |
| Temperature | Controls randomness. 0 = deterministic, 2 = very creative. Default: 1 |
| Timeout (s) | Seconds before a request is cancelled. 0 = no timeout. Default: 60 |
| Template | Nunjucks template — the rendered output becomes the user prompt |
| Output | How to parse the AI response: Plain text, Parsed JSON, Parsed YAML, or Parsed XML |
| Property | Where to write the result — any msg, flow, or global property |
**Queue behavior:** if Queue is set to 1, requests are processed one at a time — safe for rate-limited APIs. If set to 3, up to three AI calls run in parallel and the rest wait. Each prompt.ai node has its own independent queue.

**Timeout behavior:** if the AI call takes longer than the configured number of seconds, the request fails with a timeout error and the queue continues with the next message. Setting the timeout to 0 disables it entirely.
### ai-provider
A config node that stores provider credentials. Create one per provider/model combination and reuse it across as many prompt.ai nodes as you need.
| Field | Description |
|---|---|
| Name | Optional label shown in the node status |
| Provider | One of: openai, anthropic, gemini, openrouter, system |
| Model | Model name (e.g. gpt-4.1-mini, claude-haiku-4-5-20251001) |
| API Key | Stored as a credential (never exposed in the flow JSON) |
| Command | CLI binary name — only for system provider |
| Args | Optional CLI flags — only for system provider |
## Supported Providers

### OpenAI

Standard chat completions API. Supports all GPT and o-series models.

- Get your API key at platform.openai.com
- Example models: `gpt-4.1-mini`, `gpt-4.1`, `o4-mini`
### Anthropic

Claude model family via the Messages API.

- Get your API key at console.anthropic.com
- Example models: `claude-haiku-4-5-20251001`, `claude-sonnet-4-6`, `claude-opus-4-6`
### Gemini

Google Gemini via the `@google/genai` SDK.

- Get your API key at aistudio.google.com
- Example models: `gemini-2.0-flash`, `gemini-1.5-pro`
### OpenRouter

Access many providers through a single API endpoint.

- Get your API key at openrouter.ai
- Model format: `provider/model` — e.g. `openai/gpt-4o`, `anthropic/claude-3.5-sonnet`
### System (CLI)

Runs a local command-line tool. The rendered prompt is sent via stdin; the response is read from stdout.

No API key needed. The CLI binary must be installed and available in PATH.

| Command | Args | Tool |
|---|---|---|
| `claude` | `-p` | Claude Code CLI |
| `gemini` | | Gemini CLI |
| `codex` | `exec` | OpenAI Codex CLI |
| `gh` | `copilot suggest` | GitHub Copilot CLI |
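To get a feel for what the system provider does, you can reproduce it in a shell: the rendered prompt is written to the CLI's stdin and the reply is read from stdout. Here `cat` stands in for the real binary so the sketch runs anywhere.

```shell
# The system provider, approximated in shell: prompt in via stdin, reply out via stdout.
# 'cat' is a stand-in; with a real tool this would be e.g.:  ... | claude -p
printf 'Translate to Spanish: Hello, world!' | cat
```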
## How It Works

Each message follows this pipeline:

- **Message arrives** — `msg` enters the prompt.ai node
- **Template renders** — Nunjucks renders the template using `msg` properties
- **Queue** — the request waits if the concurrency limit is reached
- **AI call** — the rendered prompt is sent to the configured provider
- **Parse** — the response is optionally parsed (JSON / YAML / XML)
- **Output** — the result is written to the configured property and `msg` is forwarded
## Queue and Concurrency

Each prompt.ai node maintains its own queue. The Queue field sets the maximum number of concurrent AI calls for that node.

Queue = 2, 5 messages arrive simultaneously:

```
→ 2 start immediately
→ 3 wait
→ as each finishes, the next starts
```

Use Queue = 1 when your API plan has strict rate limits or when order matters. Use higher values to maximize throughput on fast APIs.

The node status shows live queue state: `waiting (executing/limit) provider-name`.
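The queue semantics above can be sketched in a few lines of JavaScript. This is an illustration of the behavior, not the package's actual implementation:

```javascript
// Sketch of a per-node queue with a concurrency limit (the Queue field).
class TaskQueue {
  constructor(limit) {
    this.limit = limit;   // max tasks running in parallel
    this.running = 0;     // currently executing tasks
    this.waiting = [];    // buffered tasks, processed in arrival order
  }
  // Enqueue an async task; resolves/rejects with the task's result.
  push(task) {
    return new Promise((resolve, reject) => {
      this.waiting.push({ task, resolve, reject });
      this._drain();
    });
  }
  // Start buffered tasks until the concurrency limit is reached.
  _drain() {
    while (this.running < this.limit && this.waiting.length > 0) {
      const { task, resolve, reject } = this.waiting.shift();
      this.running++;
      Promise.resolve()
        .then(task)
        .then(resolve, reject)
        .finally(() => { this.running--; this._drain(); });
    }
  }
}
```

With `limit = 2` and five pushed tasks, two run immediately and the rest start as slots free up, matching the 2/3 split in the diagram.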
## Timeout
Timeout is set in seconds on the prompt.ai node (default: 60). Set to 0 to disable.
When a request exceeds the timeout:
- The request fails with a timeout error
- The error is sent to Node-RED's error handler (catch node)
- The queue continues processing the next message
- Note: the underlying provider process or HTTP request is not guaranteed to be cancelled — timeout prevents the flow from stalling, but does not force-terminate the provider
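This "soft" timeout can be sketched with `Promise.race` (an illustration only, assuming the node behaves as described above): the race rejects when the clock wins, but the losing provider promise keeps running in the background, which is exactly why the underlying request is not force-terminated.

```javascript
// Sketch: wrap a provider call in a soft timeout.
function withTimeout(promise, seconds) {
  if (seconds === 0) return promise;  // 0 disables the timeout
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("timeout")), seconds * 1000);
  });
  // Whichever settles first wins; the loser is NOT cancelled.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```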
## Template System (Nunjucks)

Templates use Nunjucks syntax. Message properties are available directly at the root — do not use the `msg.` prefix.

| Variable | Value |
|---|---|
| `{{ payload }}` | `msg.payload` |
| `{{ topic }}` | `msg.topic` |
| `{{ flow.get("key") }}` | Flow context variable |
| `{{ global.get("key") }}` | Global context variable |
| `{{ env.MY_VAR }}` | OS environment variable |

Objects and arrays in `msg.payload` are automatically JSON-stringified before rendering.
Quick reference:

```
{{ payload }}                             output a variable
{{ payload | upper }}                     apply a filter
{% if payload %}...{% endif %}            conditional
{% for item in items %}...{% endfor %}    loop
{% set x = 42 %}                          assign a variable
{{ flow.get("config") }}                  read flow context
{{ env.LANGUAGE }}                        read an environment variable
```
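Putting these constructs together, a Template field might look like the sketch below. The `env.LANGUAGE` variable is assumed to be set in the OS environment, and `topic` is only rendered when the incoming message carries one; note that an object in `payload` arrives as a JSON string, as described above.

```
{% if topic %}Task: {{ topic }}{% endif %}
Respond in {{ env.LANGUAGE }}.
{% if payload %}
Input:
{{ payload }}
{% else %}
No input provided.
{% endif %}
```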
## Examples

### Example 1 — Simple prompt

Template:

```
Translate the following text to Spanish:
{{ payload }}
```

Inject `msg.payload = "Hello, world!"` → the response is the Spanish translation.
### Example 2 — System prompt + structured input

System Prompt: `You are a sentiment classifier. Reply with only: positive, negative, or neutral.`

Template:

```
{{ payload }}
```

Inject any text → the response is a single classification word.
### Example 3 — Structured JSON output

System Prompt: `You are a JSON generator. Always reply with valid JSON only, no explanation.`

Template:

```
Extract the name and city from this text and return JSON:
{{ payload }}
```

Set Output to Parsed JSON → `msg.payload` is a JavaScript object, ready for the next node.
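With Output set to Parsed JSON, downstream nodes receive a real object rather than text. A minimal sketch of what that parsing step amounts to (the `name`/`city` fields simply follow the prompt; an actual model reply may differ):

```javascript
// A reply the model might send for Example 3 (assumed, for illustration):
const reply = '{"name": "Ana", "city": "Madrid"}';

// Parsed JSON output: the next node sees an object, not a string.
const payload = JSON.parse(reply);
const summary = payload.name + " lives in " + payload.city;
```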
## Common Patterns

**Reuse one provider across nodes.** Create a single ai-provider config and select it in multiple prompt.ai nodes. Change the model once to affect all of them.

**Test multiple models side by side.** Create two ai-provider configs with different models. Wire the same inject node to two prompt.ai nodes, each pointing to a different config.

**Rate-limited APIs.** Set Queue = 1. Requests are processed strictly one at a time, with no parallel calls.

**Local / offline AI.** Use the system provider with a CLI tool like `claude` or `gemini`. No API key or internet connection required.

**Dynamic templates.** If `msg.template` is set, it overrides the template defined in the node editor — useful for runtime-generated prompts.
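A sketch of building the template at runtime, e.g. in a Function node placed just before prompt.ai (the `msg.lang` field is hypothetical):

```javascript
// Build a Nunjucks template on the fly; prompt.ai renders msg.template
// instead of the template configured in its editor.
function withDynamicTemplate(msg) {
  const lang = msg.lang || "English";  // hypothetical field on the message
  msg.template = "Translate to " + lang + ": {{ payload }}";
  return msg;
}
```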
## System Prompt Behavior

The System Prompt is applied differently depending on the provider type:

| Provider | Behavior |
|---|---|
| OpenAI, Anthropic, Gemini, OpenRouter | Sent as a structured system instruction via the provider API |
| System (CLI) | Prepended to the prompt text before stdin is written |

CLI example:

System Prompt: `You are a strict JSON generator.`
Template: `Extract fields from: {{ payload }}`

What the CLI receives:

```
You are a strict JSON generator.
Extract fields from: ...
```
## Limitations

- No streaming — responses are returned only when complete
- No retries — failed requests are not automatically retried
- No template includes — templates are single self-contained strings; `{% include %}`, `{% extends %}`, and `{% import %}` are not supported
- No `msg.` prefix — use `{{ payload }}`, not `{{ msg.payload }}`
- CLI timeout is soft — the process is not killed when the prompt.ai timeout fires; it will eventually exit on its own