@inteli.city/node-red-contrib-ai-collection 1.1.0

A Node-RED package for building AI-powered flows. Connect any message to OpenAI, Anthropic, Gemini, OpenRouter, or a local CLI tool — all from a single node.


prompt.ai renders a Nunjucks template from the incoming message, sends it to a text AI provider, and writes the response back to any message property. It can optionally call tools from an MCP server before returning the final answer.

image.ai renders a Nunjucks template and sends it to an image generation provider, returning a URL or base64 data URI.

Each node uses a dedicated config node (ai-provider for text, image-provider for images) that holds credentials and model selection. One config node can be shared across multiple nodes.


Install

cd ~/.node-red
npm install @inteli.city/node-red-contrib-ai-collection

Quick Start

Text generation

  1. Drag an ai-provider config node into your flow (or create one from inside a prompt.ai node).
  2. Select a provider (e.g. openai), enter your API key, and pick a model (e.g. gpt-4.1-mini).
  3. Drag a prompt.ai node, open it, and select your ai-provider.
  4. Write a template:
    {{ payload }}
    
  5. Connect an inject node → prompt.ai → debug node.
  6. Deploy and inject a message.

The inject payload becomes the prompt. The AI response is written to msg.payload.
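
Any other property set on the message is available to the template as well. For example, a hypothetical upstream function node could prepare both the payload and a topic before the prompt.ai node:

// Hypothetical upstream function node — anything set on msg here is
// available at the template root, e.g. {{ payload }} and {{ topic }}.
msg.topic = "greeting";
msg.payload = "Hello, world!";
return msg;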

Image generation

  1. Drag an image-provider config node, select a provider (e.g. openai), enter your API key, and pick a model (e.g. gpt-image-1).
  2. Drag an image.ai node, open it, and select your image-provider.
  3. Write a prompt template:
    {{ payload }}
    
  4. Connect an inject node → image.ai → debug node.
  5. Deploy and inject a message like "A futuristic city at sunset".

The generated image is written to msg.payload as a URL or base64 data URI.


Nodes

prompt.ai

The main text generation node. Every incoming message goes through three steps: template rendering, AI call, output writing.

Field          Description
Provider       Which ai-provider config node to use
System Prompt  Optional instruction sent to the model before every request
Queue          Max concurrent AI calls. Extra messages are buffered in order. Default: 1
Temperature    Controls randomness. 0 = deterministic, 2 = very creative. Default: 1
Timeout (s)    Seconds before a request is cancelled. 0 = no timeout. Default: 60
Template       Nunjucks template — rendered output becomes the user prompt
Output         How to parse the AI response: Plain text, Parsed JSON, Parsed YAML, or Parsed XML
Property       Where to write the result — any msg, flow, or global property

image.ai

Image generation node. Renders a Nunjucks prompt template and calls the configured image provider.

Field        Description
Provider     Which image-provider config node to use
Queue        Max concurrent image generation calls. Default: 1
Timeout (s)  Seconds before a request is cancelled. 0 = no timeout. Default: 0
Image Size   Output dimensions. Autocomplete shows only sizes valid for the selected provider and model. Default: 1024x1024
Images       Number of images to generate per request. Default: 1
Prompt       Nunjucks template — rendered output is the image description

Output:

msg.payload  // first image — URL or data:image/...;base64,... URI
msg.images   // array of all images (same format)
msg.raw      // raw provider response

Send msg.stop = true to cancel all running and queued requests on the node.
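
To consume the output downstream, a function node can turn a base64 data URI into a binary buffer. A minimal sketch (the target filename is an assumption; URL-style results would need an http request node instead):

// Hypothetical downstream function node: decode a data:image/...;base64 URI
// from msg.payload into a Buffer, e.g. for a file node.
const match = /^data:(image\/[\w+.-]+);base64,(.+)$/.exec(msg.payload);
if (match) {
    msg.filename = "/tmp/generated." + match[1].split("/")[1]; // assumed path
    msg.payload = Buffer.from(match[2], "base64");             // binary image data
}
return msg;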


ai-provider

Config node for text generation. Stores credentials for a single provider/model combination. Reuse across multiple prompt.ai nodes.

Primary settings:

Field     Description
Name      Optional label shown in the node status
Provider  One of: openai, anthropic, gemini, openrouter, system
Model     Model name (e.g. gpt-4.1-mini, claude-haiku-4-5-20251001)
API Key   Stored as a credential — never exposed in the flow JSON
Command   CLI binary name — only for system provider
Args      Optional CLI flags — only for system provider

Advanced — MCP Tools (optional, disabled by default):

Enables calling tools from an MCP server during each request. Supported for OpenAI, Anthropic, Gemini, and OpenRouter. Not available for the System (CLI) provider.

Field               Description
MCP server command  Command used to start the MCP server (e.g. npx -y @example/mcp-server)
MCP server args     Optional extra arguments passed to the server command
Allowed tools       Comma-separated list of tool names to expose. Leave empty to allow all tools the server offers
MCP tool usage      optional — model may or may not call a tool; required — a tool call is expected
Timeout (s)         Maximum seconds allowed for each MCP operation (connection + tool execution). Default: 30
If MCP fails        fallback — continue without the tool result; fail — surface the error to the flow

image-provider

Config node for image generation. Stores credentials for an image provider. Only providers that support image generation are available — it is not possible to accidentally select a text-only provider like Anthropic.

Field     Description
Name      Optional label
Provider  One of: openai, stability, replicate, gemini
Model     Model name (e.g. gpt-image-1, gemini-2.5-flash-image)
API Key   Stored as a credential — never exposed in the flow JSON

Supported Providers

Text providers

OpenAI

Standard chat completions API. Supports all GPT and o-series models.

Anthropic

Claude model family via the Messages API.

  • API key: console.anthropic.com
  • Example models: claude-haiku-4-5-20251001, claude-sonnet-4-6, claude-opus-4-6

Gemini

Google Gemini via the @google/genai SDK.

OpenRouter

Access many providers through a single API endpoint.

  • API key: openrouter.ai
  • Model format: provider/model — e.g. openai/gpt-4o, anthropic/claude-3.5-sonnet

System (CLI)

Runs a local command-line tool. The rendered prompt is sent via stdin; the response is read from stdout. No API key needed.

Command  Args             Tool
claude   -p               Claude Code CLI
gemini                    Gemini CLI
codex    exec             OpenAI Codex CLI
gh       copilot suggest  GitHub Copilot CLI
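
To sanity-check a CLI provider outside Node-RED, you can reproduce what the node does by hand — rendered prompt on stdin, response on stdout (assuming the Claude Code CLI is installed):

echo "Translate to French: good morning" | claude -p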

Image providers

OpenAI

Uses the Images API (client.images.generate).

Model        Valid sizes
gpt-image-1  1024x1024, 1536x1024, 1024x1536
dall-e-3     1024x1024, 1792x1024, 1024x1792
dall-e-2     256x256, 512x512, 1024x1024

Returns URLs or base64 data URIs depending on the model.

Stability AI

Uses the REST API v1 (/v1/generation/{engine}/text-to-image).

Model                          Valid sizes
stable-diffusion-xl-1024-v1-0  1024x1024, 1152x896, 896x1152, 1216x832, 832x1216, 1344x768, 768x1344, 1536x640, 640x1536
stable-diffusion-v1-6          512x512, 768x768, 1024x1024

Returns base64 data URIs.

Replicate

Uses the Predictions API. Supports any text-to-image model hosted on Replicate.

  • API key: replicate.com
  • Set Model to owner/name or owner/name:version
  • Common sizes: 512x512, 768x768, 1024x1024 (exact support depends on the model)

Returns image URLs.

Gemini

Uses generateContent with responseModalities: ["IMAGE"] and imageConfig.

Model                           Notes
gemini-2.5-flash-image          Stable, recommended default
gemini-3-pro-image-preview      Higher quality, preview
gemini-3.1-flash-image-preview  Fast, preview

  • Valid sizes: 1024x1024, 1024x768, 768x1024, 1024x576, 576x1024, 2048x2048, 1920x1080, 1080x1920
  • Size is converted to the nearest supported imageSize class (1K/2K/4K) and aspectRatio automatically, as sketched below.

Returns base64 data URIs.
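
The size conversion mentioned above can be pictured roughly like this — a sketch of assumed logic, not the package's actual code:

// Assumed mapping from a WxH size to Gemini's imageConfig fields.
function gcd(a, b) { return b ? gcd(b, a % b) : a; }
function toGeminiImageConfig(width, height) {
    const d = gcd(width, height);
    const aspectRatio = width / d + ":" + height / d;  // 1920x1080 -> "16:9"
    const longest = Math.max(width, height);
    const imageSize = longest > 2048 ? "4K" : longest > 1024 ? "2K" : "1K";
    return { aspectRatio, imageSize };
}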


How It Works

Text (prompt.ai) — without MCP

Each message follows this pipeline:

  1. Message arrives — msg enters the prompt.ai node
  2. Template renders — Nunjucks renders the template using msg properties
  3. Queue — request waits if concurrency limit is reached
  4. AI call — rendered prompt is sent to the configured provider
  5. Parse — response is optionally parsed (JSON / YAML / XML)
  6. Output — result is written to the configured property and msg is forwarded

Text (prompt.ai) — with MCP enabled

When MCP is enabled on the ai-provider config node, two additional steps run inside the AI call:

  1. Template renders — same as above
  2. Queue — same as above
  3. MCP connect — a session is opened with the MCP server
  4. First LLM call — prompt is sent with available tool definitions
  5. Branch — if the model returns no tool call, the response is returned normally; if a tool call is requested:
    • Tool is validated against the allowed list
    • Tool is executed via the MCP server
    • Second LLM call — original prompt + tool result sent to the model
  6. Parse and output — same as above; msg.raw.mcp contains execution metadata

The MCP session is opened and closed per request — no state is shared between concurrent messages.

Image (image.ai)

  1. Message arrives — msg enters the image.ai node
  2. Template renders — Nunjucks renders the prompt template
  3. Queue — request waits if concurrency limit is reached
  4. Provider call — rendered prompt + size + n are sent to the provider
  5. Normalize — images are normalized to URL or base64 data URI
  6. Output — msg.payload, msg.images, and msg.raw are set and forwarded

MCP Tools

MCP (Model Context Protocol) lets a prompt.ai node call tools from an external MCP server during a single request. This is configured on the ai-provider config node under Advanced.

How it works

Each request that reaches the AI call goes through this flow:

Rendered prompt
    ↓
First LLM call (with tool definitions from MCP server)
    ↓
Model returns text only → final response
Model requests a tool  → execute tool → second LLM call → final response

The system executes at most one tool per request. There are no loops, no chaining, and no multi-step reasoning — the flow is always linear and predictable.

Supported providers

MCP tool calling is supported for: OpenAI, Anthropic, Gemini, OpenRouter.

It is not available for the System (CLI) provider.

MCP server connection

The MCP server is started via a stdio transport — the node spawns the command you provide and communicates over stdin/stdout. The server is started and stopped per request.

MCP server command:  npx -y @modelcontextprotocol/server-filesystem
MCP server args:     /tmp

Any MCP server that implements the stdio transport is compatible.

Allowed tools

By default all tools the server exposes are made available to the model. To restrict which tools are offered, set Allowed tools to a comma-separated list:

read_file, list_directory

The model can only call tools in this list. Attempts to call unlisted tools are blocked.

Tool usage mode

Mode      Behaviour
optional  The model may or may not call a tool. Works for most use cases.
required  A tool call is expected. If the model returns text only, the behaviour depends on If MCP fails.

Error handling

Setting   Behaviour
fallback  If anything fails (connection, tool execution, timeout), the node continues without the tool and returns the first LLM response.
fail      Any failure surfaces as a Node-RED error, catchable with a catch node.
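
With fail, the catch node receives the message with msg.error set, so downstream logic can react. For example, a hypothetical function node wired after the catch:

// Hypothetical function node after a catch node; msg.error is set by Node-RED.
if (msg.error && /timeout/i.test(msg.error.message)) {
    node.warn("MCP request timed out: " + msg.error.message);
    // e.g. reroute to a provider without MCP, or retry via a delay node
}
return msg;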

Observability — msg.raw.mcp

Every request that goes through the MCP path sets msg.raw.mcp with structured metadata:

msg.raw.mcp = {
    mcpEnabled:    true,
    toolCalled:    true,          // whether a tool was actually called
    toolName:      "read_file",   // tool name (null if no call)
    toolArguments: { path: "/tmp/data.txt" },
    toolResult:    "file contents...",
    toolError:     null,          // error message if tool failed
    usedToolData:  true,          // whether the second LLM call was made
    timing: {
        connect:       42,        // ms to connect to MCP server
        firstCall:     310,       // ms for first LLM call
        toolExecution: 18,        // ms to execute the tool
        secondCall:    280,       // ms for second LLM call
        total:         650        // ms end-to-end
    }
}

Use this for debugging, logging, or conditional downstream logic.
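
For example, a hypothetical function node with two outputs can route messages depending on whether a tool actually ran:

// Hypothetical function node (2 outputs): tool-call messages on output 1,
// direct answers on output 2.
const mcp = msg.raw && msg.raw.mcp;
if (mcp && mcp.toolCalled) {
    node.warn("tool " + mcp.toolName + " took " + mcp.timing.toolExecution + " ms");
    return [msg, null];
}
return [null, msg];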

Constraints

  • Maximum one tool call per request — no loops, no chaining
  • MCP session is stateless — no memory between requests
  • No shared state between concurrent requests running on the same node

Queue and Concurrency

Each node maintains its own queue. The Queue field sets the maximum number of concurrent AI calls for that node.

Queue = 2, 5 messages arrive simultaneously:
→ 2 start immediately
→ 3 wait
→ as each finishes, the next starts

Use Queue = 1 when your API plan has strict rate limits or when order matters. Use higher values to maximize throughput on fast APIs.

The node status shows live queue state: waiting (executing/limit) provider-name.
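
Conceptually, the queue behaves like a small FIFO semaphore — a sketch of the behaviour, not the package's actual implementation:

// Conceptual sketch: at most `limit` tasks run; the rest wait in arrival order.
class NodeQueue {
    constructor(limit) { this.limit = limit; this.running = 0; this.waiting = []; }
    async run(task) {
        while (this.running >= this.limit) {
            // buffer, then re-check the limit once a slot frees
            await new Promise(resolve => this.waiting.push(resolve));
        }
        this.running++;
        try {
            return await task();              // the AI call
        } finally {
            this.running--;
            const next = this.waiting.shift();
            if (next) next();                 // wake the next buffered request
        }
    }
}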


Timeout

Timeout is set in seconds on the node (default for prompt.ai: 60, default for image.ai: 0). Set to 0 to disable.

When a request exceeds the timeout:

  • The request fails with a timeout error
  • The error is sent to Node-RED's error handler (catch node)
  • The queue continues processing the next message
  • Note: the underlying HTTP request is not force-terminated — timeout prevents the flow from stalling, but the provider may still complete the request on its end

When MCP is enabled, the MCP Timeout field on the ai-provider config controls the per-operation timeout (connection and tool execution). The node-level timeout still governs the entire request end-to-end.
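
In effect, the timeout is a race against a timer, which is why the provider may still finish the request on its side — a conceptual sketch:

// Conceptual sketch: the losing HTTP request is discarded, not aborted.
function withTimeout(promise, seconds) {
    if (seconds === 0) return promise;        // 0 = no timeout
    let timer;
    const timeout = new Promise((_, reject) => {
        timer = setTimeout(() => reject(new Error("timeout after " + seconds + "s")),
                           seconds * 1000);
    });
    return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}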


Template System (Nunjucks)

Templates use Nunjucks syntax. Message properties are available directly at the root — do not use the msg. prefix.

Variable                 Value
{{ payload }}            msg.payload
{{ topic }}              msg.topic
{{ flow.get("key") }}    Flow context variable
{{ global.get("key") }}  Global context variable
{{ env.MY_VAR }}         OS environment variable

Objects and arrays in msg.payload are automatically JSON-stringified before rendering.

Quick reference:

{{ payload }}                          output a variable
{{ payload | upper }}                  apply a filter
{% if payload %}...{% endif %}         conditional
{% for item in items %}...{% endfor %} loop
{% set x = 42 %}                       assign a variable
{{ flow.get("config") }}               read flow context
{{ env.LANGUAGE }}                     read environment variable
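
Putting a few of these together — a hypothetical template that assumes msg.topic and msg.items are set upstream:

Summarize the following {{ topic | default("notes") }}:

{% for item in items %}
- {{ item }}
{% endfor %}

{% if env.LANGUAGE %}Answer in {{ env.LANGUAGE }}.{% endif %}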

Examples

Example 1 — Simple text prompt

Translate the following text to Spanish:

{{ payload }}

Inject msg.payload = "Hello, world!" → response is the Spanish translation.


Example 2 — Structured JSON output

System Prompt: You are a JSON generator. Always reply with valid JSON only, no explanation.

Template:

Extract the name and city from this text and return JSON:

{{ payload }}

Set Output to Parsed JSON → msg.payload is a JavaScript object, ready for the next node.
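
A hypothetical next node can then use the fields directly (names match the extraction prompt above):

// Downstream function node: msg.payload is already a parsed object.
const { name, city } = msg.payload;
msg.payload = name + " lives in " + city;
return msg;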


Example 3 — Image from a text prompt

image-provider: openai, model: gpt-image-1

Prompt template:

{{ payload }}

Inject msg.payload = "A futuristic city at sunset" → msg.payload is a base64 data URI of the generated image.


Example 4 — Dynamic image prompt with context

A {{ style }} portrait of {{ subject }}, professional lighting, high detail

Inject:

{ "style": "watercolor", "subject": "an astronaut" }

→ Generates a watercolor portrait of an astronaut.


Example 5 — MCP tool call

ai-provider: openai, model: gpt-4.1-mini
MCP server command: npx -y @modelcontextprotocol/server-filesystem
MCP server args: /tmp

System Prompt: You are a helpful assistant with access to the local filesystem.

Template:

{{ payload }}

Inject msg.payload = "What files are in /tmp?" → the model calls list_directory, receives the result, and returns a natural-language answer.

msg.raw.mcp.toolName → "list_directory"
msg.raw.mcp.usedToolData → true


Common Patterns

Reuse one provider across nodes
Create a single config node and select it in multiple nodes. Change the model once to affect all of them.

Test multiple models side by side
Create two config nodes with different models. Wire the same inject to two nodes, each pointing to a different config.

Rate-limited APIs
Set Queue = 1. Requests are processed strictly one at a time with no parallel calls.

Local / offline AI
Use the system provider in ai-provider with a CLI tool like claude or gemini. No API key or internet required.

Dynamic templates
If msg.template is set at runtime, it overrides the template defined in the node editor.
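
A minimal sketch of such an override in an upstream function node:

// Hypothetical function node: msg.template takes precedence over the
// template configured in the prompt.ai editor.
msg.template = "Summarize in one sentence:\n\n{{ payload }}";
return msg;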

Stop all in-flight image requests
Send msg.stop = true to an image.ai node to cancel all running and queued requests immediately.

Restrict MCP tools per provider
Configure two ai-provider nodes pointing to the same MCP server but with different Allowed tools lists. Use one for read-only tool access and another for write access.

Debug MCP execution
Connect a debug node and inspect msg.raw.mcp to see which tool was called, its arguments, the result, and timing for every request.


System Prompt Behavior

The System Prompt field (prompt.ai only) is applied differently depending on the provider:

Provider                               Behavior
OpenAI, Anthropic, Gemini, OpenRouter  Sent as a structured system instruction via the provider API
System (CLI)                           Prepended to the prompt text before stdin is written

Limitations

  • No streaming — responses are returned only when complete
  • No retries — failed requests are not automatically retried
  • No template includes — {% include %}, {% extends %}, and {% import %} are not supported
  • No msg. prefix — use {{ payload }}, not {{ msg.payload }}
  • CLI timeout is soft — the process is not killed when the timeout fires; it will eventually exit on its own
  • Replicate image sizes are model-dependent — the node passes width/height but the model may ignore or reject unsupported values
  • MCP: one tool per request — the system calls at most one tool per message; tool chaining and multi-step reasoning are not supported
  • MCP: stdio transport only — HTTP/SSE MCP transports are not currently supported
  • MCP: not available for System (CLI) provider — CLI providers do not support structured tool calling
