@inductiv/node-red-openai-api 6.27.0

Enhance your Node-RED projects with advanced AI capabilities.

npm install @inductiv/node-red-openai-api


This project brings the OpenAI API, and OpenAI-compatible APIs, into Node-RED as workflow-native building blocks.

It is not just a thin wrapper around text generation. The node exposes modern AI capabilities inside a runtime people can inspect, route, test, and operate: request and response workflows, tools, conversations, streaming, realtime interactions, webhooks, and related API families that matter in real systems.

That makes this repository relevant beyond Node-RED alone. It is a practical implementation of how contemporary AI capabilities can live inside an open workflow environment instead of being locked inside a single vendor surface or hidden behind a one-purpose abstraction.

This package currently targets the openai Node SDK ^6.27.0.

Why This Exists

Modern AI work is no longer just "send a prompt, get a string back."

Real systems now involve:

  • tool use
  • multi-step workflows
  • structured payloads
  • streaming responses
  • realtime sessions
  • webhook verification
  • provider compatibility and auth routing

Node-RED is already good at orchestration, automation, event handling, integration, and operational clarity. This project connects those strengths to the OpenAI API surface so teams can build AI workflows in an environment that stays visible and composable.

Core Model

The node model in this repository is intentionally simple:

  • one OpenAI API node handles the runtime method call
  • one Service Host config node handles API base URL, auth, and organization settings
  • the selected method determines which OpenAI API context is being called
  • request data is passed in through a configurable message property, msg.payload by default
  • method-specific details live in the editor help, example flows, and the underlying SDK contract

In practice, that means one node can cover a wide API surface without turning the flow itself into a maze of special-purpose nodes.
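
In a flow, that usually means a Function node shaping the request just before the OpenAI API node. A minimal sketch, assuming the default msg.payload input property and the create model response method (the helper name is illustrative; inside a real Function node the body alone is enough):

```javascript
// Function-node sketch: build the request the OpenAI API node will send.
// Assumes the node reads its request from msg.payload (the default) and
// that the selected method is "create model response".
function buildRequest(msg) {
  msg.payload = {
    model: "gpt-5-nano", // model id reused from the Quick Start example below
    input: "Summarize: " + (msg.topic || "status unknown"),
  };
  return msg; // in a real Function node, this return is the node's output
}
```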

What It Enables

Request and Response Workflows

Use the node for direct generation, structured Responses API work, chat-style interactions, moderation, embeddings, image work, audio tasks, and other request/response patterns.

Tool-Enabled and Multi-Step AI Flows

Use Responses tools, conversations, runs, messages, vector stores, files, skills, and related resources as part of larger control loops and operational workflows.

Streaming and Realtime Work

Use streamed Responses output, Realtime client-secret creation, SIP call operations, and persistent Responses websocket connections where a flow needs more than one-shot request handling.
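
Streamed Responses output arrives as a sequence of typed events rather than one final payload. As a sketch of the consuming side, assuming events shaped like the openai SDK's `response.output_text.delta` and `response.completed` events (a mock stream stands in for a live connection here):

```javascript
// Sketch: accumulate text from a stream of Responses-style events.
// Assumes each event carries a `type` and, for text chunks, a `delta`
// string, mirroring the event names used by the openai Node SDK.
async function collectText(events) {
  let text = "";
  for await (const event of events) {
    if (event.type === "response.output_text.delta") {
      text += event.delta; // append each streamed fragment
    } else if (event.type === "response.completed") {
      break; // terminal event: the response is done
    }
  }
  return text;
}

// Mock stream for illustration; a real flow would iterate live SDK events.
async function* mockStream() {
  yield { type: "response.output_text.delta", delta: "All systems " };
  yield { type: "response.output_text.delta", delta: "nominal." };
  yield { type: "response.completed" };
}
```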

Event-Driven Integrations

Use webhook signature verification and payload unwrapping in Node-RED flows that react to upstream platform events.

OpenAI-Compatible Provider Support

Use the Service Host config to target compatible API providers with custom base URLs, custom auth header names, query-string auth routing, and typed configuration values.

Requirements

  • Node.js >=18.0.0
  • Node-RED >=3.0.0

Install

Node-RED Palette Manager

Search for @inductiv/node-red-openai-api in the Palette Manager and install it.

npm

cd $HOME/.node-red
npm i @inductiv/node-red-openai-api

Quick Start

  1. Drop an OpenAI API node onto your flow.
  2. Create or select a Service Host config node.
  3. Set API Base to your provider endpoint. The default OpenAI value is https://api.openai.com/v1.
  4. Set API Key using either:
    • cred for a stored credential value, or
    • env, msg, flow, or global for a runtime reference
  5. Pick a method on the OpenAI API node, such as create model response.
  6. Send the request payload through msg.payload, or change the node's input property if your flow uses a different message shape.

Example msg.payload for create model response:

{
  "model": "gpt-5-nano",
  "input": "Write a one-line status summary."
}

The node writes its output back to msg.payload.
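
The exact output shape depends on the selected method. For Responses-style results, downstream nodes often only need the generated text; a hedged sketch of extracting it, assuming the `output` array of message items with `output_text` content parts that the Responses API documents (check the in-editor help for the method you use):

```javascript
// Sketch: pull generated text out of a Responses-style result object.
// Assumes result.output is an array of items whose content includes
// parts of type "output_text" carrying a `text` string.
function extractText(result) {
  const parts = [];
  for (const item of result.output ?? []) {
    for (const content of item.content ?? []) {
      if (content.type === "output_text") parts.push(content.text);
    }
  }
  return parts.join("");
}
```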

Start Here

If you want to understand the shape of this node quickly, the import-ready example flows under examples/ are the best entry points.

Current Alignment Highlights

This repository currently includes:

  • Responses API support, including phase, prompt_cache_key, tool_search, GA computer-use payloads, cancellation, compaction, input-token counting, and websocket mode
  • Realtime API support, including client-secret creation, SIP call operations, and current SDK-typed model ids such as gpt-realtime-1.5 and gpt-audio-1.5
  • Conversations, Containers, Container Files, Evals, Skills, Videos, and Webhooks support
  • OpenAI-compatible auth routing through the Service Host config node

See the in-editor node help for exact method payloads and links to official API documentation.

API Surface

The method picker covers a wide range of OpenAI API families:

  • Assistants
  • Audio
  • Batch
  • Chat Completions
  • Container Files
  • Containers
  • Conversations
  • Embeddings
  • Evals
  • Files
  • Fine-tuning
  • Images
  • Messages
  • Models
  • Moderations
  • Realtime
  • Responses
  • Runs
  • Skills
  • Threads
  • Uploads
  • Vector Store File Batches
  • Vector Store Files
  • Vector Stores
  • Videos
  • Webhooks

Graders are supported through Evals payloads via testing_criteria, in the same way the official SDK models them.
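
As a hedged sketch only, a grader inside an Evals payload's testing_criteria could look like the fragment below. The string_check grader type and the template variables follow the official SDK's Evals types but are illustrative here, so confirm the exact shape against the in-editor help:

```json
{
  "testing_criteria": [
    {
      "type": "string_check",
      "name": "exact-status-match",
      "input": "{{ sample.output_text }}",
      "reference": "{{ item.expected }}",
      "operation": "eq"
    }
  ]
}
```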

Example Index

Import-ready example flows live under examples/.

Service Host Notes

The Service Host config node handles the provider-specific runtime boundary.

  • API Key supports cred, env, msg, flow, and global
  • API Base can point at OpenAI or a compatible provider
  • Auth Header defaults to Authorization, but can be changed for provider-specific auth conventions
  • auth can be sent either as a header or as a query-string parameter
  • Organization ID is optional and supports typed values like the other service fields

This is the piece that lets one runtime model work cleanly across both OpenAI and compatible API surfaces.
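
To make the header-versus-query distinction concrete, here is an illustrative sketch, not this package's source, of where each auth style lands on an outgoing request; the api-key header and query parameter names are assumptions chosen for the example:

```javascript
// Sketch: apply an API key either as a request header or as a
// query-string parameter, depending on the provider's convention.
function applyAuth(url, headers, { key, headerName = "Authorization", asQuery = false }) {
  if (asQuery) {
    const u = new URL(url);
    u.searchParams.set("api-key", key); // illustrative query parameter name
    return { url: u.toString(), headers };
  }
  // The default Authorization header conventionally takes a Bearer prefix;
  // provider-specific headers usually carry the raw key.
  const value = headerName === "Authorization" ? `Bearer ${key}` : key;
  return { url, headers: { ...headers, [headerName]: value } };
}
```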

Repository Shape

This repository is structured so the runtime, editor, examples, and generated artifacts stay understandable:

  • node.js: Node-RED runtime entry point and Service Host config-node logic.
  • src/: source modules for method implementations, editor templates, and help content.
  • src/lib.js: source entry for the bundled runtime method surface.
  • lib.js: generated runtime bundle built from src/lib.js.
  • src/node.html: source editor template that includes the per-family fragments.
  • node.html: generated editor asset built from src/node.html.
  • examples/: import-ready Node-RED flows.
  • test/: Node test coverage for editor behavior, auth routing, method mapping, and websocket lifecycle behavior.

Development

npm install
npm run build
npm test

Generated files are part of the project:

  • node.html is built from src/node.html
  • lib.js is built from src/lib.js

If you change source templates or runtime source files, rebuild before review or release.

Contributing

Contributions are welcome. Keep changes clear, intentional, and proven.

Please include:

  • a clear scope and rationale
  • tests for behavior changes
  • a short plain-language comment block at the top of each test file you add or touch
  • doc updates when user-facing behavior changes

License

MIT
