# @inductiv/node-red-openai-api 6.27.0

Enhance your Node-RED projects with advanced AI capabilities.
This project brings the OpenAI API, and OpenAI-compatible APIs, into Node-RED as workflow-native building blocks.
It is not just a thin wrapper around text generation. The node exposes modern AI capabilities inside a runtime people can inspect, route, test, and operate: request and response workflows, tools, conversations, streaming, realtime interactions, webhooks, and related API families that matter in real systems.
That makes this repository relevant beyond Node-RED alone. It is a practical implementation of how contemporary AI capabilities can live inside an open workflow environment instead of being locked inside a single vendor surface or hidden behind a one-purpose abstraction.
This package currently targets the `openai` Node SDK `^6.27.0`.
## Why This Exists
Modern AI work is no longer just "send a prompt, get a string back."
Real systems now involve:
- tool use
- multi-step workflows
- structured payloads
- streaming responses
- realtime sessions
- webhook verification
- provider compatibility and auth routing
Node-RED is already good at orchestration, automation, event handling, integration, and operational clarity. This project connects those strengths to the OpenAI API surface so teams can build AI workflows in an environment that stays visible and composable.
## Core Model

The node model in this repository is intentionally simple:

- one `OpenAI API` node handles the runtime method call
- one `Service Host` config node handles API base URL, auth, and organization settings
- the selected method determines which OpenAI API context is being called
- request data is passed in through a configurable message property, `msg.payload` by default
- method-specific details live in the editor help, example flows, and the underlying SDK contract
In practice, that means one node can cover a wide API surface without turning the flow itself into a maze of special-purpose nodes.
## What It Enables
### Request and Response Workflows
Use the node for direct generation, structured Responses API work, chat-style interactions, moderation, embeddings, image work, audio tasks, and other request/response patterns.
### Tool-Enabled and Multi-Step AI Flows
Use Responses tools, conversations, runs, messages, vector stores, files, skills, and related resources as part of larger control loops and operational workflows.
### Streaming and Realtime Work
Use streamed Responses output, Realtime client-secret creation, SIP call operations, and persistent Responses websocket connections where a flow needs more than one-shot request handling.
### Event-Driven Integrations
Use webhook signature verification and payload unwrapping in Node-RED flows that react to upstream platform events.
### OpenAI-Compatible Provider Support
Use the `Service Host` config to target compatible API providers with custom base URLs, custom auth header names, query-string auth routing, and typed configuration values.
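To make that concrete, here is a hypothetical sketch of the values such a configuration carries. The field names below are illustrative of the editor fields described in this README, not the node's literal config schema, and the endpoint is a made-up example.

```javascript
// Hypothetical Service Host settings for an OpenAI-compatible provider.
// These property names are illustrative only; the real values are set
// through the Service Host config node's editor fields.
const serviceHost = {
    apiBase: "https://api.example-provider.com/v1", // compatible endpoint instead of api.openai.com
    authHeader: "x-api-key",                        // provider-specific auth header name
    apiKeyType: "env",                              // cred | env | msg | flow | global
    apiKey: "PROVIDER_API_KEY"                      // env var name when the type is env
};
```

The point of the typed values is that the same flow can run against OpenAI in one deployment and a compatible provider in another by swapping only this config node.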
## Requirements

- Node.js `>=18.0.0`
- Node-RED `>=3.0.0`
## Install

### Node-RED Palette Manager

Search for `@inductiv/node-red-openai-api` in the Palette Manager.

### npm

```bash
cd $HOME/.node-red
npm i @inductiv/node-red-openai-api
```
## Quick Start

1. Drop an `OpenAI API` node onto your flow.
2. Create or select a `Service Host` config node.
3. Set `API Base` to your provider endpoint. The default OpenAI value is `https://api.openai.com/v1`.
4. Set `API Key` using either `cred` for a stored credential value, or `env`, `msg`, `flow`, or `global` for a runtime reference.
5. Pick a method on the `OpenAI API` node, such as `create model response`.
6. Send the request payload through `msg.payload`, or change the node's input property if your flow uses a different message shape.
Example `msg.payload` for `create model response`:

```json
{
  "model": "gpt-5-nano",
  "input": "Write a one-line status summary."
}
```
The node writes its output back to `msg.payload`.
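As a sketch of what downstream handling can look like, the Function node body below pulls the generated text out of a Responses-style result. It assumes the default `msg.payload` output property and the documented Responses shape (an `output` array of message items carrying `output_text` content parts); adjust it to the method you actually call.

```javascript
// Sketch of a Node-RED Function node body: extract the generated text
// from a Responses-style result delivered on msg.payload.
function extractOutputText(msg) {
    const result = msg.payload || {};
    const text = (result.output || [])
        .filter(item => item.type === "message")      // keep model message items
        .flatMap(item => item.content || [])          // flatten their content parts
        .filter(part => part.type === "output_text")  // keep the text parts
        .map(part => part.text)
        .join("");
    msg.payload = text;
    return msg;
}

// Example with a minimal Responses-style result:
const msg = extractOutputText({
    payload: {
        output: [
            { type: "message", content: [{ type: "output_text", text: "All systems nominal." }] }
        ]
    }
});
// msg.payload is now "All systems nominal."
```

In a real flow the function body goes directly into a Function node wired after the `OpenAI API` node, without the wrapper and example call.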
## Start Here
If you want to understand the shape of this node quickly, these example flows are the best entry points:
- `examples/chat.json`: a straightforward API-call flow for getting oriented.
- `examples/responses/phase.json`: a clean Responses example using newer payload features.
- `examples/responses/tool-search.json`: shows tool-enabled Responses work in a practical flow.
- `examples/responses/computer-use.json`: shows the request and follow-up contract for computer-use style workflows.
- `examples/responses/websocket.json`: shows explicit websocket lifecycle handling in one node instance.
- `examples/realtime/client-secrets.json`: shows the Realtime client-secret contract for browser or mobile handoff.
## Current Alignment Highlights

This repository currently includes:

- Responses API support, including `phase`, `prompt_cache_key`, `tool_search`, GA computer-use payloads, cancellation, compaction, input-token counting, and websocket mode
- Realtime API support, including client-secret creation, SIP call operations, and current SDK-typed model ids such as `gpt-realtime-1.5` and `gpt-audio-1.5`
- Conversations, Containers, Container Files, Evals, Skills, Videos, and Webhooks support
- OpenAI-compatible auth routing through the `Service Host` config node
See the in-editor node help for exact method payloads and links to official API documentation.
## API Surface
The method picker covers a wide range of OpenAI API families:
- Assistants
- Audio
- Batch
- Chat Completions
- Container Files
- Containers
- Conversations
- Embeddings
- Evals
- Files
- Fine-tuning
- Images
- Messages
- Models
- Moderations
- Realtime
- Responses
- Runs
- Skills
- Threads
- Uploads
- Vector Store File Batches
- Vector Store Files
- Vector Stores
- Videos
- Webhooks
Graders are supported through Evals payloads via `testing_criteria`, in the same way the official SDK models them.
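As a hedged illustration, an eval-creation `msg.payload` carrying one `string_check` grader could look roughly like this. The field names follow the OpenAI Evals API shape; verify them against the in-editor help and the SDK version you are running rather than treating this as a contract.

```javascript
// Illustrative eval-creation payload with a single string_check grader
// under testing_criteria. Names and values here are examples, not a
// schema guaranteed by this node.
const payload = {
    name: "summary-exact-match",
    data_source_config: {
        type: "custom",
        item_schema: {
            type: "object",
            properties: { expected: { type: "string" } },
            required: ["expected"]
        }
    },
    testing_criteria: [
        {
            type: "string_check",
            name: "exact match",
            input: "{{ sample.output_text }}",   // model output under test
            reference: "{{ item.expected }}",    // expected value from the data source
            operation: "eq"
        }
    ]
};
```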
## Example Index
Import-ready example flows live under `examples/`:
- `examples/assistants.json`
- `examples/audio.json`
- `examples/chat.json`
- `examples/embeddings.json`
- `examples/files.json`
- `examples/fine-tuning.json`
- `examples/images.json`
- `examples/messages.json`
- `examples/models.json`
- `examples/moderations.json`
- `examples/realtime/client-secrets.json`
- `examples/responses/computer-use.json`
- `examples/responses/mcp.json`
- `examples/responses/phase.json`
- `examples/responses/tool-search.json`
- `examples/responses/websocket.json`
- `examples/runs.json`
- `examples/threads.json`
## Service Host Notes
The Service Host config node handles the provider-specific runtime boundary.
- `API Key` supports `cred`, `env`, `msg`, `flow`, and `global`
- `API Base` can point at OpenAI or a compatible provider
- `Auth Header` defaults to `Authorization`, but can be changed for provider-specific auth conventions
- auth can be sent either as a header or as a query-string parameter
- `Organization ID` is optional and supports typed values like the other service fields
This is the piece that lets one runtime model work cleanly across both OpenAI and compatible API surfaces.
## Repository Shape
This repository is structured so the runtime, editor, examples, and generated artifacts stay understandable:
- `node.js`: Node-RED runtime entry point and `Service Host` config-node logic.
- `src/`: source modules for method implementations, editor templates, and help content.
- `src/lib.js`: source entry for the bundled runtime method surface.
- `lib.js`: generated runtime bundle built from `src/lib.js`.
- `src/node.html`: source editor template that includes the per-family fragments.
- `node.html`: generated editor asset built from `src/node.html`.
- `examples/`: import-ready Node-RED flows.
- `test/`: Node test coverage for editor behavior, auth routing, method mapping, and websocket lifecycle behavior.
## Development

```bash
npm install
npm run build
npm test
```
Generated files are part of the project:

- `node.html` is built from `src/node.html`
- `lib.js` is built from `src/lib.js`
If you change source templates or runtime source files, rebuild before review or release.
## Contributing
Contributions are welcome. Keep changes clear, intentional, and proven.
Please include:
- a clear scope and rationale
- tests for behavior changes
- a short plain-language comment block at the top of each test file you add or touch
- doc updates when user-facing behavior changes