Context-Aware Chatbot: OpenAI Integration with Flow Memory
🚀 Hello Node-RED enthusiasts! Sharing how we can natively add chat memory to AI-powered conversations with this "Context-Aware Chatbot" demo flow. This approach bridges Node-RED's simplicity with the versatility of OpenAI-compliant API servers, offering an ease of use that stands out alongside tools like Flowise, Langflow, and LangChain.
Why This Flow?
- Adaptable Context Memory: Smartly keeps track of conversations, ensuring relevant and context-rich interactions.
- OpenAI Flexibility: Use OpenAI's hosted platform or point the flow at any OpenAI-compliant API server; it works great with LM Studio and Text Generation Web UI.
- Interactive Testing: Includes intuitive inject nodes to let you easily simulate and explore various conversation scenarios.
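The context memory is just a few lines of JavaScript in a function node. Here is the same logic from the "Chat Memory" node in the flow below, wrapped in a minimal standalone sketch where the flow context is simulated with a plain object (in Node-RED itself, `flow` is provided for you):

```javascript
// Simulated flow context -- in a real function node, Node-RED supplies `flow`.
const flowContext = {};
const flow = {
  get: (key) => flowContext[key],
  set: (key, value) => { flowContext[key] = value; },
};

// Logic of the "Chat Memory" function node from the flow below.
function chatMemory(msg) {
  let messages = flow.get('messages') || [];

  if (msg.payload.object) {
    // This is a chat completion response: append the assistant's reply.
    messages.push(msg.payload.choices[0].message);
  } else {
    // This is an incoming user message (system + user on the first turn).
    messages.push(...msg.payload.messages);
    delete msg.payload.messages; // no longer needed beyond this point
  }

  flow.set('messages', messages);
  msg.payload.messages = messages;
  return msg;
}
```

Every message that passes through the node, whether a new user message or the model's reply, is appended to the flow-scoped `messages` array, so `payload.messages` always carries the full conversation history into the next Chat Completion call.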
How You Can Use It:
- Import this flow into your Node-RED environment.
- Install the @inductiv/node-red-openai-api node (via the Palette Manager or npm) for seamless connectivity with OpenAI's API or other compatible servers.
- Experiment by injecting messages to see the chatbot's intelligent contextual responses.
- Use the clear memory function to start fresh conversations whenever you need.
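To simulate your own conversation turns, send a message shaped like the demo's inject nodes. This sketch mirrors the payload of the "1. System Message" inject node (model name and temperature are the demo's defaults; swap the Service Host base URL to target a local server instead):

```javascript
// Payload shape expected by the Chat Memory -> Chat Completion chain.
const msg = {
  payload: {
    model: 'gpt-4-1106-preview',
    temperature: 0.9,
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      {
        role: 'user',
        content: 'Hi! Can you tell me when daylight savings time 2023 ends in the continental United States?',
      },
    ],
  },
};
```

Follow-up turns only need a `messages` array with the new user message; the Chat Memory node prepends the stored history automatically.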
🚀🌐🤖
Let's make Node-RED the tool for advanced AIoT development!
[{"id":"bfb3bf6a21067928","type":"tab","label":"Flow-based Chat Memory","disabled":false,"info":"","env":[]},{"id":"9efb9986abde06ea","type":"debug","z":"bfb3bf6a21067928","name":"response","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload.messages","targetType":"msg","statusVal":"","statusType":"auto","x":1020,"y":300,"wires":[]},{"id":"7372d6ef8decd6d9","type":"inject","z":"bfb3bf6a21067928","name":"1. System Message","props":[{"p":"payload.messages","v":"[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"Hi! Can you tell me when daylight savings time 2023 ends in the continental United States?\"}]","vt":"json"},{"p":"payload.model","v":"gpt-4-1106-preview","vt":"str"},{"p":"payload.temperature","v":"0.9","vt":"num"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","x":350,"y":260,"wires":[["280368fb2958c21e"]]},{"id":"b9ef39c889acb3bd","type":"OpenAI API","z":"bfb3bf6a21067928","name":"Chat Completion","service":"6bf3be50f1f3a262","method":"createChatCompletion","x":750,"y":380,"wires":[["280368fb2958c21e"]]},{"id":"280368fb2958c21e","type":"function","z":"bfb3bf6a21067928","name":"Chat Memory","func":"let messages = flow.get('messages') || [];\n\nif (msg.payload.object){\n // this is a chat completion response\n messages.push(msg.payload.choices[0].message);\n} else {\n // this is an incoming user message\n messages.push(...msg.payload.messages)\n delete msg.payload.messages; // no longer needed beyond this point.\n}\n\nflow.set('messages', messages);\nmsg.payload.messages = messages;\nreturn msg;\n","outputs":1,"timeout":0,"noerr":0,"initialize":"","finalize":"","libs":[],"x":620,"y":300,"wires":[["b480d24f28b70590"]],"icon":"font-awesome/fa-microchip"},{"id":"4f754fb284b1b42a","type":"inject","z":"bfb3bf6a21067928","name":"Clear","props":[],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","x":570,"y":520,"wires":[["28dbac874c3c379a"]]},{"id":"b482bcbe7bbe5d56","type":"debug","z":"bfb3bf6a21067928","name":"response","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":920,"y":520,"wires":[]},{"id":"28dbac874c3c379a","type":"function","z":"bfb3bf6a21067928","name":"Clear Memory","func":"flow.set(\"messages\", []);\n\nreturn msg;","outputs":1,"timeout":0,"noerr":0,"initialize":"","finalize":"","libs":[],"x":740,"y":520,"wires":[["b482bcbe7bbe5d56"]]},{"id":"b480d24f28b70590","type":"switch","z":"bfb3bf6a21067928","name":"Continue or Done","property":"payload.object","propertyType":"msg","rules":[{"t":"eq","v":"chat.completion","vt":"str"},{"t":"else"}],"checkall":"true","repair":false,"outputs":2,"x":830,"y":300,"wires":[["9efb9986abde06ea"],["b9ef39c889acb3bd"]]},{"id":"13f06252dc61ff81","type":"inject","z":"bfb3bf6a21067928","name":"2. User Message 1","props":[{"p":"payload.messages","v":"[{\"role\":\"user\",\"content\":\"Thanks! How about in 2024?\"}]","vt":"json"},{"p":"payload.model","v":"gpt-4-1106-preview","vt":"str"},{"p":"payload.temperature","v":"0.9","vt":"num"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","x":350,"y":300,"wires":[["280368fb2958c21e"]]},{"id":"6c165a067b733835","type":"inject","z":"bfb3bf6a21067928","name":"3. User Message 2","props":[{"p":"payload.model","v":"gpt-4-1106-preview","vt":"str"},{"p":"payload.temperature","v":"0.9","vt":"num"},{"p":"payload.messages","v":"[{\"role\":\"user\",\"content\":\"Got it. Thanks! Does this happen around the same time every year?\"}]","vt":"json"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","x":350,"y":340,"wires":[["280368fb2958c21e"]]},{"id":"3fa240e52c50090b","type":"comment","z":"bfb3bf6a21067928","name":"Requirement: ☝️ @inductiv/node-red-openai-api node.","info":"Install via the Node-RED Palette Manager or via NPM:\nnpm install @inductiv/node-red-openai-api\n\n## Note\nNode works with any OpenAI-compliant API server.","x":740,"y":440,"wires":[],"icon":"font-awesome/fa-info-circle"},{"id":"86136c735f777368","type":"comment","z":"bfb3bf6a21067928","name":"Simulate conversation sequence: 1, 2, 3...","info":"","x":400,"y":220,"wires":[]},{"id":"3195dd1fc85413d5","type":"comment","z":"bfb3bf6a21067928","name":"Demo: OpenAI Chat Completion with Flow-based Conversation Memory","info":"This is a simple demo highlighting native, flow-based chat completion conversation memory.","x":690,"y":160,"wires":[]},{"id":"6bf3be50f1f3a262","type":"Service Host","apiBase":"https://api.openai.com/v1","secureApiKeyHeaderOrQueryName":"Authorization","name":"OpenAI Auth"}]