This document is a guide to the “Send Response” node in the SigmaMind AI agent builder platform. This node is how your AI agent communicates with users across channels.
The “Send Response” node sends AI-generated responses back to the user or customer. It is the primary mechanism by which the AI conveys information, answers questions, or provides updates during a conversation.
The platform intelligently handles the delivery of responses based on the originating channel of the conversation. Whether the interaction began via chat, email, voice, SMS, WhatsApp, or Slack, the “Send Response” node ensures the AI’s message is delivered back to the user on the same channel.
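The channel-matching behavior described above can be pictured with a short sketch. This is purely illustrative — the function and field names below are assumptions, not the actual SigmaMind API:

```python
# Hypothetical sketch of channel-based response routing: the platform records
# the channel a conversation started on and reuses it when delivering the
# AI's reply. Names here are illustrative only.

def route_response(conversation: dict, message: str) -> dict:
    """Return a delivery payload addressed to the conversation's own channel."""
    channel = conversation["channel"]  # e.g. "chat", "email", "voice", "sms"
    return {"channel": channel, "to": conversation["user_id"], "body": message}

# A reply to a conversation that began over SMS goes back out over SMS:
payload = route_response(
    {"channel": "sms", "user_id": "u42"},
    "Your refund is being processed.",
)
```

The point is that the node itself is channel-agnostic: you configure the message once, and delivery follows the conversation's origin.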
The “Prompt” response option leverages a Large Language Model (LLM) to paraphrase or generate a message based on provided instructions. This allows for more dynamic and context-aware responses.
Usage:
Select the “Prompt” option for the response type.
Enter your desired message or instructions into the response box. The LLM will use these as a basis for generating the final output.
Example: If your input is “Confirm the user’s request for a refund,” the LLM might generate: “I’ve received your request for a refund. We’ll process it shortly.”
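One way to picture what the “Prompt” option does with your instruction: it becomes a directive for the LLM, paired with the conversation as context. The function and request shape below are assumptions for illustration, not SigmaMind internals:

```python
# Illustrative sketch only: the instruction typed into the response box is
# wrapped into a generation request, and the conversation supplies context
# so the LLM can produce an on-topic, dynamic reply.

def build_prompt_request(instruction: str, conversation_history: list) -> dict:
    return {
        "directive": f"Compose a reply to the user that does the following: {instruction}",
        "context": conversation_history,
    }

req = build_prompt_request(
    "Confirm the user's request for a refund",
    ["User: I'd like a refund for my last order."],
)
```

Because the conversation history is part of the request, the same instruction can yield different wording in different conversations.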
You can provide specific instructions within the response box to guide the LLM’s paraphrasing. These instructions help ensure the generated message aligns with your desired tone and style. For example, an instruction like “Keep the tone friendly and the reply under two sentences” shapes the output without dictating its exact wording.
Auto-Response: When selected, the AI’s response is sent automatically to the user.
Draft: If “Draft” is chosen, the response is prepared but not sent immediately. This can be useful for human agents to review and approve before sending.
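The difference between the two modes can be sketched as a simple branch. This is a hedged illustration of the behavior described above, not the platform's actual implementation:

```python
# Sketch of the two delivery modes (mode names and queues are assumptions):
# "auto" sends immediately; "draft" holds the message for human review.

def handle_response(mode: str, message: str, outbox: list, drafts: list) -> None:
    if mode == "auto":
        outbox.append(message)   # Auto-Response: delivered to the user right away
    elif mode == "draft":
        drafts.append(message)   # Draft: queued until a human agent approves it
    else:
        raise ValueError(f"unknown mode: {mode}")

outbox, drafts = [], []
handle_response("draft", "We'll process your refund shortly.", outbox, drafts)
# The message sits in drafts; nothing reaches the user until it is approved.
```

Draft mode is the natural choice for sensitive workflows (refunds, cancellations) where a human should sign off before anything is sent.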
This feature allows you to integrate pre-configured templates from your account, which is particularly useful when the platform is connected to a helpdesk system.
Usage:
Choose a pre-defined macro from the “Select Macro” dropdown.
Macros can be sent as-is or used as guidance for the prompt option, allowing the LLM to adapt the macro’s content based on the conversation context.
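The two ways of using a macro — verbatim or as LLM guidance — can be sketched as follows. The data shapes and names are illustrative assumptions, not the SigmaMind API:

```python
# Hypothetical sketch: a macro is either delivered as-is, or handed to the
# "Prompt" option as guidance so the LLM adapts it to the conversation.

MACROS = {
    "refund_ack": "We have received your refund request and will follow up soon.",
}

def resolve_macro(macro_id: str, as_guidance: bool) -> dict:
    text = MACROS[macro_id]
    if as_guidance:
        # Guidance mode: the macro text becomes an instruction for the LLM,
        # which rephrases it in context rather than sending it verbatim.
        return {"type": "prompt", "instruction": text}
    # As-is mode: the template text is delivered unchanged.
    return {"type": "static", "body": text}
```

Guidance mode keeps the macro's intent while letting the wording fit the conversation; as-is mode guarantees the exact approved text is sent.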