This document is a guide to using the “Send Response” node in the SigmaMind AI agent builder platform. This node enables your AI agent to communicate with users across various channels.


Purpose

The “Send Response” node sends AI-generated responses back to the user or customer. It is the primary mechanism for the AI to convey information, answer questions, or provide updates during a conversation.


Channel Versatility

The platform intelligently handles the delivery of responses based on the originating channel of the conversation. Whether the interaction began via chat, email, voice, SMS, WhatsApp, or Slack, the “Send Response” node ensures the AI’s message is delivered back to the user on the same channel.


Response Types

The “Send Response” node offers two primary methods for generating responses:

1. Static Response

A static response allows you to define an exact message that will be sent to the user without any modification by an AI model.

Usage:

  • Type the desired response directly into the designated input field.
  • This response will be delivered verbatim, whether spoken by the AI in a voice call or displayed as text in chat/email.

Example:

“Thank you for contacting support. How can I assist you further?”

2. Prompt Response

The prompt response option leverages a Large Language Model (LLM) to paraphrase or generate a message based on provided instructions. This allows for more dynamic and context-aware responses.

Usage:

  • Select the “Prompt” option for the response type.
  • Enter your desired message or instructions into the response box. The LLM will use these as a basis for generating the final output.

Example:

If your input is “Confirm the user’s request for a refund,” the LLM might generate: “I’ve received your request for a refund. We’ll process it shortly.”
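The platform’s internal configuration format is not documented here, but the behavioral difference between the two response types can be sketched in a few lines. The following Python illustration uses hypothetical field names and a stubbed paraphrase_with_llm helper; it is not the platform’s actual API.

```python
# Minimal sketch of the two response types. The field names ("type", "text")
# and the paraphrase_with_llm stub are hypothetical, for illustration only.

def paraphrase_with_llm(instructions: str, context: str) -> str:
    """Stand-in for an LLM call that rewrites the instructions in context."""
    return f"[LLM-generated message based on: {instructions!r}]"

def build_response(node_config: dict, context: str = "") -> str:
    if node_config["type"] == "static":
        # Static responses are delivered verbatim; no LLM is involved.
        return node_config["text"]
    if node_config["type"] == "prompt":
        # Prompt responses are generated/paraphrased by the LLM.
        return paraphrase_with_llm(node_config["text"], context)
    raise ValueError(f"Unknown response type: {node_config['type']}")

print(build_response({"type": "static",
                      "text": "Thank you for contacting support. How can I assist you further?"}))
print(build_response({"type": "prompt",
                      "text": "Confirm the user's request for a refund."},
                     context="The user asked to return an item."))
```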

Adding Instructions for Prompt

You can provide specific instructions within the response box to guide the LLM’s paraphrasing. These instructions help ensure the generated message aligns with your desired tone and style.

Example Instructions:

  • “Keep it polite and concise.”
  • “Be empathetic and helpful.”
  • “Summarize the key points in bullet form.”
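Conceptually, these instructions are combined with the message text before the LLM generates the final wording. The sketch below shows one plausible way such a prompt could be assembled; the template is an assumption, not the platform’s actual prompt format.

```python
# Illustrative only: combining the response message with style instructions
# into a single prompt for the LLM. The real prompt template is internal to
# the platform and may differ.

def compose_prompt(message: str, instructions: list[str]) -> str:
    instruction_block = "\n".join(f"- {line}" for line in instructions)
    return (
        "Rewrite the following message for the user.\n"
        f"Message: {message}\n"
        "Style instructions:\n"
        f"{instruction_block}"
    )

print(compose_prompt(
    "Confirm the user's request for a refund.",
    ["Keep it polite and concise.", "Be empathetic and helpful."],
))
```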

Auto-Response and Macro Options

The “Send Response” node provides additional options for managing response delivery and leveraging pre-configured templates:

Auto-Response vs. Draft

  • Auto-Response: When selected, the AI’s response is sent automatically to the user.
  • Draft: If “Draft” is chosen, the response is prepared but not sent immediately. This can be useful for human agents to review and approve before sending.
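In effect, this is a delivery flag: the same generated message is either dispatched immediately or held for human review. A minimal sketch of that branching, using stand-in functions rather than real platform calls:

```python
# Hypothetical illustration of Auto-Response vs. Draft. The send_to_channel
# and save_as_draft functions are stand-ins, not real platform calls.

def send_to_channel(message: str) -> None:
    print(f"Sent to user: {message}")

def save_as_draft(message: str) -> None:
    print(f"Saved as draft for agent review: {message}")

def deliver(message: str, mode: str = "auto") -> None:
    if mode == "auto":
        send_to_channel(message)   # sent immediately, no human in the loop
    else:
        save_as_draft(message)     # a human agent reviews and sends it later

deliver("We'll process your refund shortly.", mode="draft")
```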

Select Macro

This feature allows you to integrate pre-configured templates from your account, particularly useful when the platform is connected to a helpdesk system.

Usage:

  • Choose a pre-defined macro from the “Select Macro” dropdown.
  • Macros can be sent as-is or used as guidance for the prompt option, allowing the LLM to adapt the macro’s content based on the conversation context.
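The sketch below illustrates the two ways a macro can be used: sent verbatim, or handed to the LLM as guidance and adapted to the conversation. The macro text and the adapt_macro helper are hypothetical.

```python
# Illustrative only: using a helpdesk macro either as-is or as LLM guidance.

def adapt_macro(macro_text: str, context: str) -> str:
    """Stand-in for an LLM call that tailors the macro to the conversation."""
    return f"[LLM-adapted version of: {macro_text!r}]"

macro = "Thanks for reaching out. Your ticket has been received and our team will follow up soon."

# Option 1: send the macro verbatim.
as_is = macro

# Option 2: use the macro as guidance for the prompt option, letting the LLM
# adapt its content to the conversation context.
adapted = adapt_macro(macro, "The customer is asking about a delayed shipment.")

print(as_is)
print(adapted)
```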

Advanced Settings

The “Send Response” node includes advanced settings for more complex communication scenarios:

Switch Channel

This powerful feature enables you to send a response on a different communication channel than the one the conversation is currently active on.

Usage:

  • During a voice call, you can use “Switch Channel” to send an SMS message to the user.
  • This allows for multi-channel engagement within a single conversation flow.
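Conceptually, the node replies on the conversation’s originating channel unless an override is set; “Switch Channel” supplies that override for the response being sent. A minimal sketch with hypothetical names:

```python
# Hypothetical sketch of channel selection with an optional override.

def resolve_channel(originating_channel: str, switch_channel: str | None = None) -> str:
    # By default the reply goes back on the channel the conversation started on;
    # "Switch Channel" overrides that target for this response.
    return switch_channel or originating_channel

# A conversation that started as a voice call, with an SMS follow-up.
print(resolve_channel("voice"))                        # -> voice
print(resolve_channel("voice", switch_channel="sms"))  # -> sms
```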

Include Brand Persona

When toggled on, this setting ensures that the AI’s responses align with your brand’s desired tone and style.

Usage:

  • The response instructions, along with any pre-defined brand persona instructions, are sent to the LLM.
  • The LLM then generates a message that reflects your brand’s voice, whether it’s formal, casual, empathetic, or humorous.
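In effect, enabling the toggle adds the brand persona instructions to whatever instructions the node already sends to the LLM. A small sketch of that merge, with a hypothetical persona string:

```python
# Illustrative only: merging brand persona instructions with the node's own
# response instructions when "Include Brand Persona" is toggled on.

BRAND_PERSONA = "Write in a warm, empathetic tone and keep sentences short."

def build_llm_instructions(response_instructions: str, include_persona: bool) -> str:
    if include_persona:
        return f"{BRAND_PERSONA}\n{response_instructions}"
    return response_instructions

print(build_llm_instructions("Confirm the user's request for a refund.", include_persona=True))
```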