Features

Creativity Level (Temperature)

Controls the level of randomness in the model's responses. A lower value (e.g., 0.2) makes the responses more focused and deterministic, meaning the model will be more likely to choose the most probable next word. A higher value (e.g., 0.8) increases randomness and creativity, allowing the model to explore more diverse and less probable word choices.

Max Tokens Limit (Maximum Tokens)

Sets a limit on the number of tokens (words or subwords) that the model can generate in its response. It helps control the length of the output.

Probability Cutoff (Top P Probability)

Adjusts the diversity of the response by sampling only from the tokens that make up the top P portion of the probability mass. For instance, if set to 0.9, the model considers only the smallest set of words whose cumulative probability reaches 90%. This technique is known as nucleus sampling.
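As a rough illustration of the idea (not GTWY's actual sampler), nucleus filtering can be sketched like this:

```javascript
// Illustrative sketch of nucleus (top-p) filtering: keep the smallest
// set of tokens whose cumulative probability reaches the threshold p.
function topPFilter(tokenProbs, p) {
  // Sort candidates from most to least probable.
  const sorted = [...tokenProbs].sort((a, b) => b.prob - a.prob);
  const kept = [];
  let cumulative = 0;
  for (const candidate of sorted) {
    kept.push(candidate);
    cumulative += candidate.prob;
    if (cumulative >= p) break; // threshold reached: stop adding tokens
  }
  return kept.map((c) => c.token);
}

// With p = 0.9, "the" and "a" already cover 90% of the mass, so the
// least likely candidate is dropped from the sampling pool.
console.log(topPFilter([
  { token: "the", prob: 0.5 },
  { token: "a", prob: 0.4 },
  { token: "an", prob: 0.1 },
], 0.9)); // → [ 'the', 'a' ]
```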

Log Probability

When enabled, returns the log probabilities of the generated tokens. This can be useful for understanding how confident the model is in its choices.

Repetition Penalty (Frequency Penalty)

Applies a penalty based on how frequently a token has already appeared in the generated text. It reduces the likelihood of the model repeating the same words, promoting more varied language.

Novelty Penalty (Presence Penalty)

Similar to the frequency penalty, this discourages the model from repeating tokens that have already appeared in the response, encouraging it to generate new content instead.

Response Count (Number of Completions)

Specifies how many separate completions or responses the model should generate for a given prompt. It allows users to receive multiple responses and choose the best one.

Stop Sequence

Defines specific sequences of characters or words at which the model should stop generating further tokens. It helps to control the endpoint of the generated response.

Tool Choice

Specifies the particular tool the model should use for generating the response. It allows for more targeted and relevant output.

Response Type (Response Format)

Defines the format in which the response will be returned, such as plain text or JSON. It ensures the output is structured in a way that meets the user's needs.

Stop Sequences

An array of sequences at which the model should stop generating tokens. This can be useful for setting multiple end conditions for the response.

STOP (Stop Key)

An arbitrary key used to stop generation. This key has no standard behavior across providers and is included here only as an example.

Top K

Limits the sampling pool to the top K most probable tokens. It promotes more deterministic responses by focusing on the most likely next words.
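To make these knobs concrete, here is a hypothetical request body that combines several of them. The field names and default values are illustrative assumptions, not GTWY's exact API schema:

```javascript
// Hypothetical sketch of a chat-completion request body combining the
// parameters described above. Field names and defaults are assumptions.
function buildCompletionRequest(prompt, overrides = {}) {
  const defaults = {
    temperature: 0.7,      // creativity level
    max_tokens: 256,       // response length cap
    top_p: 0.9,            // nucleus-sampling probability cutoff
    top_k: 40,             // sampling pool size
    frequency_penalty: 0,  // repetition penalty
    presence_penalty: 0,   // novelty penalty
    n: 1,                  // number of completions
    stop: null,            // stop sequence(s)
  };
  // Caller-supplied overrides win over the defaults.
  return { prompt, ...defaults, ...overrides };
}

const req = buildCompletionRequest("Summarize this article.", {
  temperature: 0.2,  // more focused and deterministic
  stop: ["\n\n"],    // stop at the first blank line
});
```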

Advanced Parameters
Apr 8, 2025

Overview

The Testcases feature in GTWY AI evaluates how well your system prompt and configuration perform against an expected response, enabling you to refine prompts for improved accuracy when calling LLM service APIs such as OpenAI and Anthropic.

How It Works

For each test case, the user provides:

  1. User Message - The input prompt that the system will process.

  2. Expected Response or Tool Call - The desired output, either a direct response or a tool invocation. Note that no tool is actually invoked while running a test case.

Test cases can also be created from previous interactions stored in the bridge history.

Then, select a bridge version - the specific version of the bridge against which the test cases will run.

GTWY AI then calls the selected LLM API with your version configuration and evaluates the response using one of three matching methods:

Matching Methods

  1. Exact Matching - Compares both the type and value of the expected and actual response for a precise match.

  2. AI Matching - Uses another LLM to assess the accuracy of the actual response relative to the expected one and provides a score.

  3. Similarity Matching - Measures the similarity between the expected and actual response using cosine similarity and provides a score.
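As a rough sketch of how Similarity Matching could work under the hood: both responses are embedded as vectors (the embedding step is omitted here) and compared with cosine similarity. This is illustrative, not GTWY's internal scorer:

```javascript
// Cosine similarity between two embedding vectors of equal length:
// dot(a, b) / (|a| * |b|). A score near 1 means the expected and
// actual responses are semantically close.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical embedding vectors score 1; orthogonal ones score 0.
cosineSimilarity([1, 0], [1, 0]); // → 1
cosineSimilarity([1, 0], [0, 1]); // → 0
```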

Score Evaluation

Instead of a pass/fail verdict, the system provides a score based on the chosen matching method. This score helps users gauge how closely the actual response aligns with the expected output, allowing for iterative improvements. GTWY AI also displays the scores from previous versions, so when running test cases on a specific version, users can compare new scores with past performance, track improvements, and adjust their prompts accordingly.

Testcases
Apr 8, 2025

Introducing RAG: Retrieval-Augmented Generation

Have you ever faced a situation where you're handed a massive document and asked to “just find what’s relevant”? Yeah, we’ve all been there—and let’s be honest, nobody has time to read through pages of documentation just to get one answer. That’s exactly where Retrieval-Augmented Generation (RAG) steps in as your trusty sidekick.

RAG is a powerful feature designed to simplify information retrieval by letting you add knowledge bases in various formats. Once added, these knowledge bases can be used to answer queries and fetch data instantly—saving you tons of time and effort.


🔍 How RAG Works

RAG operates on a smart mechanism called chunking. Here’s the gist:

  1. Large documents or data sources are broken down into smaller, more manageable pieces called chunks.

  2. These chunks are created meaningfully—ensuring that each one carries a coherent portion of information.

  3. When a user asks a question, RAG scans through these chunks to find the most relevant ones.

  4. The filtered chunks are then passed to the AI, which analyzes them and generates a precise, accurate response.

So instead of slogging through a 100-page document, you get the answer you need in seconds. Neat, right?
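The retrieval step described above can be sketched with toy, made-up embedding vectors (real systems use learned embeddings):

```javascript
// Score each chunk's embedding against the query embedding and keep
// the k most relevant chunks. The vectors below are invented purely
// for illustration.
function cosine(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

function topChunks(queryVec, chunks, k) {
  return [...chunks]
    .sort((a, b) => cosine(queryVec, b.vec) - cosine(queryVec, a.vec))
    .slice(0, k)
    .map((c) => c.text);
}

const chunks = [
  { text: "Refund policy: 30 days.", vec: [0.9, 0.1] },
  { text: "Office hours: 9 to 5.",   vec: [0.1, 0.9] },
];
topChunks([1, 0], chunks, 1); // → [ 'Refund policy: 30 days.' ]
```

Only the winning chunks are passed to the LLM, which is why answers arrive in seconds instead of requiring a full read of the document.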


🧠 Adding Knowledge Bases to RAG

RAG supports multiple methods for adding your knowledge base:

  • File Upload
    You can upload documents in various formats like .docx, .pdf, .csv, and .txt. Especially for .csv files, RAG applies specialized analysis to create more meaningful chunks.

  • URL to Docs
    Add links to Google Sheets or other online documents. RAG can directly access and process data from these URLs.

  • Website Crawling
    Want to feed your knowledge base from a website? Just provide the URL, and RAG will crawl through the site and fetch relevant content.

  • YouTube Video Crawling
    Yes, you read that right—RAG can even extract and analyze content from YouTube videos. So your video tutorials or product demos can also serve as knowledge sources.

With this flexibility, you have the freedom to build a robust knowledge base from diverse sources.


🧩 The Art of Smart Chunking in RAG

Let’s talk about how RAG creates these intelligent chunks. It uses three different methods:

  • Recursive Chunking
    This ensures that no chunk exceeds a specified maximum size. Chunks can be smaller but never larger, maintaining consistent, digestible bits of data.

  • Semantic Chunking
    Here, RAG analyzes the entire document in one go and breaks it down based on semantic relationships—grouping together information that “makes sense” together.

  • AI-Based Chunking
    In this method, AI determines the most meaningful breakpoints based on the content and structure of the document. It’s adaptive, smart, and highly contextual.
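As a simplified sketch of the recursive idea (real chunkers are considerably more sophisticated): split on paragraph breaks first, then sentences, then words, until every piece fits under the size limit:

```javascript
// Recursive chunking sketch: try the coarsest separator first and fall
// back to finer ones, so no chunk ever exceeds maxLen characters.
function recursiveChunk(text, maxLen, separators = ["\n\n", ". ", " "]) {
  if (text.length <= maxLen || separators.length === 0) return [text];
  const [sep, ...rest] = separators;
  const parts = text.split(sep);
  // Current separator did not split anything: try the next, finer one.
  if (parts.length === 1) return recursiveChunk(text, maxLen, rest);
  // Re-chunk each part, since a part may still be too long.
  return parts.flatMap((part) => recursiveChunk(part, maxLen, separators));
}

const doc = "Intro paragraph.\n\nSecond paragraph with more detail.";
recursiveChunk(doc, 40);
// → [ 'Intro paragraph.', 'Second paragraph with more detail.' ]
```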


🎯 Wrapping Up

RAG is your go-to tool when you're dealing with heavy documents and tight deadlines. Whether it’s customer support, internal documentation, product manuals, or technical references—RAG lets you turn those documents into intelligent, searchable knowledge bases.

Stop wasting time scrolling, and start asking. Let RAG do the heavy lifting.

Knowledge Base
Apr 8, 2025

Overview

As Large Language Models (LLMs) like GPT evolve, they’re no longer just text generators — they’re becoming intelligent agents capable of performing tasks, fetching real-time data, running calculations, and even generating images. One of the key features powering this evolution is Connect Apps.

🔌 How to Connect?

Connecting a function, API, or app is super simple. Just follow these steps:

  1. Look Below the Prompt:
    Right under the chat or input prompt, you’ll find the option “Connect Function” — click on it.

  2. Choose from the List:
    A list of all available functions, APIs, or apps will appear. Select the one you want to connect.

  3. Want to Add a New Function?
    No worries! There’s also an “Add New Function” button right in that list. Clicking it opens up options like:

    • Connecting to an existing app

    • Adding a custom JavaScript (JS) function

    • Integrating an API

    • Designing a Flow to handle complex tasks

  4. Build and Publish:
    Once you’ve created your function, API call, or flow — Save/Publish it.
    ✅ After publishing, your new function will automatically show up in the “Connect Function” list — ready to use anytime.

Why Function Calls Are Useful

  1. Access Real-Time Data
    → Example: Fetch current weather, stock prices, sports scores, etc.

  2. Perform Complex Computations
    → Example: Calculate large mathematical operations, generate plots, or run Python code.

  3. Generate or Edit Images
    → Example: Create diagrams, generate AI images, edit uploaded images.

  4. Handle Structured Data
    → Example: Read, write, and manipulate CSV or Excel files.

  5. Connect to External APIs
    → Example: Call a travel API to book tickets or query a database.

Why Should You Care?

  • Unlocks LLM Superpowers 🦸
    → Beyond text generation, tools let LLMs interact with the real world, APIs, and external systems.

  • Dynamic and Useful
    → Combines reasoning with real-world action (like a personal assistant that can both think and do).

Connect Apps
Apr 15, 2025

Maximizing Efficiency with Batch API: How GTWY.AI Helps You Manage Your Requests and Responses

In today’s fast-paced digital world, efficiency is key. Especially when working with complex systems like Large Language Models (LLMs), managing multiple requests can be a daunting task. This is where Batch APIs come into play, offering a more streamlined way of sending requests without expecting an immediate response. In this blog, we will explore the benefits of using Batch APIs and how GTWY.AI can help you efficiently track requests and responses.

What is a Batch API?

A Batch API allows users to send multiple requests to a server in one go, without needing an immediate response for each request. This is particularly useful when dealing with systems like LLMs (Large Language Models) where each request can take time to process. Instead of waiting for each individual response, requests are grouped together and sent as a batch.

The server processes these requests and sends the responses back in due course. This allows users to continue other tasks without being stalled by each request’s processing time.

Why Use a Batch API?

Using a Batch API can significantly enhance efficiency in various scenarios. Here’s why:

  1. Time Efficiency: When dealing with a large volume of requests, waiting for individual responses can waste a lot of time. A Batch API allows you to submit requests in bulk, saving you the hassle of waiting for each individual result.

  2. Reduced Server Load: Sending requests in bulk can reduce the number of HTTP connections your server needs to handle, making the system more efficient overall.

  3. Asynchronous Processing: Batch APIs enable asynchronous processing, meaning you don't need to block your workflows while waiting for a response.

  4. Scalability: Batch APIs are ideal for scaling your operations, especially when you're dealing with high volumes of data or numerous requests.

  5. Cost Efficiency: One of the most compelling reasons to use a Batch API is the significant reduction in cost. Because requests are submitted in bulk rather than individually, Batch APIs can reduce the overall cost by nearly half for most Large Language Models (LLMs).
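For illustration, the bulk submission could be assembled like this; the field names (`requests`, `webhook_url`) are assumptions for the sketch, not GTWY.AI's documented schema:

```javascript
// Group individual prompts into one batch payload, with a webhook URL
// where the asynchronous responses will be delivered later.
function buildBatchPayload(prompts, webhookUrl) {
  return {
    webhook_url: webhookUrl, // responses are POSTed here when ready
    requests: prompts.map((prompt, i) => ({
      id: `req-${i}`, // lets you match each async response to its request
      prompt,
    })),
  };
}

const batch = buildBatchPayload(
  ["Summarize doc A", "Summarize doc B"],
  "https://example.com/hooks/batch-results"
);
// `batch` is sent in a single HTTP POST instead of one call per prompt.
```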

Tracking Responses with GTWY.AI

While Batch APIs can be time-saving, they come with their own challenges. One of the biggest hurdles is keeping track of the responses. Since you won't be receiving responses in real-time, you need a robust system to log and track responses as they come in. GTWY.AI is designed to handle exactly this challenge.

When you send requests to GTWY.AI via the Batch API format, all you need to do is provide a webhook. This webhook is where GTWY.AI will send the response once the server has processed it. The platform handles all the logistics of logging your requests and responses, so you don’t have to set it up yourself.

How It Works:

  1. Submit a Request: You send your requests to GTWY.AI in batch format, alongside a webhook URL.

  2. Processing: GTWY.AI forwards the requests to the appropriate server. The server processes them asynchronously.

  3. Track Responses: As the server sends back the responses, GTWY.AI logs them and hits your specified webhook with each response.

  4. No Hassle for You: You don’t need to worry about tracking, logging, or handling the responses. GTWY.AI takes care of all of that, allowing you to focus on other important tasks.


Why Choose GTWY.AI for Your Batch API Needs?

Using GTWY.AI to manage your Batch API requests saves you both time and resources. Instead of manually tracking each request and handling responses individually, our platform simplifies the entire process. GTWY.AI automatically logs requests, tracks responses, and ensures you’re always in the loop.

This means less effort on your part, a more efficient workflow, and the ability to scale your operations without getting bogged down by administrative tasks.

Conclusion

Batch APIs are a great way to handle large volumes of requests without overwhelming your system or wasting precious time. With the added bonus of GTWY.AI handling the logging and tracking of your requests and responses, you can save time, resources, and effort. So, if you’re ready to take your API game to the next level, try out GTWY.AI and experience the efficiency of Batch APIs today!

Feel free to explore how GTWY.AI can improve your Batch API workflows and make the entire process seamless and effortless. Happy batching!

Batch API
Apr 8, 2025


🔔 What Are Triggers in GTWY AI?

Triggers are event-based entry points that start a workflow in GTWY AI.

In simple terms:

A Trigger listens for something to happen — like a webhook call or a Shopify event — and when that happens, it activates your AI logic (called a Bridge).

For example:

  • An order is placed on Shopify → trigger fires → LLM summarizes the order.

  • A form is submitted → trigger fires → GTWY sends a response email using AI.

  • A webhook sends customer feedback → trigger fires → sentiment analysis happens using Claude or GPT.


❓ Why Do We Need Triggers?

Without triggers, your AI logic would just sit idle, waiting for manual inputs.

Triggers allow your AI logic to:

  • Respond automatically and instantly to real-world events.

  • Enable real-time AI automation.

  • Create seamless integration with third-party tools (Shopify, Airtable, Slack, etc.).

🧠 Analogy:

Think of a trigger like a doorbell:

  • Someone presses it (event occurs),

  • You hear the sound (workflow starts),

  • You respond (the LLM runs your prompt).


🔁 How to Integrate Triggers in GTWY AI?

✅ Step-by-step Guide:

1. Create or Open a Bridge

  • Go to your GTWY dashboard.

  • Click on “Bridges” → Create a new one (name it anything like "Order Summary").

2. Select Trigger Type

  • Choose “Triggers” as the mode (not API, Chatbot, or Batch API).

  • This means your prompt will be fired by some external event.

3. Click “+ Connect Trigger”

  • A side panel opens showing many integrations like:

    • Webhook

    • Shopify

    • Slack

    • Razorpay

    • Airtable

  • Select the one relevant to your use case.


4. Configure the Trigger

  • Example for Webhook:

    • GTWY gives you a unique URL.

    • You can send a POST request to that URL with JSON data.

  • Example for Shopify:

    • Choose an event like order_created.

    • Authenticate your store.

    • GTWY listens to that event.

5. Write Your Prompt

  • Define what AI should do when the trigger fires:

    Generate a short summary of this customer order using natural language.
    

6. (Optional) Add Pre Functions

  • Clean or manipulate incoming data before it reaches your prompt.

7. Deploy & Test

  • Activate the bridge.

  • Send a test event or webhook.

  • Check if GTWY’s prompt is executed and returns output.


🧪 Example: Webhook Trigger Use Case

Imagine you want to summarize support tickets using GPT-4.

Here’s how it works:

  1. Connect a Webhook Trigger.

  2. GTWY gives you a URL:
    https://trigger.gtwy.ai/webhook/abc123

  3. Your external app sends:

    {
      "ticket": "The app keeps crashing when I open it."
    }
    
  4. Prompt:

    "Summarize this ticket in less than 20 words. Suggest a possible cause."

  5. GTWY executes it and gives:

    "User reports app crashing on startup — possible bug in initialization sequence."
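Putting steps 2 and 3 together, the external app's call can be sketched as follows. Only the request assembly is shown; the actual network call is commented out, and the helper function name is ours, not part of any SDK:

```javascript
// Assemble the POST request that fires a GTWY webhook trigger.
function buildTriggerRequest(url, payload) {
  return {
    url,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload), // the JSON the trigger receives
    },
  };
}

const { url, options } = buildTriggerRequest(
  "https://trigger.gtwy.ai/webhook/abc123",
  { ticket: "The app keeps crashing when I open it." }
);
// fetch(url, options); // fire the trigger (network call omitted here)
```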


✅ Summary

| Concept | Meaning |
| --- | --- |
| Trigger | An event listener that starts your GTWY AI workflow |
| Why needed | To automate real-time responses using AI, without manual input |
| How to use | Connect → Choose source (Shopify/Webhook/Slack) → Write prompt → Done! |



Triggers
May 17, 2025

Step-by-Step Guide to Dynamic Payloads in Chat Completion API


📌 Step 1: What is a Static Chatbot Payload?

In many chatbot systems, a fixed (static) payload is sent to the API, like:

json
{
  "org_id": "/org_id in integration guide",
  "chatbot_id": "//chatbot_id in integration guide",
  "user_id": "USER_123", // can be any arbitrary value
  "variables": { // not compulsory
    "name": "abc"
  }
}

This is simple, but not flexible. You can’t change the model, prompt, or tools without editing the code.


🔁 Step 2: Why Do We Need a Dynamic Payload?

Real-world apps need to:

  • Change model (gpt-4o, gpt-3.5, etc.)

  • Update prompts based on user language or logic

  • Use function calling (tools)

  • Adjust temperature / token limits

  • Keep some fields optional

Instead of hardcoding, we use a dynamic payload — a JSON that is constructed at runtime.


🛠️ Step 3: Building a Dynamic Payload Step-by-Step

Here’s a sample dynamic structure you can send:

json

{
  "service": "openai",
  "configuration": {
    "model": "gpt-4o",
    "type": "chat",
    "prompt": "Act as a bot JSON",
    "max_tokens": "default",
    "creativity_level": "default"
  },
  "apikey": "your-api-key"
}

✅ Fields you can customize:

| Field | Purpose |
| --- | --- |
| model | AI model to use (e.g., gpt-4o) |
| prompt | Bot’s behavior / instructions |
| tools | Functions you want to call |
| max_tokens | Max length of response |
| creativity_level | Controls how strict or creative it is |


🧩 Step 4: Add Tools Dynamically (Function Calling)

If your chatbot needs to use tools, like BMI calculation, you can add them only when needed:

json
"tools": [
  {
    "type": "function",
    "id": "66aa1347f6048aaaf9e5d34b",
    "name": "scriSybL2C2K",
    "description": "BMI Res",
    "required": ["BMI"],
    "properties": {
      "BMI": {
        "type": "string"
      }
    }
  }
]

✅ You only add this section when tool-based replies are needed.


🧠 Step 5: Optional Fields = Clean JSON

You don’t have to send everything all the time.

Examples of optional fields:

  • system_prompt_version_id

  • log_probability

  • repetition_penalty

  • response_count

Only send them when required. This keeps your payload light and dynamic.
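The runtime assembly described in this guide can be sketched as a small helper that always includes the required fields and adds optional ones only when they are defined. The helper name is ours, not part of GTWY's SDK:

```javascript
// Build the payload at runtime: required fields are always present;
// optional fields (tools, response_count, ...) are included only when
// they are actually defined, keeping the JSON light.
function buildDynamicPayload(required, optional = {}) {
  const payload = { ...required };
  for (const [key, value] of Object.entries(optional)) {
    if (value !== undefined && value !== null) payload[key] = value;
  }
  return payload;
}

const payload = buildDynamicPayload(
  {
    service: "openai",
    configuration: { model: "gpt-4o", type: "chat", prompt: "Act as a bot" },
  },
  { tools: undefined, response_count: 2 } // tools omitted, count kept
);
// payload has a "response_count" key but no "tools" key
```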


✅ Final Words

Using dynamic payloads gives you power to build smart, scalable, and flexible chatbots. Instead of changing code every time, you just change the JSON payload at runtime.

You now understand:

  • How static vs dynamic payloads work

  • How to build one step by step

  • How to use tools and optional fields

Dynamic Parameter Handling in Chat Completion API Integration
Jun 3, 2025

Instantly Guide Users with Smart Suggestions

The Starter Questions feature allows your chatbot to greet users with helpful example questions or tasks — making it easier for them to know what they can ask.


✨ What are Starter Questions?

Starter Questions are predefined inputs that appear at the start of the conversation when the toggle is enabled.
They are auto-generated based on the context you’ve written in the Prompt field.

These questions help users interact faster, especially when they’re unsure how to begin.


⚙️ How It Works

  • Starter Question Toggle:
    Simply switch it ON to activate this feature for your chatbot interface.

  • Prompt-Driven Suggestions:
    The Starter Questions are created intelligently by referencing your chatbot’s Prompt — i.e., the instruction that defines its purpose.


🧠 Example

Prompt:

“You are a LinkedIn assistant. Generate personalized 1-line comments based on a user’s profile and a given post.”

Resulting Starter Questions:

  • “Write a comment for a leadership post.”

  • “Give a friendly comment on a hiring update.”

  • “Suggest a supportive line for a career growth story.”

These are displayed right beneath the chat input box when users open the chat window.


🧩 Why Use Starter Questions?

  • 🚀 Boost onboarding — users don’t need to think from scratch

  • 🧭 Clarify capabilities — shows what your bot can do

  • 🤖 Encourage exploration — helps increase engagement and trust

  • 🧼 Reduce confusion — avoids blank-page syndrome


✅ Best Practices

  • Write a clear, goal-oriented Prompt so Starter Questions can be meaningful

  • Use action-based language in the prompt (e.g., "Write", "Suggest", "Summarize")

  • Enable this for bots that need guided input (e.g., comment generators, Q&A agents, form-fillers)


Want your users to know exactly how to begin? Enable Starter Questions on your GTWY.AI chatbot and let your Prompt do the talking.

Starter questions
Jun 9, 2025


🛠️ Step 1: Generate a JWT Token

Use the following JSON structure:

{
  "org_id": "YOUR_ORG_ID",
  "chatbot_id": "YOUR_CHATBOT_ID",
  "user_id": "YOUR_USER_ID",
  "variables": {
    // Add your variables here: "key": "value"
  }
}

Click on your access key.
Sign the payload using the access key to generate your embed token.

Example:

The embedded chatbot renders in a centered container with a header showing the title “Chatbot”, the subtitle “Smart Help, On Demand”, and fullscreen/close controls.

🔗 Step 2: Add Embed Code to Your Product

Paste this into your HTML where you want the chatbot to appear:

<script 
  id="chatbot-main-script"
  embedToken="ENTER_YOUR_EMBED_TOKEN"
  src="https://chatbot-embed.viasocket.com/chatbot-prod.js">
</script>

Once embedded, the chatbot appears directly in your page.


🧠 Advanced Usage (Send/Receive Data)

📥 Listen for messages:

window.addEventListener('message', (event) => {
  const receivedData = event.data;
});

📤 Send data into the chatbot:

window.SendDataToChatbot({ 
  bridgeName: '<slugName_of_bridge>',
  threadId: '<thread_id>',
  parentId: '<container_id>', // optional
  fullScreen: false,
  hideCloseButton: false,
  hideIcon: false,
  variables: {
    // any dynamic data
  }
});

📋 Parameter Reference

| Parameter | Type | Description |
| --- | --- | --- |
| bridgeName | string | Slug name of the agent |
| threadId | string | Unique ID for the conversation |
| parentId | string | DOM container ID to render inside (optional) |
| fullScreen | boolean | Open chatbot in full screen |
| hideCloseButton | boolean | Hide the close button |
| hideIcon | boolean | Hide the icon |
| variables | object | Additional data to pass to the chatbot |


⚙️ Extra Controls

  • Open chatbot manually:

    window.openChatbot();

  • Close chatbot manually:

    window.closeChatbot();

✨ Final Result

A smart, configurable chatbot — embedded directly into your UI — helping your users in real-time. Simple setup, powerful interaction.


Embed Chatbot🤖
Jun 9, 2025

Tone & Response Style: How GTWY Talks Back to You

Have you ever noticed how some tools just get you? Like they’re not just responding to what you ask—but how you ask it? That’s the power of Tone & Response Style in GTWY.

Let’s break it down.

What Is “Tone & Response Style”?

In GTWY, Tone & Response Style defines how the system replies to your prompts—not just the content, but the vibe.

Whether you're writing a fun blog, formal report, technical doc, or casual social media post, GTWY adapts its language and attitude to fit your needs.

Why It Matters

Imagine asking a question and getting a response that's:

  • Too robotic 🧊

  • Too casual 😅

  • Or just not "you" 🤷‍♂️

That’s frustrating.

Tone & Response Style solves this by letting you customize the output to match your purpose and audience.

Types of Tones You Can Use

GTWY supports multiple tone styles like:

  • Formal: For research, emails, documentation.

  • Informal: For blogs, casual content, and friendly chats.

  • Professional: For client communication and reports.

  • Persuasive: For marketing and storytelling.

  • Neutral: For balanced, objective responses.

You choose the tone—and GTWY shapes the response to fit it.

Final Thoughts

Tone & Response Style isn’t just a setting—it’s a personality switch for your AI assistant. Whether you're drafting, editing, or refining your message, this feature ensures it always sounds just right.

Go ahead—try writing a prompt and switch the tone. You’ll be amazed at how much the feel of a message can change the impact.

Tone & Response Style
Jun 9, 2025

Choosing the right AI model for your task can feel like picking the right tool in a crowded toolbox. Do you need speed? Creativity? Conciseness? Accuracy?

With Get Recommended Model, GTWY takes out the guesswork and recommends the best model for your prompt automatically.

Let’s see how it works—and why it’s a game-changer.

What Is “Get Recommended Model”?

"Get Recommended Model" is a smart assistant built into GTWY that automatically suggests the most suitable model for your specific prompt.

Instead of manually selecting between models like o4, o3, or o4-mini, you let GTWY analyze your prompt and recommend the one that fits best—based on what you’re trying to do.

Why You’ll Love It

Think of it like autopilot for model selection. It:

  • Speeds up your workflow – No more trial-and-error.

  • Improves result quality – Get responses tailored to your intent.

  • Optimizes performance – Uses lightweight or powerful models depending on the task.

Perfect for users who don’t want to worry about model names but care about output quality.

When to Use It

Use "Get Recommended Model" when:

  • You're unsure which model to pick.

  • You're doing a mix of creative, technical, or analytical tasks.

  • You just want the best answer—without tweaking backend settings.

How It Works (Under the Hood)

Behind the scenes, GTWY looks at your prompt’s:

  • Complexity

  • Purpose (e.g., creative, coding, summarizing, answering)

  • Context length and constraints

Then it matches that to the model best optimized for the job.

How to Use It

Using this feature is as easy as:

  1. Type your prompt.

  2. Click "Get Recommended Model."

  3. GTWY selects the best-fit model instantly.

  4. You run the prompt—or tweak the model if you prefer.

Bonus: You’re Always in Control

While GTWY makes a smart suggestion, you can override it anytime. Think of it like Google Maps suggesting a route—you can still take your favorite shortcut.

Final Thoughts

Whether you’re writing a poem, debugging code, or drafting a product plan, Get Recommended Model makes your experience smoother and smarter.

Don’t stress about picking the “right” model anymore—just write your prompt, and let GTWY do the rest.

Recommended Model
Jun 9, 2025

Connect Agent is a feature that lets you link your AI assistant directly to the specific agent you want to work with. Instead of juggling multiple tools or switching platforms, you can pick an agent tailored to your task and connect with it instantly.

This makes it easier to get focused, relevant help because your chosen agent understands exactly what you need and responds accordingly.

How to Use Connect Agent in 5 Easy Steps

Sometimes, you need a little extra help from a specialized AI assistant—whether it’s for coding, writing, or customer support. That’s where the Connect Agent feature comes in handy.

With Connect Agent, you can quickly link up with the right AI agent tailored to your needs, making it easier than ever to get expert assistance without any hassle.

Ready to connect your agent and get started? Just follow these simple steps:

Step 1: Click the Connect Agent

First, find and click the Connect Agent button. This opens up the list of agents available for you to connect with.

Step 2: Choose Your Agent

Pick the agent you want to use—the one that can best assist you with your task or question.

Step 3: Provide the Description

Your agent may ask for a description or some details. Just fill in the info as required so your agent understands what you need help with.

Step 4: Mention Your Agent in Your Prompt

When you write your prompt, make sure to mention your chosen agent so it knows to respond and assist you.

Step 5: You’re Ready to Go!

That’s it—your agent is now connected and ready to help. Start typing your questions or commands, and watch your agent work its magic.

Why Connect Agent?

This feature makes it super easy to get personalized assistance anytime, without switching apps or complicated setups.

Connect Agent
Jun 9, 2025

Version: Evolve Your Agent Without Losing the Original

Building an AI agent is a creative and iterative process. But what if you want to try something new with your agent without messing up the one that already works?

That’s exactly what the Version feature is for.

What Is the Version Feature?

The Version feature allows you to create a new version of an existing agent—without affecting the original. This means you can:

  • Build and test new ideas.

  • Make improvements.

  • Add different behaviors.

All without disturbing the current agent you’ve already created.

Why Use Versions?

Let’s say you’ve built an agent that handles customer queries. Now, you want to train a more advanced version for technical support—but you don’t want to risk breaking or changing the current one.

Instead of duplicating everything manually, just create a new version of the same agent. It acts like a clone you can update, customize, and experiment with—independently.

How It Works

  1. Open your existing agent.

  2. Click Create New Version.

  3. Modify this version as needed (change behavior, add features, update prompts).

  4. You now have two separate agents under one original idea.

Both can work side-by-side—and you choose which one to use.

When to Use Version

  • ✅ When testing improvements without affecting the live agent.

  • ✅ When you need two agents for similar but slightly different tasks.

  • ✅ When you're collaborating and want to keep your own iteration.

  • ✅ When building a backup or rollback option.

Final Thoughts

With the Version feature, you no longer have to choose between progress and stability. You can build boldly, experiment freely, and still keep your original agent safe and running.

Try it out: Create a new version today and take your agent to the next level—without ever starting from scratch.

Version
Jun 10, 2025

What Is a Pre-Built Tool?

AI-powered pre-built tools are designed to help individuals, businesses, and organizations quickly and effectively retrieve real-time information from the internet. These tools automatically gather and process up-to-the-minute data, providing users with the latest insights on current events, verifying facts, and answering time-sensitive questions. By leveraging the power of AI, these tools ensure fast, accurate, and efficient information retrieval, enabling better decision-making and improved productivity. From news tracking to fact-checking and real-time queries, pre-built AI tools are transforming industries by automating the way we access and use information.


Harnessing the Power of Pre-built AI Tools for Real-Time Information Retrieval

In today’s rapidly evolving world, staying informed in real-time is more critical than ever. Whether it's tracking the latest news, verifying facts, or answering urgent questions, having access to real-time data is invaluable. Traditional methods of information retrieval often rely on static data sources, which may become outdated or fail to address the most immediate needs.

This is where AI-powered pre-built tools come into play. These tools are designed to fetch real-time information from the internet, empowering businesses, researchers, and individuals to stay ahead of the curve. Let’s explore how these pre-built tools are transforming industries by enabling faster, more accurate access to up-to-date data.


1. Real-Time Data for Current Events Tracking

The speed at which information spreads today makes it challenging to keep up with fast-moving developments. From global politics to breaking news in tech and entertainment, being able to track current events as they unfold is essential.

AI pre-built tools designed for real-time data retrieval can scan news sources, social media platforms, and public databases to provide up-to-the-minute information on events as they happen. This functionality helps journalists, analysts, and decision-makers access the most current data, ensuring that they’re always working with the latest insights.

  • Example Use Case: A media company could deploy an AI tool to track global events like elections or natural disasters. The AI would fetch the latest reports, social media posts, and government updates to deliver a comprehensive view of the event in real time.


2. Fact-Checking with AI

In the era of misinformation, fact-checking has become an essential task. With the sheer volume of content shared online every minute, ensuring the credibility of information can be daunting. This is where AI tools come to the rescue.

Pre-built AI tools can automatically fetch real-time data from trusted sources to validate claims and verify facts. By scanning authoritative websites, news outlets, and databases, these tools can cross-reference statements and provide quick, accurate insights.

  • Example Use Case: Fact-checking organizations like PolitiFact or Snopes can integrate AI to cross-reference claims made by public figures or news outlets. The AI tool would instantly compare statements against multiple trustworthy sources and flag discrepancies.


3. Handling Time-Sensitive Questions

Time is of the essence in various industries, especially when dealing with time-sensitive questions. Whether you need to retrieve the latest stock prices, health alerts, or weather updates, having access to live data can make a world of difference.

AI tools equipped with real-time data fetching capabilities can instantly answer questions based on the latest information available on the internet. These tools ensure that users receive responses that are not only quick but also relevant and accurate.

  • Example Use Case: In the finance industry, traders can use AI tools to pull live stock prices and market updates to inform their investment decisions. Similarly, healthcare professionals can rely on AI to access the latest research on diseases or medical treatments to make informed decisions.


4. Benefits of Pre-built AI Tools for Real-Time Data

AI tools designed to fetch real-time information offer several advantages over traditional methods of data retrieval:

  • Speed: AI can process and analyze vast amounts of real-time data in seconds, providing immediate answers to pressing questions.

  • Accuracy: These tools pull information from reliable, up-to-date sources, ensuring that the data retrieved is accurate and trustworthy.

  • Efficiency: By automating the data-fetching process, these tools eliminate the need for manual searches, freeing up time for more critical tasks.

  • Scalability: AI can handle large volumes of queries and data, making it scalable for businesses, governments, and organizations that need real-time information for thousands of users.


5. Industries Transforming with Real-Time AI Tools

Pre-built AI tools are making waves across multiple industries, enabling businesses and individuals to make data-driven decisions in real time:

  • Media and Journalism: AI tools are enabling journalists to track breaking news and access real-time updates without manually searching through multiple sources.

  • Healthcare: Doctors and medical researchers can pull the latest clinical trials, research papers, and drug availability in real time to enhance patient care.

  • Finance: Traders and financial analysts are using AI to pull live market data, making quick decisions based on up-to-the-minute market conditions.

  • Retail: E-commerce businesses leverage AI to analyze customer behavior and market trends in real time, allowing them to personalize offers and stock inventory efficiently.


6. The Future of Real-Time Data Fetching

As technology advances, the potential for AI tools to enhance real-time information retrieval grows exponentially. In the future, we can expect even smarter AI systems capable of understanding context, filtering relevant data, and providing more personalized insights. Whether it’s enhancing automation, improving customer experiences, or boosting operational efficiency, the ability to access real-time data will continue to be a game-changer for businesses and individuals alike.


Conclusion:

AI-powered pre-built tools that fetch real-time data are a valuable asset in the digital age. They empower users to stay informed, make accurate decisions, and maintain efficiency—whether it’s tracking current events, verifying facts, or answering time-sensitive questions. These tools have proven to be indispensable across industries, providing a competitive edge and ensuring that professionals and organizations are always working with the most current and reliable information available.

By embracing these powerful tools, businesses and individuals can leverage the vast potential of real-time data to enhance their productivity, decision-making, and overall success in today’s fast-moving world.

Pre-Built Tool
Jun 11, 2025


What is an Action in the Context of Sending Data and Responses?

An action in the context of chatbots refers to the task of sending data to the frontend (client-side) or generating a response in a new message that is sent back to the user. Actions are essential for chatbots to provide meaningful, real-time interactions with users by fetching, processing, and displaying data, or by delivering responses based on user input.

Role of Actions in Chatbots

1. Sending Data to Frontend

In chatbot systems, actions are responsible for sending data to the frontend to update the user interface or provide information that is relevant to the user’s request.

  • Example: If a user asks for their order status, the action sends the order data to the frontend, which displays the order details to the user.

2. Sending a Response in a New Message

Actions are used to generate a new message that is sent to the user as a response. This response can include simple text, interactive elements like buttons, or even multimedia such as images or carousels.

  • Example: If a user requests the weather, the action generates a message with the weather report and sends it to the user, providing them with up-to-date information.

3. Sending a Response in JSON Format

In more complex chatbot systems, actions can also send responses in JSON format. This allows for structured data to be passed back to the frontend, which can then be rendered or processed as needed. JSON is particularly useful for handling rich media responses, buttons, or other interactive elements that require a structured format.

  • Example: A chatbot could send a JSON response containing both the weather data and interactive buttons for the user to get more details or make a follow-up action.
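A structured response like the one described above can be sketched in Python. The payload fields (`message`, `buttons`, `label`, `value`) are illustrative assumptions; the actual schema depends on your frontend contract.

```python
import json

def build_weather_response(city: str, temp_c: float) -> str:
    """Build a structured chatbot response as a JSON string.

    Field names here (message, buttons, label, value) are hypothetical;
    match them to whatever schema your frontend expects.
    """
    payload = {
        "message": f"The weather in {city} is {temp_c}°C.",
        "buttons": [
            {"label": "Hourly forecast", "value": "forecast_hourly"},
            {"label": "7-day forecast", "value": "forecast_week"},
        ],
    }
    return json.dumps(payload)
```

The frontend can then parse this JSON and render the message text alongside interactive buttons, each of which triggers a follow-up action when clicked.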


Why Are Actions Important in Chatbots?

  1. Real-Time Interaction: Actions allow chatbots to engage with users in real-time, providing immediate responses based on the context of the conversation.

  2. Dynamic Responses: Actions make it possible for chatbots to deliver dynamic, context-specific responses, ensuring that users receive accurate and relevant information at the moment.

  3. Automation: Actions enable chatbots to automatically fetch data from external systems, provide user-specific details, or carry out predefined tasks, enhancing efficiency without needing human intervention.


Steps to Connect the Action in a Chatbot

Step 1. Create an agent.

Step 2. Go to the chatbot configuration.

Step 3. Select Add New Action.

Step 4. Select an action (e.g., Send Data to Frontend).

Step 5. Give the description (e.g., create a button for each response).

Step 6. Give the data structure for the frontend (e.g., data<data>).


Conclusion:

In chatbot systems, actions play a pivotal role by allowing chatbots to send data to the frontend and generate real-time responses. These actions ensure that users receive the appropriate information, enhancing the interaction and user experience. By automating these processes, chatbots can provide timely, relevant, and personalized communication with minimal manual input.

Action
Jun 11, 2025

A step-by-step approach to implementing OAuth 2.0-style authentication in GTWY using the given route structure.

🗂️ Overview of Routes

| Route       | Method | Purpose                         |
|-------------|--------|---------------------------------|
| /auth_token | GET    | Generate or retrieve auth token |
| /           | POST   | Save client credentials         |
| /verify     | POST   | Verify token & issue access     |
| /refresh    | POST   | Refresh access token            |


🔐 Step-by-Step OAuth 2.0-Style Flow


1. Client fetches auth_token

Route

GET /auth_token

Controller Logic: CreateAuthToken

  • Generates a random auth_token (14-character identifier).

  • If an auth_token doesn’t already exist in the organization’s metadata, the new token is saved in the DB.

  • Returns:

    {
        "auth_token": "(14-character identifier)"
    }
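The token-generation step can be sketched as follows. This is a minimal sketch assuming an alphanumeric 14-character identifier; the exact character set GTWY uses is not specified.

```python
import secrets
import string

# Alphanumeric alphabet; the actual charset used by GTWY is an assumption.
_ALPHABET = string.ascii_letters + string.digits

def create_auth_token(length: int = 14) -> dict:
    """Generate a random auth_token, mirroring the /auth_token response shape."""
    token = "".join(secrets.choice(_ALPHABET) for _ in range(length))
    return {"auth_token": token}
```

Using `secrets` (rather than `random`) ensures the token is generated with a cryptographically secure source of randomness.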

2. Client saves client_id & redirection_url

Route

POST /

Payload

{
  "client_id": "CLIENT_ID",
  "redirection_url": "https://client-app.com/oauth/callback"
}

Returns

{
  "success": true,
  "message": "Auth token saved successfully"
}

3. Client verifies token and receives access credentials

Route

POST /verify

Payload

{
  "client_id": "CLIENT_ID",
  "redirection_url": "https://client-app.com/oauth/callback"
}

Returns

{
  "success": true,
  "message": "Auth token verified successfully",
  "access_token": "ACCESS_TOKEN",
  "refresh_token": "REFRESH_TOKEN"
}

4. Client refreshes access token using refresh token

Route

POST /refresh

Payload

{
  "refresh_token": "REFRESH_TOKEN"
}

Logic:

  • If valid:

    • Re-issues a new access_token.

  • If invalid:

    • Returns a 401 with message.

Returns

{
  "success": true,
  "message": "Access token refreshed successfully",
  "access_token": "NEW_ACCESS_TOKEN"
}
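The verify and refresh steps above can be sketched with an in-memory token store. The class and method names here are hypothetical, not GTWY's actual implementation; a real deployment would persist tokens and enforce expiry.

```python
import secrets

class TokenStore:
    """Minimal in-memory sketch of the /verify and /refresh logic."""

    def __init__(self):
        self._refresh_tokens = set()

    def verify(self, client_id, redirection_url):
        # Issue a new access/refresh token pair for a verified client.
        access_token = secrets.token_urlsafe(32)
        refresh_token = secrets.token_urlsafe(32)
        self._refresh_tokens.add(refresh_token)
        return {
            "success": True,
            "message": "Auth token verified successfully",
            "access_token": access_token,
            "refresh_token": refresh_token,
        }

    def refresh(self, refresh_token):
        # Re-issue an access_token if the refresh token is known; else 401.
        if refresh_token not in self._refresh_tokens:
            return 401, {"success": False, "message": "Invalid refresh token"}
        return 200, {
            "success": True,
            "message": "Access token refreshed successfully",
            "access_token": secrets.token_urlsafe(32),
        }
```

Note that `refresh` returns an HTTP-style status code alongside the body, so an invalid refresh token maps cleanly to the 401 response described above.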

📌 Notes

  • access_token is typically short-lived and used for authenticated API requests.

  • refresh_token is longer-lived and used to regenerate access_token.

  • You may optionally add token expiry and revocation logic for security.


✅ Summary

| Step | Action                          | Endpoint        |
|------|---------------------------------|-----------------|
| 1    | Client requests auth_token      | GET /auth_token |
| 2    | Client saves credentials        | POST /          |
| 3    | Client verifies and gets tokens | POST /verify    |
| 4    | Client refreshes access token   | POST /refresh   |


OAuth 2.0
Jun 20, 2025