
Bot AI Configuration

Everything you need to shape how your bot thinks, speaks, and acts. All settings live in the Config tab of your bot's cockpit at Studio → Bots → [your bot] → Config.

What is this?

The Config tab is the AI brain settings panel. It controls the bot's identity, the language model it uses, how creative or strict it is, which safety guardrails are active, and how it decides when and how to reply across different channels. Changes saved here take effect on the next message the bot processes — no restart required.

Prerequisites

  • A bot created via Studio → New Bot
  • At least one channel connected in Platforms if you want to test response behavior live

How to open configuration

  1. Go to Studio in the sidebar.
  2. Click your bot's name from the Bots list.
  3. Click the Config tab in the cockpit tab bar.
  4. Make changes to any section.
  5. Click Save Configuration at the bottom of the page.

Tip: Changes are held locally in your browser until you click Save. If you navigate away without saving, your edits are discarded. The save button is disabled while a save is in progress.


Section 1: AI Personality

This section defines who the bot is. The fields here are injected into the system prompt that the language model receives with every request.

Identity fields

| Field | What it does | Limit |
| --- | --- | --- |
| Name | Display name shown to users | 2–64 characters, required |
| Description | System prompt — the bot's core personality and instructions | 500 chars in UI (10,000 chars via API) |
| Expertise | Area of focus, injected as additional context after the description | 500 characters |
| Tone Guidance | Supplementary tone instructions appended to the personality | 500 characters |

The Description field maps to personalityPrompt in the API. Write it as a direct-address system prompt that defines the bot's role, domain knowledge, and behavioral rules. For example: "You are a Solana DeFi expert. You help users understand token mechanics and execute trades. You are concise and precise. Never speculate on price."

A strong Description is the single highest-leverage configuration change you can make. The Expertise and Tone fields refine and supplement it — they do not replace it.

Writing effective personality prompts

Keep the following structure in mind:

  1. Role — what the bot is ("You are a trading assistant...")
  2. Domain — what it knows ("You specialize in Solana DeFi and meme tokens...")
  3. Rules — what it must and must not do ("Always ask for confirmation before trading. Never recommend leverage products.")
  4. Format — how it should respond ("Keep answers under 3 sentences unless asked for detail.")
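To make the injection concrete, here is a hypothetical sketch of how the identity fields might be assembled into the system prompt the model receives. The document only states that Expertise follows the Description and that Tone Guidance is appended; the field names, joining format, and exact wording below are assumptions for illustration.

```typescript
interface Identity {
  name: string;              // 2–64 characters, required
  personalityPrompt: string; // the Description field: the core system prompt
  expertise?: string;        // optional, up to 500 characters
  toneGuidance?: string;     // optional, up to 500 characters
}

// Assemble the fields in the documented order: description first,
// expertise as additional context, tone guidance appended last.
function buildSystemPrompt(id: Identity): string {
  const parts = [
    `You are ${id.name}.`,
    id.personalityPrompt,
    id.expertise ? `Your area of expertise: ${id.expertise}` : "",
    id.toneGuidance ? `Tone: ${id.toneGuidance}` : "",
  ];
  return parts.filter(Boolean).join("\n\n");
}
```

Because the Description is joined in whole, a well-structured Description dominates the final prompt, which is why it is the highest-leverage field.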

Section 2: LLM Settings

Controls the language model and its generation parameters.

Model selection

| Model | Context Window | Best for |
| --- | --- | --- |
| gpt-5-mini | 128K tokens | Default. Multi-turn conversations, complex strategy questions, long document analysis. |
| gpt-5-nano | 32K tokens | Simple command bots, high-frequency low-latency use cases, cost-sensitive deployments. |

Choose gpt-5-mini unless you have a specific reason to use nano. The larger context window means the bot can hold more conversation history and more knowledge context simultaneously.

Warning: Switching models mid-deployment changes response quality and latency. If you have users actively chatting, schedule model changes during low-traffic periods.

Temperature

The temperature slider runs from 0.0 to 1.0 and controls how deterministic the bot's responses are.

| Value | Behavior | Use when |
| --- | --- | --- |
| 0.0 – 0.3 | Near-deterministic, consistent | Price alerts, data lookups, command-style bots |
| 0.4 – 0.6 | Balanced | General trading assistants, most use cases |
| 0.7 (default) | Moderate creativity | Conversational bots, community engagement |
| 0.8 – 1.0 | High variability | Creative writing, brainstorming — rarely appropriate for trading bots |

Practical example: A bot set to 0.2 and asked "what is my SOL balance?" will give a terse, reliable answer every time. The same bot at 0.9 might phrase the answer differently each time and occasionally volunteer unrequested context. For trading commands, stay below 0.5. For community chat bots, 0.6–0.7 works well.

Max Tokens

The maximum number of tokens the model generates in a single response. Valid range: 100–4,096.

  • Lower values (100–500): fast, concise answers; appropriate for command-style bots
  • Mid range (500–1,500): balanced answers with some explanation; good default for most bots
  • Higher values (1,500–4,096): detailed analysis, long-form strategy explanations

The context window (128K or 32K) is the total token budget for input + output. Max Tokens caps only the output portion. Setting it too low truncates answers mid-sentence.
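The arithmetic above can be sketched as a small validation helper. This is illustrative only, not the platform's actual validation; the context-window sizes come from the model table, and the 100–4,096 range from the Max Tokens description.

```typescript
// Context window bounds input + output; Max Tokens caps output only.
const CONTEXT_WINDOWS: Record<string, number> = {
  "gpt-5-mini": 128_000,
  "gpt-5-nano": 32_000,
};

// Returns the token budget left for input (system prompt, knowledge
// context, conversation history) after reserving the output cap.
function maxInputTokens(model: string, maxTokens: number): number {
  if (maxTokens < 100 || maxTokens > 4096) {
    throw new RangeError("Max Tokens must be between 100 and 4,096");
  }
  return CONTEXT_WINDOWS[model] - maxTokens;
}
```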

Context Window display

Below the Max Tokens field, the interface shows the model's context window size. A tooltip explains that this is the maximum total tokens (input + output) the model can process in a single request. This is informational — you cannot configure it.


Section 3: Response Mode

Response Mode is a high-level preset that tunes how closely the bot follows its personality prompt versus exercising judgment.

| Mode | Behavior | Best for |
| --- | --- | --- |
| Strict | Follows instructions precisely, minimal creativity, lower effective temperature | Trading execution bots, compliance-sensitive contexts |
| Balanced (default) | Mix of precision and natural language flow | Most general-purpose bots |
| Creative | More expressive and exploratory, higher effective temperature | Community bots, entertainment, brainstorming assistants |

Response Mode is not the same as Temperature — it adjusts several internal inference parameters at once, not just temperature. Strict mode produces shorter, more direct answers. Creative mode may produce longer, richer responses that deviate further from the exact prompt wording.
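The actual internal parameters are not documented, but the interaction between mode and slider can be pictured with a toy preset table. The offsets and length biases below are invented purely to illustrate the mechanism.

```typescript
type ResponseMode = "strict" | "balanced" | "creative";

// Hypothetical presets: a mode nudges several knobs at once, so the
// effective temperature can differ from the slider value.
function effectiveParams(mode: ResponseMode, sliderTemp: number) {
  const presets = {
    strict:   { tempOffset: -0.2, lengthBias: "short" },
    balanced: { tempOffset:  0.0, lengthBias: "medium" },
    creative: { tempOffset: +0.2, lengthBias: "long" },
  } as const;
  const p = presets[mode];
  return {
    temperature: Math.min(1, Math.max(0, sliderTemp + p.tempOffset)),
    lengthBias: p.lengthBias,
  };
}
```

This is why a slider reading of 0.7 can still produce surprisingly variable output in Creative mode.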


Section 4: Guardrails

Toggle switches that protect users and your portfolio from unsafe behavior. Guardrails are enforced at the execution layer — the bot cannot bypass them even if a user asks it to.

| Guardrail | Default | What it does |
| --- | --- | --- |
| Require confirmation for trades | On | Bot asks for explicit approval before executing any trade |
| Block high-risk tokens | On | Rejects tokens flagged by RugCheck or honeypot detection |
| Enforce position size limits | On | Prevents individual trades from exceeding configured size thresholds |
| Allow experimental strategies | Off | Enables community-submitted or untested strategy templates |
| Enable natural language trading | On | Allows trade commands through conversational messages like "buy 0.5 SOL of BONK" |

Warning: Disabling "Require confirmation for trades" combined with "Enable natural language trading" means any user who can message your bot can potentially trigger trades. Only do this in fully trusted environments with strong access controls in Platforms.
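Because guardrails run at the execution layer, you can think of them as a check the trade must pass regardless of what the model says. The sketch below is a hypothetical illustration of that idea; all names and the error strings are invented.

```typescript
interface Guardrails {
  requireConfirmation: boolean;
  blockHighRiskTokens: boolean;
  enforcePositionLimits: boolean;
}

interface TradeRequest {
  token: string;
  sizeUsd: number;
  confirmed: boolean;          // user explicitly approved this trade
  tokenFlaggedRisky: boolean;  // e.g. RugCheck / honeypot signal
}

// Returns a rejection reason, or null if the trade may proceed.
// Runs outside the model, so the bot cannot be talked into skipping it.
function checkTrade(g: Guardrails, t: TradeRequest, maxSizeUsd: number): string | null {
  if (g.blockHighRiskTokens && t.tokenFlaggedRisky) return "blocked: high-risk token";
  if (g.enforcePositionLimits && t.sizeUsd > maxSizeUsd) return "blocked: position size limit";
  if (g.requireConfirmation && !t.confirmed) return "pending: confirmation required";
  return null;
}
```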

Policy mode and fees

Each bot has a policyMode that determines its swap fee rate:

| policyMode | Swap fee | When to use |
| --- | --- | --- |
| standard | 1% per swap | Default for all bots |
| enterprise | 0.5% per swap | High-volume trading use cases |

The policy mode is set at the account level, not in the Config UI — contact support if you need the enterprise rate.
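The fee arithmetic implied by the table is straightforward; a minimal sketch:

```typescript
// 1% for standard, 0.5% for enterprise (rates from the table above).
function swapFee(amountUsd: number, policyMode: "standard" | "enterprise"): number {
  const rate = policyMode === "enterprise" ? 0.005 : 0.01;
  return amountUsd * rate;
}
```

On a $1,000 swap, a standard bot pays $10 in fees where an enterprise bot pays $5, which is why the enterprise rate matters mainly at high volume.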


Section 5: Response Settings

Fine-grained controls for when and how the bot replies across different channel types.

DM Trigger and Group Trigger

Each channel context (direct messages vs. group chats) has its own trigger policy:

| Trigger | Behavior |
| --- | --- |
| always | Replies to every message |
| mention | Only replies when the bot is mentioned by name or @handle |
| command | Only replies when the message starts with a command prefix |
| question | Only replies to messages that end with a question mark or are phrased as questions |
| keyword | Only replies when the message contains a configured keyword |
| never | Bot never replies in this context |

The most common setup is DM Trigger: always (users expect direct replies in DMs) and Group Trigger: mention (reduces noise in busy group chats).
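The trigger table can be read as a decision function. The sketch below is a simplified assumption: in particular, the question heuristic and keyword matching are reduced to string checks, which is cruder than whatever detection the platform actually uses.

```typescript
type Trigger = "always" | "mention" | "command" | "question" | "keyword" | "never";

interface Incoming {
  text: string;
  mentionsBot: boolean; // bot named or @handle present in the message
}

function shouldReply(
  trigger: Trigger,
  msg: Incoming,
  opts: { commandPrefix?: string; keywords?: string[] } = {},
): boolean {
  switch (trigger) {
    case "always":   return true;
    case "never":    return false;
    case "mention":  return msg.mentionsBot;
    case "command":  return msg.text.startsWith(opts.commandPrefix ?? "/");
    case "question": return msg.text.trim().endsWith("?");
    case "keyword":  return (opts.keywords ?? []).some(
      (k) => msg.text.toLowerCase().includes(k.toLowerCase()),
    );
  }
}
```

With the common setup, `shouldReply` is called with `always` for DMs and `mention` for group messages.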

Personalization Level

Controls how much the bot adapts to individual user history:

| Level | Behavior |
| --- | --- |
| None | Uniform responses for all users |
| Low | Minimal adaptation — remembers a user's preferred tokens |
| Medium (default) | Adapts tone and depth to user conversation history |
| High | Full personalization — uses conversation patterns to tailor language, examples, and depth |

Higher personalization uses more context tokens per request. If you are constrained on Max Tokens, keep this at Low or None.

Context Depth

How many previous messages are included in the prompt when generating a reply. Valid range: 1–50.

A depth of 10 (the default) means the bot remembers the last 10 messages. Increase this for complex multi-turn conversations like strategy discussions. Decrease it for high-frequency simple command bots where extra context is not useful and adds latency.

Rate Limit

Maximum bot replies per minute across all users combined. Set to 0 for unlimited. Use this to control costs on high-traffic bots or to prevent spam abuse in public Telegram groups.
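One plausible way to implement a global per-minute cap is a fixed-window counter; whether the platform uses this or a sliding window is not documented, so treat the following as an illustrative sketch.

```typescript
// Global cap on replies per minute across all users; 0 means unlimited.
class ReplyLimiter {
  private count = 0;
  private windowStart = 0;

  constructor(private perMinute: number) {}

  allow(nowMs: number): boolean {
    if (this.perMinute === 0) return true; // 0 = unlimited
    if (nowMs - this.windowStart >= 60_000) {
      // New minute: reset the window.
      this.windowStart = nowMs;
      this.count = 0;
    }
    if (this.count >= this.perMinute) return false;
    this.count++;
    return true;
  }
}
```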

Conversation Timeout

Minutes of inactivity before a user's conversation context is reset. Set to 0 for no timeout. After a timeout, the next message from that user starts a fresh conversation — previous messages are no longer in context. A value of 30–60 minutes is appropriate for most trading bots.
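The timeout check described above amounts to comparing the gap since the user's last message against the configured minutes. A minimal sketch:

```typescript
// True when the conversation context should be reset; 0 disables the timeout.
function isExpired(lastMessageMs: number, nowMs: number, timeoutMinutes: number): boolean {
  if (timeoutMinutes === 0) return false;
  return nowMs - lastMessageMs > timeoutMinutes * 60_000;
}
```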


Quick-Reply Templates

Quick-reply templates appear as tappable buttons in Telegram and Discord, giving users one-tap shortcuts to common actions. A maximum of 8 templates per bot.

Field limits

| Field | Limit |
| --- | --- |
| Label | 50 characters (the button text) |
| Message | 200 characters (sent when the button is tapped) |
| Icon | Optional emoji |
| Order | Integer — determines button display order |

Example JSON

Configure templates via the API or import them in the Config UI:

{
  "quickReplyTemplates": [
    {
      "label": "Portfolio",
      "message": "Show me my current portfolio",
      "icon": "📊",
      "order": 1
    },
    {
      "label": "Buy SOL",
      "message": "Buy 0.5 SOL worth of the top trending token",
      "icon": "🚀",
      "order": 2
    },
    {
      "label": "PnL",
      "message": "What is my PnL today?",
      "icon": "📈",
      "order": 3
    }
  ]
}

Keep labels short and action-oriented. The best templates are ones a user would type anyway — you are just removing friction, not adding new commands.
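If you generate templates programmatically before importing them, it helps to check the documented limits up front. This validator is a sketch based on the limits table; the error wording is invented.

```typescript
interface QuickReplyTemplate {
  label: string;    // button text, max 50 chars
  message: string;  // sent on tap, max 200 chars
  icon?: string;    // optional emoji
  order: number;    // display order, integer
}

// Returns a list of problems; an empty list means the set is importable.
function validateTemplates(templates: QuickReplyTemplate[]): string[] {
  const errors: string[] = [];
  if (templates.length > 8) errors.push("at most 8 templates per bot");
  templates.forEach((t, i) => {
    if (t.label.length === 0 || t.label.length > 50) {
      errors.push(`template ${i}: label must be 1-50 characters`);
    }
    if (t.message.length === 0 || t.message.length > 200) {
      errors.push(`template ${i}: message must be 1-200 characters`);
    }
    if (!Number.isInteger(t.order)) {
      errors.push(`template ${i}: order must be an integer`);
    }
  });
  return errors;
}
```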


Proactive Learning (Training tab)

The Training tab at /studio/bots/[botId]/training and its sub-route at /training/proactive enable the bot to improve itself from real conversation data.

How proactive learning works

  1. The system scans conversation history within the configured time window.
  2. It identifies patterns: questions the bot answered poorly, topics mentioned frequently, user vocabulary preferences.
  3. It generates improvement suggestions in four categories: topic gaps (new knowledge to add), preference adaptations (tone adjustments), question patterns (new Q&A pairs), and vocabulary updates (personality prompt additions).
  4. Each suggestion appears as a card you can approve, modify, or reject.
  5. Approved suggestions are automatically applied to the knowledge base or personality prompt.

Settings

| Setting | Default | Notes |
| --- | --- | --- |
| Enabled | Off | Must be explicitly turned on |
| Aggressiveness | Balanced | cautious = high confidence required; balanced = moderate threshold; aggressive = lower threshold, more suggestions |
| Time window | 30 days | How far back conversation history is scanned |
| Min messages | 50 | Minimum message count before learning analysis runs |
| Auto-Apply | Off | When on, approved-category suggestions apply without your review |

Warning: Auto-Apply with Aggressive mode can make unexpected changes to your bot's knowledge base. Start with Cautious + Auto-Apply off until you understand the types of suggestions the system generates for your bot.

Insight categories

| Category | What it adds |
| --- | --- |
| Topic | New documents or Q&A pairs for frequently asked questions the bot could not answer well |
| Preference | Adjustments to tone and response style based on user feedback signals |
| Question | New Q&A pairs derived from recurring question patterns |
| Vocabulary | New phrases appended to the personality prompt to align with your users' language |

Saving configuration

After making changes in any section:

  1. Scroll to the bottom of the Config tab.
  2. Click Save Configuration.
  3. The button shows "Saving…" while the request is in flight.
  4. A success toast confirms the save. The local overrides are cleared and the server becomes the source of truth.

If the save fails, an error toast appears and your local changes are preserved. Retry saving — do not refresh the page, as that will discard your unsaved edits.


Common issues

My changes disappeared after I navigated to another tab. Config changes are held in local browser state until you click Save. Always save before switching tabs.

The bot is not following my personality prompt. Check the Response Mode. If set to Creative, the model has more freedom to deviate. Try Balanced or Strict. Also check that the Description field is not empty — that is the primary instruction source.

Temperature is set to 0.7 but responses feel random. Temperature interacts with Response Mode. Creative mode raises the effective temperature above what the slider shows. Switch to Balanced and retest.

I need more than 500 characters in the Description. The UI limits Description to 500 characters. You can set a longer personalityPrompt (up to 10,000 characters) via the API using PATCH /api/bots/:botId.
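The endpoint `PATCH /api/bots/:botId` and the `personalityPrompt` field come from this page; everything else in the sketch below (length check, fetch call shape, headers) is an assumption about how a client might call it.

```typescript
// Build the PATCH body, enforcing the documented 10,000-character API limit.
function buildPersonalityPatch(prompt: string): { personalityPrompt: string } {
  if (prompt.length > 10_000) {
    throw new RangeError("personalityPrompt is limited to 10,000 characters");
  }
  return { personalityPrompt: prompt };
}

// Hypothetical usage (not executed here; auth headers omitted):
// await fetch(`/api/bots/${botId}`, {
//   method: "PATCH",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildPersonalityPatch(longPrompt)),
// });
```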
