Bot AI Configuration
Everything you need to shape how your bot thinks, speaks, and acts. All settings live in the Config tab of your bot's cockpit at Studio → Bots → [your bot] → Config.
What is this?
The Config tab is the AI brain settings panel. It controls the bot's identity, the language model it uses, how creative or strict it is, which safety guardrails are active, and how it decides when and how to reply across different channels. Changes saved here take effect on the next message the bot processes — no restart required.
Prerequisites
- A bot created via Studio → New Bot
- At least one channel connected in Platforms if you want to test response behavior live
How to open configuration
- Go to Studio in the sidebar.
- Click your bot's name from the Bots list.
- Click the Config tab in the cockpit tab bar.
- Make changes to any section.
- Click Save Configuration at the bottom of the page.
Tip: Changes are held locally in your browser until you click Save. If you navigate away without saving, your edits are discarded. The save button is disabled while a save is in progress.
Section 1: AI Personality
This section defines who the bot is. The fields here are injected into the system prompt that the language model receives with every request.
Identity fields
| Field | What it does | Limit |
|---|---|---|
| Name | Display name shown to users | 2–64 characters, required |
| Description | System prompt — the bot's core personality and instructions | 500 chars in UI (10,000 chars via API) |
| Expertise | Area of focus, injected as additional context after the description | 500 characters |
| Tone Guidance | Supplementary tone instructions appended to the personality | 500 characters |
The Description field maps to personalityPrompt in the API. Write it as a direct-address system prompt that defines the bot's role, domain knowledge, and behavioral rules. For example: "You are a Solana DeFi expert. You help users understand token mechanics and execute trades. You are concise and precise. Never speculate on price."
A strong Description is the single highest-leverage configuration change you can make. The Expertise and Tone fields refine and supplement it — they do not replace it.
Writing effective personality prompts
Keep the following structure in mind:
- Role — what the bot is ("You are a trading assistant...")
- Domain — what it knows ("You specialize in Solana DeFi and meme tokens...")
- Rules — what it must and must not do ("Always ask for confirmation before trading. Never recommend leverage products.")
- Format — how it should respond ("Keep answers under 3 sentences unless asked for detail.")
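The structure above can be sketched as a simple assembly step. This is an illustrative guess at how Description, Expertise, and Tone Guidance might be combined into one system prompt — the actual injection order and separators are platform internals, and `buildSystemPrompt` is a hypothetical helper, not part of the API:

```typescript
// Hypothetical sketch: Description first, then Expertise and Tone appended
// as supplementary context (assumed order, not documented internals).
interface Identity {
  description: string;   // maps to personalityPrompt in the API
  expertise?: string;    // area of focus
  toneGuidance?: string; // supplementary tone instructions
}

function buildSystemPrompt(id: Identity): string {
  const parts = [id.description];
  if (id.expertise) parts.push(`Area of focus: ${id.expertise}`);
  if (id.toneGuidance) parts.push(`Tone: ${id.toneGuidance}`);
  return parts.join("\n\n");
}
```

The ordering matters conceptually: the Description carries the core instructions, and the other two fields refine it rather than replace it.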
Section 2: LLM Settings
Controls the language model and its generation parameters.
Model selection
| Model | Context Window | Best for |
|---|---|---|
| gpt-5-mini | 128K tokens | Default. Multi-turn conversations, complex strategy questions, long document analysis. |
| gpt-5-nano | 32K tokens | Simple command bots, high-frequency low-latency use cases, cost-sensitive deployments. |
Choose gpt-5-mini unless you have a specific reason to use nano. The larger context window means the bot can hold more conversation history and more knowledge context simultaneously.
Warning: Switching models mid-deployment changes response quality and latency. If you have users actively chatting, schedule model changes during low-traffic periods.
Temperature
The temperature slider runs from 0.0 to 1.0 and controls how deterministic the bot's responses are.
| Value | Behavior | Use when |
|---|---|---|
| 0.0 – 0.3 | Near-deterministic, consistent | Price alerts, data lookups, command-style bots |
| 0.4 – 0.6 | Balanced | General trading assistants, most use cases |
| 0.7 (default) | Moderate creativity | Conversational bots, community engagement |
| 0.8 – 1.0 | High variability | Creative writing, brainstorming — rarely appropriate for trading bots |
Practical example: A bot set to 0.2 and asked "what is my SOL balance?" will give a terse, reliable answer every time. The same bot at 0.9 might phrase the answer differently each time and occasionally volunteer unrequested context. For trading commands, stay below 0.5. For community chat bots, 0.6–0.7 works well.
Max Tokens
The maximum number of tokens the model generates in a single response. Valid range: 100–4,096.
- Lower values (100–500): fast, concise answers; appropriate for command-style bots
- Mid range (500–1,500): balanced answers with some explanation; good default for most bots
- Higher values (1,500–4,096): detailed analysis, long-form strategy explanations
The context window (128K or 32K) is the total token budget for input + output. Max Tokens caps only the output portion. Setting it too low truncates answers mid-sentence.
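The budget relationship is simple arithmetic. A minimal sketch (illustrative only; `maxInputBudget` is a hypothetical helper, not platform code):

```typescript
// The context window must cover input + output:
// contextWindow >= inputTokens + maxTokens
function maxInputBudget(contextWindow: number, maxTokens: number): number {
  return contextWindow - maxTokens;
}

// gpt-5-mini with Max Tokens of 1,500 leaves the rest of the 128K window
// for the system prompt, knowledge context, and conversation history.
const inputBudget = maxInputBudget(128_000, 1_500);
```

Raising Max Tokens therefore shrinks the room available for conversation history and knowledge context, which is worth keeping in mind at high Personalization Levels or Context Depths.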
Context Window display
Below the Max Tokens field, the interface shows the model's context window size. A tooltip explains that this is the maximum total tokens (input + output) the model can process in a single request. This is informational — you cannot configure it.
Section 3: Response Mode
Response Mode is a high-level preset that tunes how closely the bot follows its personality prompt versus exercising judgment.
| Mode | Behavior | Best for |
|---|---|---|
| Strict | Follows instructions precisely, minimal creativity, lower effective temperature | Trading execution bots, compliance-sensitive contexts |
| Balanced (default) | Mix of precision and natural language flow | Most general-purpose bots |
| Creative | More expressive and exploratory, higher effective temperature | Community bots, entertainment, brainstorming assistants |
Response Mode is not the same as Temperature — it adjusts several internal inference parameters at once, not just temperature. Strict mode produces shorter, more direct answers. Creative mode may produce longer, richer responses that deviate further from the exact prompt wording.
Section 4: Guardrails
Toggle switches that protect users and your portfolio from unsafe behavior. Guardrails are enforced at the execution layer — the bot cannot bypass them even if a user asks it to.
| Guardrail | Default | What it does |
|---|---|---|
| Require confirmation for trades | On | Bot asks for explicit approval before executing any trade |
| Block high-risk tokens | On | Rejects tokens flagged by RugCheck or honeypot detection |
| Enforce position size limits | On | Prevents individual trades from exceeding configured size thresholds |
| Allow experimental strategies | Off | Enables community-submitted or untested strategy templates |
| Enable natural language trading | On | Allows trade commands through conversational messages like "buy 0.5 SOL of BONK" |
Warning: Disabling "Require confirmation for trades" combined with "Enable natural language trading" means any user who can message your bot can potentially trigger trades. Only do this in fully trusted environments with strong access controls in Platforms.
Policy mode and fees
Each bot has a policyMode that determines its swap fee rate:
| policyMode | Swap fee | When to use |
|---|---|---|
| standard | 1% per swap | Default for all bots |
| enterprise | 0.5% per swap | High-volume trading use cases |
The policy mode is set at the account level, not in the Config UI — contact support if you need the enterprise rate.
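The fee arithmetic follows directly from the table above. A quick illustration (rates from the table; `swapFee` is a hypothetical helper, not platform code):

```typescript
// Swap fee rates per policyMode, as listed in the table above.
const FEE_RATES: Record<"standard" | "enterprise", number> = {
  standard: 0.01,    // 1% per swap
  enterprise: 0.005, // 0.5% per swap
};

function swapFee(amountSol: number, mode: "standard" | "enterprise"): number {
  return amountSol * FEE_RATES[mode];
}

// A 10 SOL swap costs 0.1 SOL in fees on standard, 0.05 SOL on enterprise.
```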
Section 5: Response Settings
Fine-grained controls for when and how the bot replies across different channel types.
DM Trigger and Group Trigger
Each channel context (direct messages vs. group chats) has its own trigger policy:
| Trigger | Behavior |
|---|---|
always | Replies to every message |
mention | Only replies when the bot is mentioned by name or @handle |
command | Only replies when the message starts with a command prefix |
question | Only replies to messages that end with a question mark or are phrased as questions |
keyword | Only replies when the message contains a configured keyword |
never | Bot never replies in this context |
The most common setup is DM Trigger: always (users expect direct replies in DMs) and Group Trigger: mention (reduces noise in busy group chats).
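The trigger policies above amount to a simple per-message decision. This is a hedged sketch of that logic — field and function names are illustrative, not the platform's internals, and the real `question` detector presumably handles more than a trailing question mark:

```typescript
type Trigger = "always" | "mention" | "command" | "question" | "keyword" | "never";

interface Incoming {
  text: string;
  mentionsBot: boolean;   // bot named or @handle used
  commandPrefix?: string; // e.g. "/" or "!"
  keywords?: string[];    // configured keywords
}

function shouldReply(trigger: Trigger, msg: Incoming): boolean {
  switch (trigger) {
    case "always":  return true;
    case "never":   return false;
    case "mention": return msg.mentionsBot;
    case "command": return msg.commandPrefix !== undefined &&
                           msg.text.startsWith(msg.commandPrefix);
    // Simplified: the real policy also covers messages phrased as questions.
    case "question": return msg.text.trimEnd().endsWith("?");
    case "keyword": return (msg.keywords ?? []).some(k =>
                           msg.text.toLowerCase().includes(k.toLowerCase()));
  }
}
```

Note that DMs and group chats each run this decision against their own configured trigger, which is why `always`/`mention` is the common split.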
Personalization Level
Controls how much the bot adapts to individual user history:
| Level | Behavior |
|---|---|
| None | Uniform responses for all users |
| Low | Minimal adaptation — remembers a user's preferred tokens |
| Medium (default) | Adapts tone and depth to user conversation history |
| High | Full personalization — uses conversation patterns to tailor language, examples, and depth |
Higher personalization uses more context tokens per request. If you are constrained on Max Tokens, keep this at Low or None.
Context Depth
How many previous messages are included in the prompt when generating a reply. Valid range: 1–50.
A depth of 10 (the default) means the bot remembers the last 10 exchanges. Increase this for complex multi-turn conversations like strategy discussions. Decrease it for high-frequency simple command bots where context is not useful and adds latency.
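Conceptually, Context Depth is a tail slice over the message history. A minimal sketch (illustrative; not the platform's implementation):

```typescript
// Keep only the most recent `depth` messages for the prompt (depth: 1–50).
function applyContextDepth<T>(history: T[], depth: number): T[] {
  return history.slice(-depth);
}
```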
Rate Limit
Maximum bot replies per minute across all users combined. Set to 0 for unlimited. Use this to control costs on high-traffic bots or to prevent spam abuse in public Telegram groups.
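One way to picture the limit is a sliding one-minute window over recent reply timestamps. This is an assumption about the mechanism, sketched for intuition only:

```typescript
// Allow a reply if fewer than `limitPerMinute` replies were sent in the
// last 60 seconds across all users. 0 means unlimited.
function allowReply(sentAtMs: number[], nowMs: number, limitPerMinute: number): boolean {
  if (limitPerMinute === 0) return true; // 0 = unlimited
  const recent = sentAtMs.filter(t => nowMs - t < 60_000);
  return recent.length < limitPerMinute;
}
```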
Conversation Timeout
Minutes of inactivity before a user's conversation context is reset. Set to 0 for no timeout. After a timeout, the next message from that user starts a fresh conversation — previous messages are no longer in context. A value of 30–60 minutes is appropriate for most trading bots.
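The timeout check reduces to comparing the inactivity gap against the configured limit. A minimal sketch (illustrative; not the platform's implementation):

```typescript
// Context is reset when the gap since the user's last message exceeds
// the configured timeout. 0 means the context never expires.
function contextExpired(lastMessageMs: number, nowMs: number, timeoutMinutes: number): boolean {
  if (timeoutMinutes === 0) return false; // 0 = no timeout
  return nowMs - lastMessageMs > timeoutMinutes * 60_000;
}
```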
Quick-Reply Templates
Quick-reply templates appear as tappable buttons in Telegram and Discord, giving users one-tap shortcuts to common actions. A maximum of 8 templates per bot.
Field limits
| Field | Limit |
|---|---|
| Label | 50 characters (the button text) |
| Message | 200 characters (sent when the button is tapped) |
| Icon | Optional emoji |
| Order | Integer — determines button display order |
Example JSON
Configure templates via the API or import them in the Config UI:
{
"quickReplyTemplates": [
{
"label": "Portfolio",
"message": "Show me my current portfolio",
"icon": "📊",
"order": 1
},
{
"label": "Buy SOL",
"message": "Buy 0.5 SOL worth of the top trending token",
"icon": "🚀",
"order": 2
},
{
"label": "PnL",
"message": "What is my PnL today?",
"icon": "📈",
"order": 3
}
]
}
Keep labels short and action-oriented. The best templates are ones a user would type anyway — you are just removing friction, not adding new commands.
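If you build template payloads programmatically, it can help to check the limits from the table before sending. A hedged sketch — `validateTemplates` is a hypothetical client-side helper, not part of the platform API:

```typescript
interface QuickReplyTemplate {
  label: string;   // button text, max 50 chars
  message: string; // sent on tap, max 200 chars
  icon?: string;   // optional emoji
  order: number;   // display order
}

// Returns a list of limit violations (empty = valid), using the limits
// documented above: max 8 templates, label <= 50, message <= 200.
function validateTemplates(templates: QuickReplyTemplate[]): string[] {
  const errors: string[] = [];
  if (templates.length > 8) errors.push("maximum of 8 templates per bot");
  templates.forEach((t, i) => {
    if (t.label.length > 50) errors.push(`template ${i}: label exceeds 50 chars`);
    if (t.message.length > 200) errors.push(`template ${i}: message exceeds 200 chars`);
  });
  return errors;
}
```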
Proactive Learning (Training tab)
The Training tab at /studio/bots/[botId]/training and its sub-route at /training/proactive enable the bot to improve itself from real conversation data.
How proactive learning works
- The system scans conversation history within the configured time window.
- It identifies patterns: questions the bot answered poorly, topics mentioned frequently, user vocabulary preferences.
- It generates improvement suggestions in four categories: topic gaps (new knowledge to add), preference adaptations (tone adjustments), question patterns (new Q&A pairs), and vocabulary updates (personality prompt additions).
- Each suggestion appears as a card you can approve, modify, or reject.
- Approved suggestions are automatically applied to the knowledge base or personality prompt.
Settings
| Setting | Default | Notes |
|---|---|---|
| Enabled | Off | Must be explicitly turned on |
| Aggressiveness | Balanced | cautious = high confidence required; balanced = moderate threshold; aggressive = lower threshold, more suggestions |
| Time window | 30 days | How far back conversation history is scanned |
| Min messages | 50 | Minimum message count before learning analysis runs |
| Auto-Apply | Off | When on, approved-category suggestions apply without your review |
Warning: Auto-Apply with Aggressive mode can make unexpected changes to your bot's knowledge base. Start with Cautious + Auto-Apply off until you understand the types of suggestions the system generates for your bot.
Insight categories
| Category | What it adds |
|---|---|
| Topic | New documents or Q&A pairs for frequently asked questions the bot could not answer well |
| Preference | Adjustments to tone and response style based on user feedback signals |
| Question | New Q&A pairs derived from recurring question patterns |
| Vocabulary | New phrases appended to the personality prompt to align with your users' language |
Saving configuration
After making changes in any section:
- Scroll to the bottom of the Config tab.
- Click Save Configuration.
- The button shows "Saving…" while the request is in flight.
- A success toast confirms the save. The local overrides are cleared and the server becomes the source of truth.
If the save fails, an error toast appears and your local changes are preserved. Retry saving — do not refresh the page, as that will discard your unsaved edits.
Common issues
My changes disappeared after I navigated to another tab. Config changes are held in local browser state until you click Save. Always save before switching tabs.
The bot is not following my personality prompt. Check the Response Mode. If set to Creative, the model has more freedom to deviate. Try Balanced or Strict. Also check that the Description field is not empty — that is the primary instruction source.
Temperature is set to 0.7 but responses feel random. Temperature interacts with Response Mode. Creative mode raises the effective temperature above what the slider shows. Switch to Balanced and retest.
I need more than 500 characters in the Description.
The UI limits Description to 500 characters. You can set a longer personalityPrompt (up to 10,000 characters) via the API using PATCH /api/bots/:botId.
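A sketch of preparing that request. The 10,000-character cap comes from the text above; the payload shape and any auth headers are assumptions, so check the API reference before relying on this:

```typescript
// Build the PATCH body for a longer personalityPrompt, enforcing the
// documented 10,000-character API limit client-side.
function buildPersonalityPatch(prompt: string): { personalityPrompt: string } {
  if (prompt.length > 10_000) {
    throw new Error("personalityPrompt exceeds the 10,000 character API limit");
  }
  return { personalityPrompt: prompt };
}

// Sent as, e.g. (botId and auth details are illustrative):
// await fetch(`/api/bots/${botId}`, {
//   method: "PATCH",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildPersonalityPatch(longPrompt)),
// });
```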