Advanced Knowledge Base: AI Settings
Configure advanced AI settings in Tiledesk to optimize AI Agent accuracy, tone, and response quality.

Tiledesk's AI settings provide powerful tools for fine-tuning the behavior and performance of your AI agent. These settings include the maximum number of tokens, temperature, chunks, system context, and prompt. This tutorial will explain each of these settings and how they impact your Knowledge Base.
Access AI Settings
Once logged in, select the Knowledge Base section (brain icon) from the left-hand menu.
Choose to either create a new Knowledge Base or use the default one.
Click the gear icon to access and configure the AI settings.

AI Settings Explained
1. AI Models
You can choose among the following AI models from the drop-down menu: GPT-4.1, GPT-4.1 Mini, GPT-4.1 Nano, GPT-4o, and GPT-4o Mini. As the picture below shows, each model consumes a different number of tokens.

Differences Among OpenAI Models
GPT-4.1 Definition: The latest generation in the GPT-4 family, offering the highest level of language understanding and text generation. Best suited for complex scenarios where accuracy and context are critical. Token consumption: Higher cost per token, but delivers top performance.
GPT-4.1 Mini Definition: A lighter, faster variant of GPT-4.1. It provides solid language capabilities at a much lower cost, making it perfect for frequent interactions or less complex tasks. Token consumption: Significantly lower, focused on efficiency and affordability.
GPT-4.1 Nano Definition: The most lightweight model in the GPT-4.1 series. Designed for simple tasks, quick replies, or when running costs must be kept minimal. Token consumption: The lowest among all models, offering maximum efficiency with limited capabilities.
GPT-4o Definition: OpenAI's multimodal ("omni") variant of GPT-4, optimized for fast, rich interactions across text, images, and audio. Token Consumption: Higher than the GPT-4.1 Mini and Nano models, reflecting its broader capabilities.
GPT-4o Mini Definition: A smaller, more efficient variant of GPT-4o, designed for applications that need less computational power while still benefiting from the same optimizations. Token Consumption: Lower cost per token than GPT-4o, balancing performance and resource usage.
| Model | Description | Token cost |
| --- | --- | --- |
| GPT-4.1 | Latest generation of the GPT-4 family, with the highest accuracy and reasoning. | Higher cost, best performance |
| GPT-4.1 Mini | Lightweight and efficient GPT-4.1 variant, solid language skills at lower cost. | Much lower cost than GPT-4.1, great for frequent interactions |
| GPT-4.1 Nano | Ultra-light model for simple tasks and minimal costs. | Lowest cost, designed for maximum efficiency |
| GPT-4o | Specialized version of GPT-4 for targeted tasks. | Very high cost (second only to GPT-4 legacy), best for rich multimodal applications |
| GPT-4o Mini | Smaller, efficient variant of GPT-4o. | Much lower cost than GPT-4o, good balance between performance and efficiency |
2. Maximum Number of Tokens
Definition: Tokens are the pieces of words that the AI uses to process and generate responses. The maximum number of tokens setting controls the length of the responses.
Impact: A higher token limit allows for longer, more detailed responses. A lower limit restricts the length, making responses shorter and potentially more concise.
How to Configure:
Locate the Maximum Number of Tokens setting in the AI Settings.
Enter the desired number of tokens, up to a maximum of 9999 (e.g., 100, 200, 500).

3. Temperature
Definition: Temperature controls the randomness of the AI's responses. A lower temperature makes the responses more deterministic and focused, while a higher temperature makes them more creative and varied.
Impact: Adjusting the temperature helps balance between consistency and creativity in responses.
How to Configure:
Locate the Temperature setting.
Set the temperature value (e.g., 0.2 for more focused responses, 0.8 for more creative responses).
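Under the hood, temperature works by rescaling the model's next-token scores before sampling. This minimal sketch (a generic softmax-with-temperature, not Tiledesk's or OpenAI's actual code) shows why a low value yields focused, near-deterministic picks while a high value spreads probability across more options.

```python
import math
import random

def temperature_probs(logits, temperature):
    # Divide each score by the temperature, then apply softmax.
    # Low temperature sharpens the distribution toward the top score;
    # high temperature flattens it toward uniform.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature, seed=0):
    # Draw one token index according to the temperature-scaled probabilities.
    probs = temperature_probs(logits, temperature)
    rng = random.Random(seed)
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

With scores [2.0, 1.0, 0.5], a temperature of 0.1 gives the top option over 99% of the probability, while a temperature of 10 makes all three options nearly equally likely.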
4. Chunks
Definition: Chunks are portions of content that the AI uses to create responses. Breaking content into chunks helps the AI process information more efficiently.
Impact: Proper chunking ensures the AI can handle large documents or datasets without performance issues.
How to Configure:
Locate the Chunks setting.
Define the size of the chunks, from 1 to 10.
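Conceptually, chunking splits a long document into fixed-size pieces, optionally with some overlap so that context isn't cut off mid-thought at chunk boundaries. The sketch below illustrates the idea with character-based chunks; Tiledesk's own chunking strategy is internal and may differ.

```python
def chunk_text(text: str, chunk_size: int, overlap: int = 0):
    # Split text into fixed-size character chunks.
    # A non-zero overlap repeats the tail of each chunk at the start
    # of the next one, preserving context across boundaries.
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

For instance, chunking "abcdefghij" with size 4 yields ["abcd", "efgh", "ij"], while adding an overlap of 1 yields ["abcd", "defg", "ghij", "j"].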
5. System Context
Definition: System context provides the AI with background information or guidelines that influence its behavior and responses. It sets the overall tone and direction for the AI.
Impact: A well-defined system context helps the AI understand its role and the type of responses it should generate.
How to Configure:
Locate the System Context setting.
Enter the context information, such as "You are a helpful customer support assistant."
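In chat-style AI APIs, the system context typically travels as a "system" message that precedes the user's question, which is how it steers tone and behavior for the whole conversation. This is a generic illustration of that message structure, not Tiledesk's internal request format.

```python
def build_messages(system_context: str, user_question: str):
    # Chat-style APIs take a list of role-tagged messages.
    # The system message sets the assistant's role, tone, and guidelines;
    # the user message carries the actual question.
    return [
        {"role": "system", "content": system_context},
        {"role": "user", "content": user_question},
    ]

messages = build_messages(
    "You are a helpful customer support assistant.",
    "How do I reset my password?",
)
```

Because the system message comes first, its guidelines apply to every response the AI generates in that conversation.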
AI Agent Fine-tuning & Prompt
Now that you've set your AI preferences, you can go even further and start drafting your prompt within the Flow. Note that each AI Agent you create can have its own AI settings if you prefer.
Definition: The prompt is the initial instruction provided to the AI, guiding it on how to generate responses based on user input.
Impact: The prompt shapes the AI's initial understanding and response generation.
How to Configure:
Go to the Design Studio and locate the Prompt field within any AI action (e.g. Ask Knowledge Base, ChatGPT Task, AI Assistant).
Enter a clear and concise prompt, based on what you'd like to achieve with a specific AI agent, as below.
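When a Knowledge Base action answers a question, the prompt is typically combined with the retrieved chunks and the user's question into a single grounded request. The helper below is a hypothetical sketch of that assembly pattern (the function name and layout are illustrative, not Tiledesk's actual template).

```python
def build_kb_prompt(instructions: str, chunks, question: str) -> str:
    # Hypothetical retrieval-augmented prompt assembly:
    # prompt instructions first, then the numbered retrieved chunks,
    # then the user's question.
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return f"{instructions}\n\nContext:\n{context}\n\nQuestion: {question}"

prompt = build_kb_prompt(
    "Answer using only the context below.",
    ["Tiledesk lets you configure AI settings per Knowledge Base."],
    "Where are AI settings configured?",
)
```

Keeping the instructions short and explicit, as in the steps above, makes it easier for the model to stay grounded in the retrieved content.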

Configuring AI Settings - Recap
Access AI Settings: Click the gear icon for the AI Settings under the Knowledge Bases tab.
Adjust Maximum Number of Tokens: Set the desired token limit based on the complexity and length of responses you need.
Set Temperature: Choose a temperature value that balances creativity and consistency.
Define Chunks: Determine the appropriate chunk size to optimize performance.
Input System Context: Provide relevant background information or guidelines to shape the AI's behavior.
Bear in mind that, whilst these are the overall AI settings, you can still configure each AI agent's settings within the Design Studio according to your specific needs.
Best Practices
Experiment with Settings: Adjust the settings incrementally and test the responses to find the optimal configuration for your use case.
Monitor Performance: Regularly review the AI's performance and make adjustments as needed to maintain the quality of responses.
Keep Context Relevant: Ensure that the system context and prompt are relevant to the specific Knowledge Base and user interactions.
Conclusion
Configuring the AI settings for your Knowledge Base on Tiledesk allows you to fine-tune the AI agent's behavior and response quality. By understanding and adjusting the maximum number of tokens, temperature, chunks, system context, and prompt, you can enhance the effectiveness and efficiency of your AI Agent.
Have any feedback for us? Send it to [email protected]