Testing Mode
Test your AI agent’s responses and actions in real-time with OpenSesame’s Testing Mode.
Testing Mode in OpenSesame allows you to evaluate your AI agent’s responses and actions in real time. This feature lets you interact with your agent by sending prompts one at a time, helping you refine its performance and improve accuracy based on immediate feedback.
Key Features
- Real-Time Testing: Activate Testing Mode by clicking the test button on the upper-right side of the entry box. This mode lets you send prompts or actions to your agent one at a time and see each response in real time.
- Step-by-Step Evaluation: Testing Mode is designed to test actions individually, giving you a clear view of how your agent processes each prompt. This lets you make adjustments on the spot to improve accuracy or refine the agent’s behaviour.
- Memory: We’ve added memory so your agent can retain context across multiple prompts, remembering your earlier requests within a testing session (see the sketch after this list).
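The snippet below is a minimal conceptual sketch of what “retaining context” means in practice: each new prompt is answered with the previous prompt/response pairs available as context. The `TestSession` and `run_agent` names are hypothetical stand-ins for illustration only, not part of OpenSesame’s API.

```python
# Conceptual sketch only: how an agent can retain context across prompts
# during step-by-step testing. Names here are hypothetical, not OpenSesame's API.

def run_agent(prompt: str, history: list[tuple[str, str]]) -> str:
    """Stand-in for the agent call; a real agent would use the history as context."""
    remembered = history[-1][0] if history else "nothing yet"
    return f"Responding to '{prompt}' (last request I remember: {remembered})"

class TestSession:
    """Keeps prior prompt/response pairs so each new prompt is answered with context."""
    def __init__(self) -> None:
        self.history: list[tuple[str, str]] = []

    def send(self, prompt: str) -> str:
        response = run_agent(prompt, self.history)
        self.history.append((prompt, response))  # retain context for the next prompt
        return response

if __name__ == "__main__":
    session = TestSession()
    print(session.send("Summarize yesterday's sales report"))
    print(session.send("Now email that summary to the team"))  # still 'remembers' the first request
```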
In-Depth Analysis
Each prompt sent in Testing Mode allows you to analyze and tweak your agent’s performance. This step-by-step approach is particularly useful for fine-tuning responses to align with specific objectives or to eliminate unwanted behaviours.
Testing Mode provides the feedback loop needed to build more reliable and responsive agents, ensuring each interaction with your AI agent is accurate and contextually relevant.