Testing and Debugging
Effective testing and debugging are crucial for developing robust flows and workflows in Waterflai. This guide will walk you through the tools and techniques available for ensuring your AI applications perform as expected.
Live Chat Testing
The Mini Chat feature in the Dream Builder Editor allows you to test your flow in real-time.
Using Mini Chat
Opening Chat: Click the "Chat" button in the top right corner of the Dream Builder.
Interacting with Your Flow: Type messages into the chat interface to test your flow's responses.
Viewing Outputs: Observe how your flow processes inputs and generates outputs.

Tips for Effective Testing
Test with a variety of inputs, including edge cases.
Use the chat history to track conversation flow.
Pay attention to how your flow handles unexpected inputs.
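If your flow is also reachable programmatically, the same edge-case checklist can be scripted. A minimal sketch, where `run_flow` is a hypothetical stand-in for however you invoke the flow (it is not a Waterflai API):

```python
# Hypothetical stand-in for a deployed flow; replace with a real call.
def run_flow(message: str) -> str:
    if not message.strip():
        return "Please enter a message."
    return f"Echo: {message}"

# Cover normal inputs plus edge cases: empty, whitespace-only,
# very long, and non-ASCII inputs.
test_inputs = [
    "What are your opening hours?",  # typical question
    "",                              # empty input
    "   ",                           # whitespace only
    "a" * 10_000,                    # very long input
    "¿Hablas español? 你好",          # non-ASCII input
]

for inp in test_inputs:
    out = run_flow(inp)
    assert out, f"flow returned empty output for input {inp!r}"
```

Keeping a list like this lets you re-run the same checks after every change to the flow.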
Execution Detail Panel
The Execution Detail Panel provides in-depth information about how data flows through your nodes during testing.
Accessing the Execution Detail Panel
After running a test, click the check/cross icon in the top-right corner of a node to view its execution details.

Understanding Execution Details
Input Data: See what data was passed into the node.
Output Data: View the results produced by the node.
Execution Time: Check how long each node took to process.
Errors: Identify any errors that occurred during execution.
Using Execution Details for Debugging
Trace the flow of data through your nodes to identify where issues might be occurring.
Compare expected vs. actual outputs at each step.
Look for bottlenecks in processing time.
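The compare-and-trace steps above can be automated if you copy execution details out of the panel. A sketch, assuming a hypothetical trace format with one record per node (the field names are illustrative, not Waterflai's actual export format):

```python
# Hypothetical execution trace: one record per node, roughly as the
# Execution Detail Panel presents it. Field names are illustrative.
trace = [
    {"node": "Start",     "input": "hi",     "output": "hi",     "time_ms": 2},
    {"node": "LLM Model", "input": "hi",     "output": "Hello!", "time_ms": 840},
    {"node": "End",       "input": "Hello!", "output": "Hello!", "time_ms": 1},
]

expected = {"Start": "hi", "LLM Model": "Hello!", "End": "Hello!"}

# Flag nodes whose actual output differs from what we expected.
mismatches = [
    r["node"] for r in trace if expected.get(r["node"]) != r["output"]
]
assert mismatches == []  # empty when every node matches

# Find the processing bottleneck: the node with the longest execution time.
slowest = max(trace, key=lambda r: r["time_ms"])
print(slowest["node"])
```

Even a small script like this makes it obvious which node first diverges from expectations and which node dominates the total latency.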
Configuration Popover
The Configuration Popover allows you to adjust node settings on the fly for testing different scenarios.
Accessing the Configuration Popover
Click on a node in your flow to open its Configuration Popover.
Testing Different Configurations
Modify node parameters and immediately test the changes in Mini Chat.
Experiment with different model settings, prompts, or knowledge bases.
Use variable references to test how data flows between nodes.
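When experimenting with model settings, it helps to sweep one parameter at a time and compare outputs side by side. A sketch with a hypothetical `run_flow` call that accepts a temperature parameter (the branching logic is a stand-in for real model behavior):

```python
# Hypothetical sweep over one node parameter (temperature). The flow
# call and its behavior here are illustrative stand-ins.
def run_flow(message: str, temperature: float) -> str:
    style = "creative" if temperature >= 0.7 else "precise"
    return f"[{style}] reply to: {message}"

# Test the same input across several settings and compare the results.
for temp in (0.0, 0.3, 0.7, 1.0):
    print(temp, run_flow("Draft a product tagline", temp))
```

Changing one parameter per run makes it clear which setting caused an observed difference in output.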
Common Debugging Scenarios
1. Unexpected Outputs
Check the prompts and instructions in your LLM Model and Agent nodes.
Verify that the correct knowledge bases are being accessed in Knowledge Retrieval nodes.
Ensure that data is being correctly passed between nodes using variable references.
2. Error Messages
Review the Execution Detail Panel for the node where the error occurred.
Check node configurations for missing required fields or incorrect data types.
Verify API keys and permissions for external services.
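The "missing required fields" check can be expressed as a quick validation pass over a node's settings. A sketch using a hypothetical configuration dict; the actual required fields depend on the node type:

```python
# Hypothetical required fields for an LLM-style node; real field
# names and requirements vary by node type.
REQUIRED = {"model", "prompt", "api_key"}

def missing_fields(config: dict) -> set:
    """Return the required fields that are absent from a node config."""
    return REQUIRED - config.keys()

cfg = {"model": "gpt-4", "prompt": "You are a helpful assistant."}
assert missing_fields(cfg) == {"api_key"}
```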
3. Performance Issues
Look for nodes with long execution times in the Execution Detail Panel.
Consider optimizing large text inputs using Splitter nodes.
Review your use of API calls and consider caching strategies where appropriate.
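One common caching strategy is to memoize identical external calls so repeated requests with the same input are served from memory. A minimal sketch, where `fetch_from_api` is a hypothetical stand-in for an external-service call:

```python
from functools import lru_cache

# Track how often the (hypothetical) external call actually runs.
calls = {"count": 0}

@lru_cache(maxsize=256)
def fetch_from_api(query: str) -> str:
    calls["count"] += 1  # only incremented on a cache miss
    return f"result for {query}"

fetch_from_api("weather in Paris")
fetch_from_api("weather in Paris")  # identical query: served from cache
assert calls["count"] == 1
```

Caching is only appropriate when identical inputs should produce identical outputs; avoid it for calls whose results change over time.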
4. Inconsistent Behavior
Test your flow multiple times with the same input to check for consistency.
Review any random elements in your flow (e.g., temperature settings in LLM nodes).
Check for race conditions in parallel executions.
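The repeated-run consistency check can be scripted. A sketch with a hypothetical flow call where a nonzero temperature introduces randomness (the behavior is illustrative, not how any particular model samples):

```python
import random

# Hypothetical flow call: temperature > 0 introduces random variation.
def run_flow(message: str, temperature: float = 0.0) -> str:
    if temperature > 0 and random.random() < temperature:
        return f"Variant answer to: {message}"
    return f"Stable answer to: {message}"

# With deterministic settings, repeated runs of the same input
# should collapse to a single distinct output.
outputs = {run_flow("Summarize our refund policy", temperature=0.0)
           for _ in range(5)}
assert len(outputs) == 1  # consistent across runs
```

If the set contains more than one output at temperature 0, the variation is coming from somewhere other than the sampling settings, which points toward a race condition or a stateful node.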
Best Practices for Testing and Debugging
Incremental Testing: Test each node or small group of nodes before adding complexity.
Use Descriptive Node Labels: Clear labels make it easier to understand the flow during debugging.
Comment Your Flow: Add comments to complex sections to aid in troubleshooting.
Version Control: Save versions of your flow as you make significant changes.
By leveraging these testing and debugging tools and techniques, you can ensure that your Waterflai flows are robust, efficient, and produce the expected results. Remember that testing is an iterative process, and regular debugging will help you refine and improve your AI applications over time.