Testing and Debugging
Effective testing and debugging are crucial for developing robust flows and workflows in Waterflai. This guide will walk you through the tools and techniques available for ensuring your AI applications perform as expected.
The Mini Chat feature in the Dream Builder Editor allows you to test your flow in real time.
Opening Chat: Click the "Chat" button in the top right corner of the Dream Builder.
Interacting with Your Flow: Type messages into the chat interface to test your flow's responses.
Viewing Outputs: Observe how your flow processes inputs and generates outputs.
Test with a variety of inputs, including edge cases.
Use the chat history to track conversation flow.
Pay attention to how your flow handles unexpected inputs.
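The testing tips above can be turned into a repeatable checklist. Here is a minimal sketch of an edge-case harness; `run_flow` is a hypothetical stand-in for your flow, not a Waterflai API:

```python
# Hypothetical edge-case harness; run_flow stands in for your flow,
# not a real Waterflai API.
def run_flow(message: str) -> str:
    """Placeholder flow: echoes a trimmed message or returns a fallback reply."""
    text = message.strip()
    return f"You said: {text}" if text else "Sorry, I didn't catch that."

# Cover normal inputs plus edge cases: empty, whitespace-only, very long,
# non-ASCII, and markup-like strings.
edge_cases = ["Hello", "", "   ", "a" * 10_000, "héllo wörld", "<b>bold</b>"]

for case in edge_cases:
    reply = run_flow(case)
    # Every input, however odd, should yield a non-empty string reply.
    assert isinstance(reply, str) and reply, f"Empty reply for input: {case!r}"
```

Running the same list after each change to your flow makes regressions easy to spot.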
The Execution Detail Panel provides in-depth information about how data flows through your nodes during testing.
After running a test, click the check/cross icon at the top right of a node in your flow to view its execution details.
Input Data: See what data was passed into the node.
Output Data: View the results produced by the node.
Execution Time: Check how long each node took to process.
Errors: Identify any errors that occurred during execution.
Trace the flow of data through your nodes to identify where issues might be occurring.
Compare expected vs. actual outputs at each step.
Look for bottlenecks in processing time.
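The checks above can be sketched in code. Assume you copy each node's details out of the panel into simple records; the field names here are illustrative, not the panel's exact schema:

```python
# Illustrative node records; field names are assumptions, not the
# Execution Detail Panel's exact schema.
nodes = [
    {"name": "input",     "output": "What is RAG?", "ms": 2},
    {"name": "retrieval", "output": "3 chunks",     "ms": 180},
    {"name": "llm",       "output": "RAG is ...",   "ms": 2400},
]

# Expected output at each step, written down before the test run.
expected = {"input": "What is RAG?", "retrieval": "3 chunks", "llm": "RAG is ..."}

# Compare expected vs. actual output at each step.
mismatches = [n["name"] for n in nodes if expected.get(n["name"]) != n["output"]]

# Flag the slowest node as the likely bottleneck.
bottleneck = max(nodes, key=lambda n: n["ms"])["name"]
print(mismatches, bottleneck)  # → [] llm
```

Here the LLM node dominates the total time, so that is where optimization effort should go first.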
The Configuration Popover allows you to adjust node settings on the fly for testing different scenarios.
Click on a node in your flow to open its Configuration Popover.
Modify node parameters and immediately test the changes in Mini Chat.
Experiment with different model settings, prompts, or knowledge bases.
Use variable references to test how data flows between nodes.
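Variable references behave like template substitution: a placeholder in one node's configuration is filled from an earlier node's output. A minimal sketch, assuming a `{{node.field}}` placeholder style (the exact Waterflai syntax may differ):

```python
import re

def resolve_refs(template: str, outputs: dict) -> str:
    """Replace {{node.field}} placeholders with prior node outputs."""
    def lookup(match: re.Match) -> str:
        node, field = match.group(1), match.group(2)
        # A KeyError here reveals a broken or misspelled reference.
        return str(outputs[node][field])
    return re.sub(r"\{\{(\w+)\.(\w+)\}\}", lookup, template)

outputs = {"retrieval": {"text": "Waterflai is a flow builder."}}
prompt = resolve_refs("Answer using: {{retrieval.text}}", outputs)
```

Thinking of references this way helps when debugging: if a downstream node sees a literal placeholder instead of data, the reference never resolved.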
Check the prompts and instructions in your LLM Model and Agent nodes.
Verify that the correct knowledge bases are being accessed in Knowledge Retrieval nodes.
Ensure that data is being correctly passed between nodes using variable references.
Review the Execution Detail Panel for the node where the error occurred.
Check node configurations for missing required fields or incorrect data types.
Verify API keys and permissions for external services.
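Credential problems are easier to catch with a quick preflight check before running the flow. The variable names below are examples only, not keys Waterflai requires:

```python
import os

# Example credential names; substitute the ones your flow's nodes use.
required = ["OPENAI_API_KEY", "PINECONE_API_KEY"]

missing = [name for name in required if not os.environ.get(name)]
if missing:
    print(f"Missing credentials: {', '.join(missing)}")
else:
    print("All credentials present.")
```

A check like this distinguishes "key not set" from "key set but rejected", which the service's error message alone often does not.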
Look for nodes with long execution times in the Execution Detail Panel.
Consider breaking up large text inputs with Splitter nodes.
Review your use of API calls and consider caching strategies where appropriate.
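Splitting large inputs, as a Splitter node does, amounts to chunking text with some overlap so context survives chunk boundaries. A rough sketch of the idea (the chunk size and overlap values are illustrative):

```python
def split_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks; each chunk repeats the last
    `overlap` characters of the previous one so context spans boundaries."""
    step = chunk_size - overlap
    return [text[i : i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = split_text("x" * 1200, chunk_size=500, overlap=50)
# 1200 chars with step 450 → chunks starting at offsets 0, 450, 900
```

Smaller chunks reduce per-call processing time and token usage, at the cost of more calls; the right balance depends on your model and retrieval setup.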
Test your flow multiple times with the same input to check for consistency.
Review any random elements in your flow (e.g., temperature settings in LLM nodes).
Check for race conditions in parallel executions.
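A consistency test boils down to running the same input several times and comparing the outputs. In the sketch below, `run_flow` is a deterministic stand-in for your flow; with a real LLM node, set temperature to 0 if you want repeated runs to be comparable:

```python
def run_flow(message: str) -> str:
    """Deterministic stand-in; a real flow with temperature > 0 may vary."""
    return message.upper()

# Run the same input several times and count distinct outputs.
runs = [run_flow("same input") for _ in range(5)]
consistent = len(set(runs)) == 1
print("consistent" if consistent else f"{len(set(runs))} distinct outputs")
```

If outputs differ run to run, check temperature settings and any nodes that call non-deterministic external services before suspecting a race condition.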
Incremental Testing: Test each node or small group of nodes before adding complexity.
Use Descriptive Node Labels: Clear labels make it easier to understand the flow during debugging.
Comment Your Flow: Add comments to complex sections to aid in troubleshooting.
Version Control: Save versions of your flow as you make significant changes.
By leveraging these testing and debugging tools and techniques, you can ensure that your Waterflai flows are robust, efficient, and produce the expected results. Remember that testing is an iterative process, and regular debugging will help you refine and improve your AI applications over time.