LLM Model Node
Overview
The LLM Model Node integrates Large Language Models into your flow, enabling natural language understanding and generation. It supports multiple model providers, temperature control, and a fallback mechanism for reliability.
Usage cost: 1 credit
Configuration
Settings
Model Selection
Primary Model* (required): The main LLM model to use
Fallback Model: Optional backup model used automatically if the primary model fails
Temperature (0-1): Controls response randomness and creativity
Lower values (closer to 0): More focused, deterministic responses
Higher values (closer to 1): More creative, varied responses
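To see what temperature changes in practice, here is a minimal sketch. The node handles the provider call for you; this example uses the OpenAI Python SDK as a stand-in, and the model name is a placeholder.

```python
# Illustrative only: the node manages the provider call for you; this sketch
# uses the OpenAI Python SDK as a stand-in to show the effect of temperature.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "Suggest a name for a bakery."

for temperature in (0.1, 0.9):
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(temperature, completion.choices[0].message.content)

# Low temperature tends to repeat the same safe answer across runs;
# high temperature produces more varied, creative suggestions.
```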
Prompts
System Prompt: Instructions/context for the model's behavior
User Prompt: The main input to be processed
Past Message History: Optional chat history for context
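The node assembles these three inputs into a chat-style request. A minimal sketch of how they typically map onto a message list, assuming the common role/content format used by most chat APIs:

```python
# How the three prompt inputs typically map onto a chat-style message list.
# The role/content format is an assumption based on common provider APIs.
system_prompt = "You are a concise support assistant for Acme Inc."

past_message_history = [
    {"role": "user", "content": "How do I reset my password?"},
    {"role": "assistant", "content": "Go to Settings > Security > Reset password."},
]

user_prompt = "And what if I no longer have access to my email?"

messages = [
    {"role": "system", "content": system_prompt},  # behavior/context first
    *past_message_history,                         # optional prior turns
    {"role": "user", "content": user_prompt},      # the input to process
]
```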
Output Ports
response (string): The model's generated response
Best Practices
Model Selection
Choose models based on your specific needs (cost, speed, capabilities)
Always configure a fallback model for critical flows
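Fallback behavior amounts to a retry on a second model. A minimal sketch, where call_model is a hypothetical helper standing in for the node's internal provider call:

```python
# A minimal sketch of primary/fallback behavior. `call_model` is a
# hypothetical helper standing in for the node's provider call.
def call_model(model: str, messages: list[dict]) -> str:
    raise NotImplementedError  # provider-specific call goes here

def generate(messages: list[dict], primary: str, fallback: str | None) -> str:
    try:
        return call_model(primary, messages)
    except Exception:
        if fallback is None:
            raise  # no fallback configured: surface the error
        return call_model(fallback, messages)  # retry once on the backup model
```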
Temperature Settings
Use lower temperatures (0.1-0.3) for:
Factual responses
Structured output
Consistent results
Use higher temperatures (0.6-0.9) for:
Creative writing
Brainstorming
Conversational responses
Prompt Engineering
Keep system prompts clear and specific
Use variables in prompts to make them dynamic (see the sketch after this list)
Include relevant context in the prompt
Structure prompts with clear input/output expectations
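Variables keep a single prompt reusable across runs. The flow's actual placeholder syntax may differ from what is shown here; this sketch uses Python's str.format placeholders purely for illustration:

```python
# Dynamic prompts via variables. The flow's actual placeholder syntax may
# differ; Python's str.format placeholders are used here for illustration.
PROMPT_TEMPLATE = (
    "Summarize the following {doc_type} in at most {max_words} words.\n"
    "Return plain text only.\n\n"
    "{document}"
)

user_prompt = PROMPT_TEMPLATE.format(
    doc_type="support ticket",
    max_words=50,
    document="Customer reports intermittent login failures since Tuesday...",
)
```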
Message History
Stay within the model's context window; message history counts toward the token limit
Trim or summarize long conversation histories before sending them
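A simple truncation strategy is to keep the newest messages that fit a token budget. This sketch uses tiktoken's cl100k_base encoding, which approximates but may not exactly match your provider's tokenizer:

```python
# Keep history inside a token budget by dropping the oldest turns first.
import tiktoken

def truncate_history(history: list[dict], max_tokens: int = 2000) -> list[dict]:
    enc = tiktoken.get_encoding("cl100k_base")
    kept: list[dict] = []
    total = 0
    for message in reversed(history):  # newest messages first
        tokens = len(enc.encode(message["content"]))
        if total + tokens > max_tokens:
            break  # budget exhausted: drop the remaining (older) messages
        kept.append(message)
        total += tokens
    return list(reversed(kept))  # restore chronological order
```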
Common Issues
High temperature settings may lead to inconsistent outputs
Missing or poorly formatted system prompts can result in unexpected responses
Token limits may be exceeded with long prompts or chat histories
Rate limiting may affect response times
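Rate-limit errors are usually transient, so retrying with exponential backoff often resolves them. A generic sketch; which exception actually signals rate limiting depends on your provider SDK:

```python
# A generic retry-with-backoff wrapper for rate-limited calls. The broad
# `Exception` catch is a placeholder for your provider's rate-limit error.
import random
import time

def with_backoff(fn, max_retries: int = 5):
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            delay = (2 ** attempt) + random.random()  # exponential + jitter
            time.sleep(delay)
```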