Introducing the Tool Calling Node: multi-tool use with automatic schemas, loop logic, and context tracking
David Vargas
Jul 16, 2025
Product Updates
Every function calling setup demands schema definition, input/output handling, and orchestration of repeated invocations until a valid termination condition is met.
While this level of control can be necessary in specific scenarios, the underlying pattern tends to repeat. Across teams and workflows, the same scaffolding is rebuilt, resulting in duplicated effort, slower iteration, and increased maintenance overhead.
While we still support manual function calling for full control, many customers have asked for an out-of-the-box solution to standardize tool use without the overhead.
Introducing the Tool Calling Node: a faster, cleaner way to handle function calling in Vellum Workflows.
You can now streamline function calls with automatic handling of OpenAPI schemas, loop logic, and function call output parsing.
🔗 Sign up for Vellum to try it out today. Keep reading to learn how it works.
Key capabilities
The Tool Calling Node provides several advantages over manual function calling implementation:
Automatic Schema Generation: No need to manually define OpenAPI schemas for your tools (see the sketch below)
Built-in Loop Logic: Automatically handles the iterative calling pattern until a text response is received
Output Parsing: Automatically parses function call outputs without manual intervention
Multiple Tool Support: Configure multiple tools within a single node
With the Tool Calling Node now generally available, we’ve handled the common orchestration logic around it in a first-class way. Your engineers no longer have to rebuild the same patterns, and non-technical teams can reuse pre-built tools out of the box.
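To make “automatic schema generation” concrete, here’s a minimal sketch of the general technique: deriving a JSON-Schema-style tool definition from a plain Python function’s signature and docstring. This illustrates the idea only; it is not Vellum’s actual implementation, and the helper name and output shape are our own assumptions.

```python
import inspect
from typing import get_type_hints

# Map Python annotations to JSON Schema types (illustrative subset).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def build_tool_schema(fn):
    """Derive a JSON-Schema-style tool definition from a function signature.

    Illustrative only -- Vellum performs this kind of inference for you
    when you attach a function to a Tool Calling Node.
    """
    hints = get_type_hints(fn)
    params = inspect.signature(fn).parameters
    properties = {
        name: {"type": PY_TO_JSON.get(hints.get(name, str), "string")}
        for name in params
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": [n for n, p in params.items() if p.default is p.empty],
        },
    }

def get_exchange_rate(base: str, quote: str) -> float:
    """Return the current exchange rate between two currencies."""
    ...  # implementation elided

print(build_tool_schema(get_exchange_rate))
```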
How it works
The Tool Calling Node follows this execution logic:
1. Initial Prompt: The configured prompt is sent to the selected model along with the available tools
2. Tool Decision: The model decides whether to call a tool or provide a text response
3. Tool Execution: If a tool is called, the node executes the appropriate tool type
4. Result Integration: The tool result is added to the chat history
5. Iteration: Steps 2-4 repeat until the model provides a final text response
6. Output: Both the complete chat history and the final text response are made available to downstream nodes
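For reference, this is roughly the loop the node now runs for you, sketched against a generic OpenAI-style chat completions client. The helper names, registry, and message handling are illustrative assumptions, not Vellum internals:

```python
import json

def run_tool_loop(client, model, messages, tools, tool_registry, max_iterations=10):
    """Generic tool-calling loop: call the model, execute any requested
    tools, feed results back, and stop once a plain text reply arrives.
    `client` is assumed to expose an OpenAI-style chat completions API.
    """
    for _ in range(max_iterations):
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tools
        )
        message = response.choices[0].message
        # Termination condition: the model answered in plain text.
        if not message.tool_calls:
            return message.content, messages
        messages.append(message)
        # Execute every tool call and append each result to the history.
        for call in message.tool_calls:
            fn = tool_registry[call.function.name]
            args = json.loads(call.function.arguments)
            result = fn(**args)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })
    raise RuntimeError("No final text response within max_iterations")
```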
Debugging the Tool Calling Node
One of this node’s outputs is its chat history: the accumulated list of messages the prompt managed during execution, including all user, assistant, function call, and function result messages.
While this output is most obviously useful for maintaining context across multi-turn conversations, it’s also highly effective for debugging the tool calling sequence as you build.
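To illustrate, a single tool-calling turn might leave behind a history shaped like this (the field names here are illustrative, not Vellum’s exact output schema):

```python
chat_history = [
    {"role": "user", "content": "What's 2,450 EUR in USD?"},
    # The model decides to call a tool instead of answering directly.
    {"role": "assistant", "function_call": {
        "name": "get_exchange_rate",
        "arguments": '{"base": "EUR", "quote": "USD"}',
    }},
    # The tool result is appended before the loop re-invokes the model.
    {"role": "function", "name": "get_exchange_rate", "content": "1.09"},
    # With the result in context, the model produces the final text reply.
    {"role": "assistant", "content": "2,450 EUR is about 2,670.50 USD."},
]
```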
In production, you can instead use our Tracing view to analyze all execution details of the Tool Calling Node.
Types of tools that can be invoked
The Tool Calling Node supports three types of tools, each designed to support different parts of your workflow logic. All tool types benefit from automatic schema generation and tight integration with the LLM's reasoning loop.
Custom code functions
Run custom Python or TypeScript code directly within your workflow. Vellum automatically infers the input/output schema from your function signature, so there’s no need to write OpenAPI specs or manual interface definitions. A sketch of what such a tool looks like follows the list below.
Use Cases:
Transforming or cleaning input/output data
Making external API calls from within the workflow
Performing mathematical or logical operations
Implementing lightweight business rules on the fly
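As promised above, here’s a hypothetical example of such a tool: an ordinary typed function whose name, signature, and docstring are all the model needs.

```python
def apply_volume_discount(subtotal: float, quantity: int) -> float:
    """Apply the standard volume discount to an order subtotal.

    Orders of 100+ units get 10% off; 500+ units get 20% off.
    Use this whenever the user asks for a final price on a bulk order.
    """
    if quantity >= 500:
        return round(subtotal * 0.80, 2)
    if quantity >= 100:
        return round(subtotal * 0.90, 2)
    return subtotal
```

The docstring doubles as the tool description the model sees, which is why the best practices below put so much weight on comprehensive docstrings.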
Inline workflows
Define and invoke nested workflows inside your main workflow. This allows you to break down complex logic into modular, reusable pieces without needing to deploy or version them separately (see the sketch after the list below).
Use Cases:
Structuring large workflows into manageable, testable units
Reusing patterns across branches of logic
Composing step-by-step agents from smaller functional blocks
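Conceptually, a nested workflow is just a named, testable composition of smaller steps that the model can invoke as a single tool. A toy sketch, with all function names hypothetical:

```python
def clean_text(text: str) -> str:
    """Sub-step: normalize whitespace and casing."""
    return " ".join(text.split()).lower()

def summarize(text: str, max_words: int = 12) -> str:
    """Sub-step: naive truncation standing in for an LLM summarization step."""
    words = text.split()
    return " ".join(words[:max_words]) + ("..." if len(words) > max_words else "")

def preprocess_document(text: str) -> str:
    """Nested workflow: a reusable, testable composition of smaller steps
    that a Tool Calling Node could invoke as one tool."""
    return summarize(clean_text(text))

print(preprocess_document("  Vellum   Workflows  make  agent  orchestration  fast to iterate on  "))
```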
Workflow deployments
Invoke fully deployed workflows as tools within your current flow. These are versioned and managed independently, making them ideal for production-grade reuse across teams or applications.
Use Cases:
Ensuring tool behavior stays consistent across versions with release control
Recommended usage
There are plenty of resources on how best to prompt for tool use, and how to optimize performance when a prompt has multiple tools available. Here, we want to highlight a more specific set of best practices to help you get started faster.
Keep individual tools focused on specific tasks
Provide clear, descriptive names for your tools
Include comprehensive docstrings or descriptions for better model understanding
Clearly describe when each tool should be used
Provide examples of appropriate tool usage
Include instructions for when to stop calling tools and provide a final response
Consider using Node Adornments like Try or Retry for robust error handling
Test your tools thoroughly before deploying to production
Monitor tool execution for unexpected behaviors
Set appropriate Max Prompt Iterations to balance functionality and performance
Consider the computational cost of each tool when designing your workflow
Use caching strategies where appropriate for expensive operations (see the retry-and-caching sketch after this list)
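To make the last few bullets concrete, here’s a generic sketch of adding retries and caching around an expensive tool in plain Python. Vellum’s Try and Retry adornments give you equivalent behavior at the node level; this is just the same idea expressed in code, with hypothetical function names.

```python
import time
from functools import lru_cache

def with_retries(fn, attempts=3, backoff_seconds=1.0):
    """Retry a flaky tool call with linear backoff before giving up."""
    def wrapper(*args, **kwargs):
        for attempt in range(1, attempts + 1):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == attempts:
                    raise
                time.sleep(backoff_seconds * attempt)
    return wrapper

@lru_cache(maxsize=256)  # cache repeated lookups so the loop doesn't re-pay the cost
def lookup_shipping_rate(country_code: str) -> float:
    """Hypothetical expensive tool, e.g. backed by an external API."""
    time.sleep(0.5)  # stand-in for network latency
    return {"US": 5.0, "DE": 9.5}.get(country_code, 15.0)

safe_lookup = with_retries(lookup_shipping_rate)
print(safe_lookup("US"))   # slow the first time
print(safe_lookup("US"))   # instant: served from the cache
```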
What’s next
We have a lot planned for this node. We know how important it is to debug the inner workings of an abstracted component, so we’re prioritizing more granular debugging support. You'll be able to inspect each intermediate step the tool takes under the hood to understand how it's working.
We're also planning to build a tool directory, starting with support for shared Vellum tools.
We’re excited to keep building, and can’t wait to see what you do with it.
A Full-Stack Founding Engineer at Vellum, David Vargas is an MIT graduate (2017) with experience at a Series C startup and as an independent open-source engineer. He built tools for thought through his company, SamePage, and now focuses on shaping the next era of AI-driven tools for thought at Vellum.