LLM Basics: Ollama Function Calling

In our previous post, we introduced function calling and learned how to do it with OpenAI’s LLMs.
In this post, we’ll call the same cactify_name function from that post, this time using Meta’s
Llama 3.2 model running locally via Ollama. The techniques shown here should also work
with other Ollama models that support function calling.
If you haven’t done so already, install Ollama on your computer and download the model with ollama pull llama3.2.
As with the previous posts, the code for this walkthrough is available on GitHub.
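The model can also be pulled from Python with the ollama package, which we’ll use later in this post. Here’s a minimal sketch, assuming a recent version of the package and a running Ollama server:

import ollama

# Download the model through the local Ollama server (equivalent to `ollama pull llama3.2`)
ollama.pull("llama3.2")

# List the models now available locally (field names assume a recent ollama package version)
for m in ollama.list().models:
    print(m.model)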
Defining a function schema
We’ll define our function using JSON Schema format. The schema is similar to what we defined in the OpenAI post, with slight differences:
{
  "type": "function",
  "function": {
    "name": "cactify_name",
    "description": "Transforms a name into a fun, cactus-themed version.",
    "parameters": {
      "type": "object",
      "properties": {
        "name": {
          "type": "string",
          "description": "The name to be cactified."
        }
      },
      "required": ["name"]
    }
  }
}
Using curl
Ollama’s API is available at http://localhost:11434 by default. Let’s use curl to make a request to the /api/chat endpoint and see the raw JSON response from the model. We’ll provide our function schema in the tools parameter.
curl http://localhost:11434/api/chat -s -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "What would my name, Colin, be if it were cactus-ified?"
    }
  ],
  "stream": false,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "cactify_name",
        "description": "Transforms a name into a fun, cactus-themed version.",
        "parameters": {
          "type": "object",
          "properties": {
            "name": {
              "type": "string",
              "description": "The name to be cactified."
            }
          },
          "required": ["name"]
        }
      }
    }
  ]
}'
This returns a response that includes a message object with a tool call:
{
  "model": "llama3.2",
  "created_at": "2025-12-01T17:03:15.154284448Z",
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "function": {
          "name": "cactify_name",
          "arguments": {
            "name": "Colin"
          }
        }
      }
    ]
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 1704955272,
  "load_duration": 85707654,
  "prompt_eval_count": 184,
  "prompt_eval_duration": 80778398,
  "eval_count": 20,
  "eval_duration": 1537612247
}
We see that the model decided to make a function call. As with the OpenAI example, the model simply tells us which function it would like to call and with which arguments. It’s up to us to handle the function execution and pass back the result.
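To complete the loop at this raw-HTTP level, we would execute cactify_name ourselves and send its result back to /api/chat as a "tool" message. Here’s a rough sketch using Python’s requests library; the messages and tools variables are assumed to mirror the JSON from the curl request above, and cactify_name is the function from the previous post:

import requests

OLLAMA_URL = "http://localhost:11434/api/chat"
payload = {"model": "llama3.2", "messages": messages, "stream": False, "tools": tools}

# First request: the model replies with a tool call instead of an answer
reply = requests.post(OLLAMA_URL, json=payload).json()["message"]

if reply.get("tool_calls"):
    call = reply["tool_calls"][0]["function"]
    # Execute our own implementation with the arguments the model chose
    result = cactify_name(**call["arguments"])

    # Append the assistant message and the tool result, then ask for the final answer
    messages.append(reply)
    messages.append({"role": "tool", "content": result})
    final = requests.post(OLLAMA_URL, json={**payload, "messages": messages}).json()
    print(final["message"]["content"])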
Using the ollama Python package
We can make the same request using the ollama Python package (installable with pip install ollama):
import ollama

input_list = [{
    "role": "user",
    "content": "What would my name, Colin, be if it were cactus-ified?"
}]

cactify_name_schema = {
    "type": "function",
    "function": {
        "name": "cactify_name",
        "description": "Transforms a name into a fun, cactus-themed version.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {
                    "type": "string",
                    "description": "The name to be cactified."
                }
            },
            "required": ["name"],
        },
    },
}

tools = [cactify_name_schema]

response = ollama.chat(
    "llama3.2",
    messages=input_list,
    tools=tools,
)
If we inspect the response, we see that the model returned a message object indicating it wants to use the tool we defined:
ChatResponse(
    # ...
    message=Message(
        role='assistant',
        content='',
        thinking=None,
        images=None,
        tool_name=None,
        tool_calls=[ToolCall(function=Function(name='cactify_name', arguments={'name': 'Colin'}))]
    )
)
We can also pass the actual Python function in the tools argument, and the ollama package will generate the schema for us automatically. This makes it easy to use an existing function as a tool. For best results, provide type annotations for parameters and return values, and add a Google-style docstring; we’ve already done this for our cactify_name function.
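For reference, a version of cactify_name prepared this way might look roughly like the sketch below; the logic is illustrative only, chosen to match the outputs shown in this post, and the actual implementation lives in the repository linked above:

def cactify_name(name: str) -> str:
    """Transforms a name into a fun, cactus-themed version.

    Args:
        name: The name to be cactified.

    Returns:
        The cactified name.
    """
    # Illustrative logic only
    return f"{name}actus"

With that sketch in mind, we can pass the function directly: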
response = ollama.chat(
    "llama3.2",
    messages=input_list,
    tools=[cactify_name],
)
Executing the function
Because the model only constructs the call, we have to execute the logic ourselves.
function_call = response.message.tool_calls[0].function
# The arguments are already a Python dict, not JSON
result = cactify_name(**function_call.arguments)
print(f"Result: {result}")
Output:
Result: Colinactus
Then we feed the result back to the model to get a final response.
# Add the model's response to the input_list first, for conversation history
input_list.append(response.message)
input_list.append({"role": "tool", "content": result})
final_response = ollama.chat(
    "llama3.2",
    messages=input_list,
    tools=tools,
)
print(final_response.message.content)
Output:
The cactus-ification of your name is: Colincactus.
Automating the workflow
Let’s create a function that automates the whole process: prompting the model, detecting a function call, executing it, and sending the result back to the model.
ollama_messages = []


def prompt(user_input: str) -> str:
    """Prompt the model with the user input."""
    # Add the user input to the conversation history
    ollama_messages.append({"role": "user", "content": user_input})

    # Prompt the model with the user input
    response = ollama.chat(
        "llama3.2",
        messages=ollama_messages,
        tools=tools,
    )

    if response.message.tool_calls:
        # There’s a request from the model to use one or more tools
        ollama_messages.append(response.message)
        for tool_call in response.message.tool_calls:
            # Execute the function based on its name
            if tool_call.function.name == "cactify_name":
                result = cactify_name(**tool_call.function.arguments)
                # Add the function call output to the messages list
                ollama_messages.append(
                    {"role": "tool", "content": result, "tool_name": "cactify_name"}
                )
        # Now feed the function result back to the model
        final_response = ollama.chat("llama3.2", messages=ollama_messages, tools=tools)
        ollama_messages.append(final_response.message)
        return final_response.message.content

    return response.message.content
Now we can use this function to interact with the model and have it call our function as needed:
print(prompt("What would my name, Colin, be if it were cactus-ified?"))
# Output: Based on the tool call response, I've formed an answer to your original question: If Colin's name were cactus-ified, it would be Colinactus.
print(prompt("What about Simon?"))
# Output: Based on the tool call response, I've formed an answer to your original question: If Simon's name were cactus-ified, it would be Simonactus.
print(prompt("What names did I ask about?"))
# Output: Based on our conversation, you asked about cactifying the names Colin and Simon. The resulting names are Colinactus and Simonactus, respectively.
Conclusion
We’ve seen that function calling with a local Ollama model is similar to function calling with OpenAI models, even though we used different Python packages.
Next up, we’ll explore creating and running AI agents.