Ollama

2025



PydanticAI Agents Intro

In previous posts, we explored function calling and how it enables models to interact with external tools. However, manually defining schemas and managing the request/response loop can get tedious as an application grows. Agent frameworks can help here.
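For a taste of the difference, here is a minimal sketch of a PydanticAI agent with one tool. The model string and the tool body are illustrative, not taken from the post, and on older pydantic-ai releases the result attribute is `.data` rather than `.output`:

```python
from pydantic_ai import Agent

# The model string is illustrative; any supported provider/model works.
agent = Agent("openai:gpt-4o-mini")

@agent.tool_plain
def cactify_name(name: str) -> str:
    """Return a 'cactified' version of the given name."""
    # Hypothetical implementation; PydanticAI derives the tool schema
    # from the signature and docstring, so no manual JSON schema is needed.
    return f"🌵 {name} 🌵"

result = agent.run_sync("Cactify the name Ada.")
print(result.output)  # .data on older pydantic-ai releases
```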


LLM Basics: Ollama Function Calling

In our previous post, we introduced function calling and learned how to do it with OpenAI’s LLMs. In this post, we’ll call the same cactify_name function from that post using Meta’s Llama 3.2 model, running locally via Ollama. The techniques shown here should also work with other Ollama models that support function calling.
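A rough sketch of the approach with the ollama Python library follows; the cactify_name body here is a hypothetical stand-in for the function defined in the earlier post:

```python
import ollama

def cactify_name(name: str) -> str:
    """Hypothetical stand-in for the cactify_name tool from the earlier post."""
    return f"🌵 {name} 🌵"

# Recent versions of the ollama Python library accept plain Python functions
# as tools and build the JSON schema from their signatures and docstrings.
response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Cactify the name Ada."}],
    tools=[cactify_name],
)

# If the model requested a tool call, run it locally and print the result.
for call in response.message.tool_calls or []:
    if call.function.name == "cactify_name":
        print(cactify_name(**call.function.arguments))
```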


Learning LLM Basics with Ollama

We have recently started learning about LLMs and how to integrate them into our development projects, and we are documenting what we learn in a series of blog posts. In the first installment, we explored the OpenAI API. In this one, we experiment with Ollama, an open-source application that lets you download, run, and interact with LLMs on your own hardware. Running models locally gives you complete control over your data, lets you use LLMs without an internet connection, and makes it easy to try out different models.
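As a preview, once a model has been pulled (for example with `ollama pull llama3.2` on the command line), chatting with it from Python takes only a few lines; the model name below is just an example:

```python
import ollama

# Send one chat message to a locally running model.
response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.message.content)
```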